War Powers, War Lies: Part 3: Tonkin Spook


War Powers, War Lies: A Series

Part 3: Tonkin Spook

Published in the Maryland Daily Record March 25, 2005

          The night of August 4, 1964 was dark and drizzly over the Gulf of Tonkin, which lies between China and North Vietnam.  Two U.S. destroyers, the Turner Joy and the Maddox, were on patrol there that night.  These waters were not familiar to the U.S. sailors.  In particular, the radarmen aboard were totally unacquainted with a well-documented if never well-understood local meteorological condition known as Tonkin Spook.  This manifests itself in radar readings of craft that are not there.  These “ghosts” appear real and constant for brief periods, a minute or two, and then disappear, perhaps to reappear elsewhere a short while later. 

          The military mission that had brought these mariners to share the Gulf with apparitions that night has never been reasonably explained, but it was likely primarily a matter of creating a provocation.  Lyndon Johnson and his Administration had been looking, since at least June, for justification to expand the size of the U.S. military contingent in Vietnam and to adopt an explicitly offensive role toward North Vietnam.  This search for a rationale sprang from the stark realization that, public declarations of confidence notwithstanding, the South Vietnamese government and military were not so slowly collapsing, in response to the pressure exerted by the native insurrectionists, the Vietcong, by North Vietnamese soldiers being infiltrated into the South down the Ho Chi Minh Trail, and by the corruption, instability and ineptness of the South Vietnamese government itself.  Under these circumstances, a North Vietnamese attack on U.S. shipping could lend justification to the plans of Johnson and his advisors to take the war to the North and to Americanize the conflict in the South.

           The Maddox had actually been attacked by, and had beaten back, a squadron of three North Vietnamese PT boats two days earlier.  The Administration had chosen not to make an issue of that engagement, probably because it might have been hard to convince the world the attack was unprovoked.  An amphibious raid, nominally South Vietnamese but American in reality, at least to the extent of planning and supply, and possibly extending to the covert presence of Navy SEALs on the mission, had been carried out in North Vietnam earlier in the day, in a location far closer to the Maddox’s real course than the U.S. officially admitted.  Reasonably, though mistakenly, the North Vietnamese apparently took the two operations as coordinated, and responded accordingly.  Johnson’s war counselors would have understood that the rest of the world might draw the same conclusion, and did not press the point.

           But in what happened on the night of August 4, further north, there was no such distraction from the Administration’s storyline.  Of course, there was also no North Vietnamese attack, just Tonkin Spook, although, without doubt, the sailors involved certainly believed there had been.  You can read the meticulous dissection of the mass hallucination and the chaos it led to in Professor Edwin Moïse’s book Tonkin Gulf and the Escalation of the Vietnam War (Chapel Hill 1996).  This was truly fog of war at its worst.  The destroyers blasted away for hours at the nonexistent attackers, the sonarmen interpreted the destroyers’ own screw noise or “knuckle” as belonging to hostile PT boats, and panicked lookouts spotted nonexistent incoming torpedoes.  Sailors were shaken up, and one was injured, by the destroyers’ abrupt evasive movements.  At one point the Maddox’s 5-inch guns locked onto its own sister ship and were prepared to let fly when two alert fire control technicians refused to obey orders to fire.  A minute later, when upon request the Turner Joy flashed its truck lights, it became evident the technicians had been correct: one U.S. warship had come within a whisker of blasting another U.S. warship out of the water.  Meanwhile, fighter pilots streaked overhead and continued to report that they could see no hostile craft anywhere, but they were ignored.

           Given all this chaos, the initial reports filtering back naturally gave some credibility to the notion that the ships had been attacked, but LBJ knew better from the first.  He told Undersecretary of State George Ball: “Hell, those dumb, stupid sailors were just shooting at flying fish!”  This awareness changed nothing.  The word came down to write after-action reports supporting the notion that the North Vietnamese had tried to sink U.S. vessels, even after the captain of the Maddox urged that a “complete evaluation” be done before reaching that conclusion.  The required reports duly appeared, in the teeth of huge and well-justified misgivings by almost everyone involved in writing them.  This much everyone agreed on: there had been no visual sightings of the “bandits,” and everyone knew the North Vietnamese could never summon up a fleet of the size the radar showed had been attacking.  Most reasonable military men understood that the courses plotted for the “bandits” were impossible.  But the White House and the Navy “brass” in Honolulu wanted the reports written up a certain way, and they were. 

           Secretary of Defense Robert McNamara was in the meantime sent to brief the press and the Congress and told a series of “whoppers”: that the North Vietnamese had illuminated the destroyers with searchlights, that they had bombarded the destroyers with guns far larger than he had any reason to believe their navy possessed, that the destroyers were far from the coast when in fact they were close.  Johnson made a speech the next day in which he described the supposed attacks as “aggression, deliberate, willful and systematic.”  And he claimed “complete and incontrovertible evidence” that the attacks had occurred.  Press coverage followed the official line, even when errors and contradictions were apparent.

           Johnson was quite conscious that when Harry Truman had led the U.S. into the Korean War, he had done so without explicit Congressional authorization, which had proved a liability.  But Johnson thought of it as a political, not a Constitutional, liability.  He told McNamara: “By God, I’m going to be damned sure those guys are with me when we begin this thing, or they may try to desert me after I get in there.”  How Johnson pursued this goal has been well described in Eric Alterman’s 2004 book When Presidents Lie.  Johnson sent Congress what he called the Joint Resolution to Promote the Maintenance of International Peace and Security in Southeast Asia, which came to be known simply as the Gulf of Tonkin Resolution, H.R.J.RES., 88th Cong., 2d Sess., 78 Stat. 384 (1964).  Congress passed it three days after the supposed incident.  Section I of the Resolution, the “business end” of the enactment, commended to the President, “as Commander in Chief,” the authority “to take all necessary measures to repel any armed attack against the forces of the United States and to prevent any further aggression.”

           As quoted above, Johnson planned to use this broadly worded authorization as a “blank check” for any escalation of the conflict he might desire.  But he sold it to Congress as something less.  His floor manager was Senator William Fulbright, later a tenacious critic of the war, but at that point still a friend and confidant.  Johnson saw to it that Fulbright was walled off from knowledge of matters like the covert amphibious raids and the Maddox skipper’s call for a “complete evaluation,” and from all indications of Johnson’s true design.  Accordingly, when dissenters in the Senate warned that the broad language could be used to authorize a huge expansion of the War, Fulbright assured them that “Everyone [in the Administration] I have heard has said that the last thing we want to do is become involved in a land war in Asia.”  Senator Ernest Gruening of Alaska warned that what was really at stake was “a predated declaration of war,” and Senator Wayne Morse of Oregon observed that “history will record that we have made a great mistake in subverting and circumventing the Constitution of the United States” by giving the President “warmaking powers in the absence of a declaration of war.”  But out of all 535 members of Congress, they were in the end the only dissenters.

           And, of course, they were right.

            Gruening and Morse were right that the Resolution would be used as the equivalent of a declaration of war because, of course, the Viet Cong and North Vietnamese were winning, and they were not about to give up.  As they intensified their attacks on the South in late 1964, and as the South Vietnamese government went through two coups in early 1965 (there was another before the year was out), Johnson deemed it imperative to commit massive forces: first, continuous air attacks on the North known as Rolling Thunder (an operation which went on for three years), and, almost simultaneously, large-scale augmentation of the corps of U.S. military “advisors” on the ground.  The new troops were there to fight, not to advise.  There were 200,000 U.S. troops in Vietnam at the end of 1965, 400,000 at the end of 1966, and 500,000 at the end of 1967.  It was an American war.

           And not just a war, but a war of the worst kind, built on lies: lies about what we were fighting for, lies about the Tonkin incident, lies about the nature of the instrument that Congress executed when it approved the Tonkin Resolution.  Lies that would cost the U.S. 57,685 killed and about 153,303 wounded.  Lies whose unmasking, as happened repeatedly throughout the war but especially when the so-called Pentagon Papers were published, left the country hopelessly and tragically divided, torn by protests and riots and immolations and responsive police and militia brutality.  It was about as bad as a war not fought on U.S. soil could get.

           Gruening and Morse were also right about the end run around the Constitution.  Because even if, as discussed last time, Congress could constitutionally authorize an “imperfect war” with an imperfect declaration like the Tonkin Resolution, that second-best form of declaration at least needed to be understood as such to gain legitimacy.  Johnson’s men knowingly obfuscated to prevent that understanding.  And predictably, when objections were raised to the legitimacy of the war, both Johnson and his successor Richard Nixon invoked the Resolution as a defense.  It had been bait-and-switch after all.

           To be fair, Congress did more than just pass the Resolution.  It also authorized war-specific appropriations in 1965, continued to fund the military throughout, and passed extensions of the draft.  But it was the Tonkin Resolution, above all, that Johnson and later President Nixon pointed to as their authority.  And courts frequently found that the Resolution, with or without subsequent Congressional acts, was the Constitutional equivalent of a declaration of war.

           The Resolution itself was eventually repealed under Nixon, tacked quietly onto a trade bill in January 1971, just as the American forces were beginning to be withdrawn.  But the American deployment the Resolution had initiated would not fully end for another two years.

           One of the Navy pilots who had been streaking overhead the night of August 4, 1964, was Commander James Stockdale, who would later rise to the rank of Vice Admiral, and would still later become a Vice Presidential candidate along with Ross Perot.  In a memoir quoted in Eric Alterman’s book at page 237, Stockdale recalled being visited on the flight deck a few days after the “incident” by an assistant to McNamara.  The assistant told him: “We were sent out here just to find out one thing.  Were there any fuckin’ boats out there the other night or not?”  That question, Stockdale mused, “said it all.”  He could “stand right there in the cabin and write the script of what was to come: Washington’s second thoughts: the guilt, the remorse, the tentativeness, the changes of heart, the back-out.  And a generation of young Americans would get left holding the bag.”  Stockdale would come to know about holding the bag: the next year he would be shot down and would spend seven and a half years as a North Vietnamese prisoner of war, subjected to routine torture.  He would be kept in solitary confinement for four years.  He would be held in leg irons for two years.  He had to go through that and more because in the end McNamara’s men did not really care whether there had been any boats or not, and McNamara’s boss LBJ did not care about telling Congress what he was asking for.

           That lack of care had a long and distinguished pedigree, much of which Johnson and his men must have known.  Presidents have been lying about war, and not caring about it, throughout our history, and we have all been helping them out by lying to ourselves.  Next time we will review some of that story.

Copyright (c) Jack L. B. Gohn


War Powers, War Lies: Part 2: Imperfect War


War Powers, War Lies: A Series

 Part 2: Imperfect War  

 

Published in the Maryland Daily Record February 25, 2005

 

          Last time, I recounted how the Framers of our Constitution had truly believed that in Article I, Section 8, Clause 11 (which conferred upon Congress the power “to declare War”), they had committed to Congress the exclusive power to control the deployment of U.S. armed forces in armed combat.  Obviously, things have worked out very differently indeed; warmaking is now primarily an executive decision.  So what happened?  In part, it happened because the Framers had not explicitly provided for imperfect war. 

        

          History does chronicle, right at the beginning of the Republic, some early Presidential acceptance of the Congressional authority to initiate wars that the Framers contemplated.  President Jefferson, in his attempts to protect American shipping from the Tripolitan or Barbary pirates, did not go fully on the offensive until after seeking, and receiving, Congressional authorization.  The words of his annual message to Congress, in which he requested the authorization, are worth quoting at length: 

  Unauthorized by the Constitution, without the sanction of Congress, to go beyond the line of defense, [the captured Tripolitan cruiser], being disabled from committing further hostilities, was liberated with its crew.  The Legislature will doubtless consider whether, by authorizing measures of offense also, they will place our force on an equal footing with that of its adversaries.  I communicate all material information on this subject, that in the exercise of this important function confided by the Constitution to the Legislature exclusively, their judgment may form itself on a knowledge and consideration of every circumstance of weight.[1]

  There is a similar flavor to the words of President James Buchanan, in 1858, suggesting that Congress allow him to reopen a Nicaraguan transit route closed by political upheaval: 

  The executive government of this country in its intercourse with foreign nations is limited to the employment of diplomacy alone.  When this fails it can proceed no further.  It can not legitimately resort to force without the direct authorization of Congress, except in resisting and repelling hostile attacks.  It would have no authority to enter the territories of Nicaragua even to prevent the destruction of the transit and protect the lives and property of our own citizens on their passage.[2]

           But times have changed since then.  As of 1970 there had been as many as 125 instances of armed action initiated by American presidents without prior formal congressional declaration of war.[3]  This list included everything from landings of small parties of Marines in various banana republics to major troop movements.  Some of the most conspicuous were listed in a 1972 law review article by Eugene Rostow.[4]  They included Commodore Perry’s expedition to Japan and those which followed it; the array of 50,000 troops in Texas during 1865 and 1866 to support our diplomatic suggestion that France withdraw from Mexico; the participation of American forces in the hostilities following the Boxer Rebellion in China in 1900 and 1902; participation in hostilities with Mexico between 1914 and 1917; various deployments and uses of force by Woodrow Wilson and Franklin Roosevelt before both World Wars; and occupations of Haiti, the Dominican Republic and Nicaragua.  In all that time, by contrast, there had been only five Congressional declarations of war.[5] 

           If war was something only Congress could declare, and Congress did not declare these, then these uses of force, for all that they resembled war, were apparently not war. 

           What were they then?  Classical legal theory recognized certain deployments and uses of force with such strange names as reprisals and retorsions; in fact, the Constitution even conferred upon Congress authority regarding reprisals.  These were examples of what the legal theorists of the time called “imperfect war,” i.e. war carried on without a declaration, but which could nevertheless legitimately be pursued.  Had the Framers directly addressed the whole question of “imperfect war,” the little matter of who controlled it would have been far clearer. 

           The first important legal test in this area was Bas v. Tingy, 4 U.S. (4 Dallas) 37, 1 L.Ed. 731 (1800).  During President John Adams’ administration, the Nation found itself in an undeclared sea war with France, in the course of which one of our merchant ships had been captured by the French, and then retaken by an American “public armed ship.”  Under a 1799 law, passed in response to the French sea war, public armed ships retaking a merchant ship captured by an “enemy” were entitled to half the value of the merchant ship and the cargo.  The owner of the merchant ship in question here challenged the applicability of the 1799 law, because, he argued, there was no declaration of war between the United States and France, and thus France could not be an “enemy.”  The owner of the public armed ship of course took the opposite view.  And the Supreme Court agreed with him. 

           Justice Bushrod Washington, speaking for the Court, refused to treat with any seriousness the contention that America and France were not at war because of the absence of a declaration.  He conceded that they were not in a condition he called “perfect” or “solemn” war.  But, he added: 

  …hostilities may subsist between two nations, more confined in its nature and extent, being limited as to places, persons, and things; and this is more properly termed imperfect war; because not solemn, and because those authorized to commit hostilities, act under special authority, and can go no farther than to the extent of their commission.  Still, however, it is a public war, because it is an external contention by force between some of the members of the two nations, authorized by the legitimate powers. 

  The key words here are “authority” and “authorized,” which are used three times in this brief excerpt.  It was Justice Washington’s holding that the hostilities with France had been authorized, by Congress, through the very law under which the public armed ship was proceeding; that when Congress authorized certain ships to proceed against French shipping, it was authorizing a partial war. 

           Thus the Supreme Court’s solution to the problem here was to rely on Congressional authorization of war as a sufficient substitute for Congressional declaration of war.  The Court never directly addressed the ship-owner’s point that the Constitution speaks only of declarations of war, not of authorizations. 

            In Bas v. Tingy, the Supreme Court expanded the concept of constitutionally permissible war to include imperfect war.  In the next major test, known as The Prize Cases, 2 Black 635, 17 L.Ed. 459 (1863), the Court shrank the concept of war itself, thereby restricting the importance of Congress’ exclusive franchise to declare or (after Bas v. Tingy) authorize it.  With the outbreak of the Civil War, President Lincoln had proclaimed a blockade of the Confederate ports, and certain ships had been captured by the United States while trying to run the blockade.  The owners of the ships and their cargoes had sued for their return, on the basis that Lincoln’s blockade of the Southern ports was an act of war, and Congress had declared no war.  They were right, of course; blockades are acts of war, and Congress had not declared one.  But the Supreme Court was not about to deny Lincoln his right to conduct a blockade – or any other act of war. 

           In according Lincoln his right to fight the Civil War as he pleased, the Supreme Court, through Justice Grier, started by acknowledging the obvious: that the Civil War was in fact a war.  But, added the Court, civil wars are special, in that invariably one side claims to be a sovereign nation and the other contests that claim.  Since the declaration of war is a formality a sovereign nation pays only to another sovereign nation, it follows that “a civil war is never solemnly declared.” 

           The Court reinforced this analysis with two other points.  First, this was, in any case, a defensive war, since the South had commenced hostilities by firing on Fort Sumter, and “the President is not only authorized but bound to resist force, by force.  He does not initiate the war, but is bound to accept the challenge without waiting for any special legislative authority.”  Second, as in Bas v. Tingy, various acts of the Congress enabling the prosecution of the war amounted to a ratification which, though not in the form of a declaration of war, nonetheless satisfied any Constitutional deficiency. 

            This result was the only result the Court could have reached, because the Court was certainly right on the defensive-war point – a war against the United States was raging, and no declaration was necessary for a defensive war.  But the other two points were unnecessary to reach this result, and were pregnant either with possibility or with mischief, depending on your point of view.  The Court was ignoring the historical fact that Lincoln had deliberately kept the Congress from convening, had refrained from consulting with Congress until two months had elapsed after Fort Sumter, and could in fact have sought Congressional approval of the blockade.  The Court instead gratuitously established the notion that war against non-sovereign entities need not be declared, and it carried the rule of Bas v. Tingy one step further, holding that Congress could after the fact enact authorizing legislation which would provide the moral equivalent of a declaration of war and serve to ratify what the President had already done. 

            After this, many Congressionally-unauthorized military adventures occurred.  And Congress by and large did nothing.  The courts by and large did nothing.  The emerging consensus at that point was described by scholar Robert William Russell: 

  …there was one opinion that enjoyed wide acceptance: the President could constitutionally employ American military force outside the nation as long as he did not use it to commit “acts of war.”  While the term was never precisely defined, an “act of war” in this context usually meant the use of military force against a sovereign nation without that nation’s consent and without that nation’s having declared war upon or used force against the United States.[6] 

 There are two key elements in this analysis, both of which greatly limit the range of hostilities constituting acts of war that require a declaration before they may be pursued.  The first is that war can only exist between sovereign nations: thus the Civil War, against insurgents, was not covered, and by extension military action against any group not already recognized as a sovereign government was not covered.  The second is that, to amount to war, the use of force must be unprovoked by any use of force against the United States, a position which sounds innocuous until it is coupled with various assertions of American interests around the world, attacks against which are deemed attacks against the United States, justifying a Congressionally-unauthorized armed response. 

           An early instance of the latter exception – and one of the few times that Congress tried to protect its turf – was the prologue to the Mexican American War.  When the United States annexed Texas in 1845, it thereby claimed all territory as far south as the Rio Grande.  President Polk directed General Zachary Taylor to occupy Texas and treat any Mexican incursion north of the Rio Grande as an invasion, upon which he could enter Mexican territory in pursuit of invaders.  Taylor did as he was told, and there were two skirmishes before full-scale hostilities erupted, and war was declared.  Later on, in passing a resolution thanking Taylor, Congress nonetheless stated that “the war was unnecessarily and unconstitutionally begun by the President of the United States.”[7] 

           This was the exception rather than the rule, and essentially Congress went along with each Presidential deployment of military force for generations, as each came in some fashion within one of the exceptions summarized by Russell: either the enemy was not a sovereign, or the action could in some way be justified as a response to an attack on United States interests. 

           The Framers would already have been spinning in their graves.  But worse was to come. 


[1]              First Annual Message to Congress, December 8, 1801, 1 A COMPILATION OF THE MESSAGES AND PAPERS OF THE PRESIDENTS, 1789-1897, 326-27 (ed. J. Richardson 1898). 

[2]              Second Annual Message to Congress, December 6, 1858, 5 A COMPILATION OF THE MESSAGES AND PAPERS OF THE PRESIDENTS, 1789-1897, 516 (ed. J. Richardson 1898). 

[3]          Davi v. Laird, 318 F.Supp. 478, 480-81 (W.D. Va. 1970). 

[4]             E. Rostow, Great Cases Make Bad Law: The War Powers Act, 50 TEX.L.REV. 833 (1972). 

[5]              See, Note: Congress, the President, and the Power to Commit Forces to Combat, 81 HARV.L.REV. 1771, 1786 n. 81 (1968).  These wars were the War of 1812, the Mexican-American War, the Spanish-American War, and World Wars One and Two. 

[6]            R. Russell, The United States Congress and the Power to Use Military Force Abroad (Ph.D. Thesis, Fletcher School of Law and Diplomacy, 1967), quoted in, G. Gunther, CASES AND MATERIALS ON CONSTITUTIONAL LAW 437 (9th ed. 1975). 

[7]           CONG. GLOBE, 30th Cong., 1st Sess. 95, 343-44 (1848), cited in, Note, 81 HARV.L.REV. at 1780, which is the source for this part of the discussion. 

Copyright (c) Jack L. B. Gohn


War Powers, War Lies: Part 1: Original Intent


War Powers, War Lies: A Series

Part 1: Original Intent

Published in the Maryland Daily Record February 4, 2005  

          The U.S. is at war.  Our soldiers die daily, our treasure is poured out, and our international prestige hemorrhages.  No one has asked us citizens if we desire it.  No one has asked Congress, or at least not properly.  No one has leveled with us, and especially no one leveled with us when it could have mattered.  And daily we are fed a diet of lies.  Welcome to war, American style, as it has come to be.  In the next few months, I want to talk about the convergence of two of the very worst trends in our Constitutional culture, Presidential usurpation of war powers, and Presidential lies, and how they have jointly brought about the kind of wars we keep having.

            Conservatives since World War II have reliably been among the staunchest supporters of the Presidents who have led us into war.  Conservatives also typically affect to favor strict construction of our Constitution and/or “original intent,” i.e. what the Framers meant, rather than what so-called activist judges wish it said.  But in practice conservatives acquiesce in the total perversion of one provision, Article I, Section 8, Clause 11, which reserves to Congress, and to Congress alone, the power “to declare War.”  We know that both the literal meaning and the original intent here were crystal clear.  Conservatives forget.  Let me retell the story.

            In the long, hot summer of 1787, when the delegates to the Philadelphia Constitutional Convention sweltered over the task of forging a nation, they disputed vehemently over almost every detail, including many having to do with the Nation’s armed forces: for instance, the degree to which the federal government could control the state militias (they compromised), whether there should be standing armies (they compromised), and which branch of the Government should be entrusted with commissioning regular army officers (Congress won).  But there was one military issue on which they were nearly unanimous.

            On August 17, 1787, the Convention received a recommendation from its Committee of Detail which suggested that Congress should be vested with the power “To make war.”[1]  The recommendation gave rise to a quick, friendly discussion.  Charles Pinkney of South Carolina thought that the language was too broad, since Congress would meet but once a year and could not wield such wide power – could not micromanage, in today’s terminology.[2]  He was seconded by Pierce Butler, also of South Carolina, who suggested vesting the power in the President, supporting his suggestion with an argument highly ironic in hindsight: that the President “will not make war but when the Nation will support it.”

            At this James Madison of Virginia and Elbridge Gerry of Massachusetts moved to substitute the word “declare” for “make.”  Roger Sherman of Connecticut disagreed, on the basis that the Executive should be limited to “the power to repel sudden attacks.”  Gerry responded sharply that he “never expected to hear in a republic a motion to empower the Executive alone to declare war.”  After further discussion friendly to the amendment, in which it became clear that the consensus of the Convention was that making peace came more within the scope of the Executive than of the Legislative branch, the amendment was put to a vote.[3]  It carried, by a margin of two States.  Two recent commentators, Christopher and James Lincoln Collier, have remarked that: “[t]here was virtually no other important question on which the Convention was so solidly in agreement as that the power to declare war be exercised by the Congress, and not the president.”[4]

            This is not to say that the Convention wanted to render the President powerless in matters military.  It had before it various proposals rendering the President “Commander In Chief of the Armed Forces,” and endorsed them without discussion, on August 27.[5]  But in committing the power to declare war to the Legislative branch of the Government, the Convention had clearly intended to accord the Legislative branch supremacy in the process of deciding how and when to use the Nation’s military powers offensively.

            We know this because the international law of the time – and indeed Western international law of every time since at least the Roman Empire – had condemned as illegitimate all hostilities between sovereign nations not preceded by formal declarations of war.  There was simply no dissenting voice in the legal authorities of the time.  Thus Pierino Belli, a Renaissance expert on military matters, simply took the preliminary of a declaration of war as a given, and went on to comment that:

… simple common sense declares that it is right that some lapse of time intervene, in which a person may prepare himself and get ready for defense.  For scarcely a man would be excused from the charge of deceit and treachery who declared war and almost simultaneously made an attack.[6]

Alberico Gentile, a scholar writing half a century later, summarized the vast authority from the ancients up to his own time behind this simple proposition.  He wrote that:

….Wars must be waged with no less justice than bravery [said the Roman general Camillus, as quoted by Livy].  And God thus ordained in his law.  And this justice of which we speak seems in the first place to consist in this: that we should inform of our deliberations the one against whom we have decided to make war.  That is indeed commanded in the divine law, a law which related to all men and not to the Jews alone: for it is a law not confined to their commonwealth, but extending beyond it.

            Others too have come to the same conclusion.  Greeks, Barbarians, and especially the Romans.  It seems that no war can be regarded as just unless it has been announced and declared, and unless satisfaction has been demanded, as Cicero writes.[7]

Writing yet another half century later, Samuel Pufendorf, a specialist in natural law whose writings were part of any well-equipped law library, never addressed the issue because, as is clear from the context, it did not occur to him that anyone would raise the question: he proceeded directly to a detailed discussion of the requisites of a declaration of war.[8] Vattell, a highly influential Swiss jurist of the Enlightenment who wrote in French, likewise flatly asserted that hostilities must be preceded by a declaration of war.[9]

            By way of postscript, it can be added that this notion was no mere quaint medievalism.  President Taft, not known as a starry-eyed dreamer living in the past, signed into law in 1909 this Nation’s ratification of Hague Convention III of 1907 (Relative to the Opening of Hostilities), Article 1 of which reads:

The Contracting Powers recognize that hostilities between themselves must not commence without previous and explicit warning, in the form either of a reasoned declaration of war or of an ultimatum with conditional declaration of war.

            In short, the power to give or withhold a declaration of war was generally viewed by the Founders as tantamount to the power to decide whether hostilities would take place, with the well-recognized exception of defense against direct attack.  This placed the real war-making power in the hands of the Congress, not the President.  Alexander Hamilton discussed this in The Federalist, comparing the powers of the President with those of the King of England:

The most material points of difference are these– … the President is to be Commander in Chief of the army and navy of the United States.  In this respect his authority would be nominally the same with that of the King of Great-Britain, but in substance much inferior to it.  It would amount to nothing more than the supreme command and direction of the military and naval forces, as first General and Admiral of the confederacy: while that of the British King extends to the declaring of war and to the raising and regulating of fleets and armies; all of which, by the Constitution under consideration, would appertain to the Legislature.

            A caveat: the Founders were aware that wars sometimes started without formal declarations.  Hamilton himself wrote in The Federalist that “[T]he ceremony of a formal denunciation of war has of late fallen into disuse ….”  W. Taylor Reveley, Dean of the William and Mary Law School, has written that “undeclared war was the norm in eighteenth-century European practice.”  This might well suggest that the Founders expected many wars to be undeclared, as one school of scholars suggests.  But virtually no authority contends that the Founders were thereby authorizing the Executive to start wars; whatever the vehicle involved, war commencement was intended to be a Congressional preserve.  And however accomplished, the initiation of war by Congress would be explicit.

            It is hard, in view of our Nation’s experience since Constitutional times, to believe that this neat division of powers could really have been intended, because on its face it seemed to make a declaration of war a slow and doubtful – maybe even an impossible – thing to achieve.  But this truly was the intent of the Framers.  James Wilson, one of the five members of the Committee of Detail and the man who actually wrote the text of the Constitution, assured his fellow Pennsylvanians, when it came time for him to persuade Pennsylvania’s ratifying convention to adopt the Constitution, that:

This system will not hurry us into war; it is calculated to guard against it.  It will not be in the power of a single man, or a single body of men, to involve us in such distress; for the important power of declaring war is vested in the legislature at large: this declaration must be made with the concurrence of the House of Representatives: from this circumstance we may draw a certain conclusion that nothing but our national interest can draw us into a war.[10]

 To like effect, decades later, Joseph Story, the undisputed preeminent legal scholar of his day, Justice of the Supreme Court and professor at the Harvard Law School, explaining why the Constitution made the declaration of war so difficult, commented that “…war is in its own nature and effects so critical and calamitous, that it requires the utmost deliberation, and the successive review of all the councils of the nation…. It should therefore be difficult in a republic to declare war….”[11]

            In short, a broad Congressional right to prior approval of foreign employment of military force was the original intent of the Founding Fathers.  There is simply no reasonable dispute.  But as we shall see next time and beyond, original intent was no match for the forces of history, not the least of which was Presidential usurpation.


[1].         James Madison’s Convention Journal, reproduced in 2 M. Farrand, ed., THE RECORDS OF THE FEDERAL CONVENTION 318 (rev. ed. 1966).

[2]           Pinkney added arguments based on the balance of powers between the states under bicameralism.

The Hs. of Reps. would be too numerous for such deliberations.  The Senate would be the best depositary, being more acquainted with foreign affairs, and most capable of proper resolutions.  If the States are equally represented in Senate, so as to give no advantage to large States, the power will notwithstanding be safe, as the small have all at stake in such cases as well as the large States.  It would be singular for one authority to make war, and another peace.

2 Farrand, supra, at 318.

[3]           Id. at 319.

[4]           C. Collier & J. L. Collier, DECISION IN PHILADELPHIA: THE CONSTITUTIONAL CONVENTION OF 1787, 330-31 (1986).

[5]           The text conferring Commander-in-Chief powers upon the president was placed before the Convention on August 6, 1787, id. at 184, and the August 27 vote is recorded id. at 427.  It appears in the Constitution at Article II, Section 2, Clause 1.

[6]           P. Belli, A TREATISE ON MILITARY MATTERS AND WARFARE IN ELEVEN PARTS 79 (tr. H. Nutting) (repr. 1964).  (Original Italian edition was 1563).

[7]           A. Gentile, DE JURE BELLI LIBRI TRES 131 (tr. J. Rolfe) (1964), citing Cicero’s On Duties, I [xi:36].  (This book was originally published 1612.)

[8]           2:ii S. Pufendorf, DE JURE NATURALE ET GENTIUM LIBRI OCTO 1387 (tr. C. & W. Oldfather) (1964).   (This book was originally published in 1688.)

[9]           M.D. Vattell, THE LAW OF NATIONS 383-84 (1805).

[10]         Comments in debate, November 21, 1787, 2 J. Elliott, ed., THE DEBATE IN THE SEVERAL STATE CONVENTIONS ON THE ADOPTION OF THE FEDERAL CONSTITUTION, AS RECOMMENDED BY THE GENERAL CONVENTION AT PHILADELPHIA IN 1787, 528 (2d ed. 1836, repr. 1937).

[11]         2 J. Story, COMMENTARIES ON THE CONSTITUTION, sec. 1171 at 97 (3d ed. 1858).

Copyright (c) Jack L. B. Gohn


The Disappeared Trial


Bad Judg(e)ment: A Three-Part Series

 

Part Three: The “Disappeared” Trial

 

Published in the Maryland Daily Record September 24, 2004

 

          The last couple of times, I have been writing about individual traits in judges that can make them hard for us lawyers and our clients to live and work with.  Now I want to shift focus to an institutional trend in our courts that makes for bad judging, almost irrespective of anyone’s personality, although judges collectively have historically been the primary special interest agitating for it.  This is the trend to eliminate trials, to “disappear” them (if I may employ a piquant change from intransitive to transitive verb brought to us courtesy of various dirty wars in Latin America).

 

          Trials have been going away.  This is not just my perception.  The alarm on this has been sounded by the American Bar Association, which initiated a project on the Vanishing Trial, including a public forum in December 2003, and continuing research directed by Marc Galanter, a professor at the University of Wisconsin and the London School of Economics.  Professor Galanter’s paper on the Vanishing Trial, jammed with interesting facts and figures, is set to appear in the Journal of Empirical Legal Studies in November.[1] What Galanter’s statistics tell us is that total trials are dropping, even as filings increase.

 

          The causes seem to be many and varied, including such things as the impact of sentencing guidelines that penalize criminal defendants for insisting on trying cases, the role of class actions (almost always headed for non-trial disposition), and the rise of mediation and arbitration.  The history of how this disappearance came about at the federal level, largely because of the organized judiciary’s overall agitation for case management and limitation on its own size and jurisdiction, is chronicled in a paper by Yale Law Professor Judith Resnik in the February 2000 issue of Harvard Law Review, and in another paper by Resnik also coming up in the November Journal of Empirical Legal Studies.

 

            On the civil front, the trend has given rise to an interesting pair of developments, one characteristic of state courts, one of federal courts, which have facilitated much of the disappearing.  And bad judging fits right in.  Let me describe them and then illustrate with a couple of war stories.

 

          In state courts, where elected judges tend to predominate, the popular and populist course is never to permit defendants to use summary judgment to escape trial, even in frivolous cases.  Viewed with a strong enough determination (a determination I am convinced is aroused in many a judicial mind by the desire to pander to an electorate), any set of facts will present a jury question.  The unspoken rationale, I strongly suspect, is that corporate defendants (backed by substantial treasuries and by insurers with even more substantial treasuries) will go to great lengths, even settling frivolous cases, in order to avoid the uncertainty of facing juries.   On the other hand, a plaintiff paid judicially-enabled extortion may well be a grateful voter.  And that plaintiff’s lawyer will probably be a contributor at campaign time.

 

          Standing by itself, the growing unavailability of summary judgment might tend to increase, not decrease, the number of trials, but it is coupled with another development that leads the other way, what I call “mediation hell.”  I recently experienced a fine example of this in an out-of-state courthouse that shall remain nameless.  This court has a voluntary mediation program in addition to a formally-required “settlement court.”  My defendant client and I agreed that: a) the case was frivolous, or close to it; b) we could nevertheless offer nuisance value to make the claim go away; and c) voluntary mediation was the place to do this.  Arriving at the courthouse, we found that the volunteer mediator had read neither side’s mandated paper submissions, that he had no intention of doing so now, that he wasn’t particularly interested in my client’s legal defenses, and would only place a suggested settlement value on the case based on a sense of the equities uninformed by any view of the defenses.  He placed a settlement range on the case of three to five times what we were offering.  Needless to say, there was no settlement that day.  Thereafter, our well-founded summary judgment motion was denied without a word of explanation.

 

          Later it came time for the more official “settlement court.”  The settlement judge told us firmly that “settlements happen here,” that she was not inclined to second-guess the value placed on the case by the volunteer mediator who had not read the papers, and that like that mediator, she had no interest in – literally told us she did not even want to hear about – our legal defenses.  When we did not settle after an hour of her browbeating, we were told to come back in a week, because she was not finished with us.  When we came back, she asked for the first time about our insurance, and when we told her that we had coverage, albeit coverage that would barely and only arguably kick in even if everything went dreadfully awry, she announced we had to come back with an adjustor the same day we were picking a jury, so she could “read him his rights.”  We dragged an attorney for the insurer in with us – from a third city – on the appointed day.  And the settlement judge who had been so insistent we bring him now told us that she was too busy to talk to him, and that we should just go and pick the jury in front of another judge.  Having called her bluff and having gone to the expense and inconvenience of turning up with an insurance company representative, we were finally allowed to go to trial without further hindrance.  I am convinced she had never intended to have any real dealings with the insurance lawyer; she just wanted to make avoidance of settlement that much more onerous for us: a final shakedown.

 

          The story had a happy ending, if you can call anything involving trying a frivolous case a happy ending.  Unlike the settlement judge, the trial judge actually cared about the law and our defenses, and gave the jury instructions which apportioned appropriate weight to those defenses.  The jury was out less than half an hour before returning a verdict on all counts for my client.  But my client had had to undergo the cost of trial, depositions in three states, and considerable business disruption to get there.  And I’m sure the settlement judge felt we had not played our proper role by insisting on going to trial; in her mind, it was our responsibility to keep turning up for sessions of mediation hell until we caved.

 

          Indeed, that is all too frequently the precise judicial mindset.  In the Harvard Law Review piece I mentioned, Professor Resnik writes of a federal judge telling a group of lawyers in Los Angeles that every trial was a failure by the lawyers involved.  Every trial?  What planet did this judge inhabit?  Such a mindset is an absolute license for bad judging.  Mediation, which started out as an aid to settlement and thus for the benefit of all parties, has become a club to prevent the party with the stronger case from obtaining any resolution other than settlement on unreasonable terms.  The party is denied summary judgment and, to the extent possible, trial.  The party is clapped into conference rooms with judges and surrogates whose priority is settlement, not justice, and who have the power to apply severe pressure.  Things like what happened to me and my client in that nameless courthouse are commonplace.

 

          The federal courts, by contrast, are armed with the Celotex trilogy, three 1986 cases in which the Supreme Court assured trial judges that summary judgment was truly okay.  The federal bench, filled with lifetime-appointed judges who, unlike their state brethren and sisters, have no incentive to please the masses, took to Celotex with alacrity as a fine way of clearing their decks.  Here the trick has been not to acknowledge the existence of disputes of fact even when they are clear cut. 

 

          I had a typical case a few years ago with a federal judge who shall also be nameless.  I was representing a government employee trying to overturn a demotion to which my client had “voluntarily” agreed.  According to my client’s complaint and affidavit, this “voluntary” agreement came after his boss (don’t ask how this came about) had credibly threatened to put down a dog close to the employee’s heart unless the client signed a consent to be demoted.  Our theory was that sign-or-I-shoot-the-dog was duress invalidating the demotion.  The government pointed out in its summary judgment motion that the document agreeing to the demotion recited on its face that the demotion was voluntary.  The court issued a ruling on the earliest possible date, specifically finding that (no matter what the actual duress involved in procuring the signature), my client’s signing off on a document which recited that there was no duress involved in procuring the signature precluded him now from contending that there had been duress involved in procuring the signature.  To me it was obvious that an issue waiver procured by duress is of no greater effect than a demotion procured by duress, and that an affidavit declaring there was duress should have raised a triable issue of fact as to whether the duress occurred regarding the waiver.  Right?  Well, not before this judge, anyway.  Celotex had reinforced his bad judicial instincts. And it happens a lot.  Ask any plaintiffs’ employment lawyer in this state how hard it is to create a triable issue of fact in a federal court; he or she will tell you. 

 

          The surprising thing, then, about these apparently contrary courses of action, wilful denial of merited summary judgment on the one hand and trigger-happy unmerited granting of summary judgment on the other, is that they both end up clearing the dockets without trial, one as a prelude to the extortion of settlement, the other by nonsuit.  Either way, the benefits of trial are slipping away.

 

          The benefits of trials are many, including the creation of a public record of the facts and an authoritative determination of what occurred and its legal significance.  Moreover, the very ritual of trial, worked out over hundreds of years, is commonly recognized as more authoritative for making that determination than arbitrations or administrative hearings, mediations or settlements.  Novelist and Fordham Law Professor Thane Rosenbaum (The Myth of Moral Justice, 2004) maintains that the function of trial is also humane: that it enables participants to tell their stories and be heard, which is both therapeutic and cathartic.  While I part company with Rosenbaum on his apparent objective of making each trial a mini-truth and reconciliation commission (a la South Africa), there is no denying that trials can sometimes be therapeutic and cathartic for all participants, irrespective of the outcome.

 

          Whether or not one agrees on this or that benefit of trial, trial remains, for many good reasons, the default method for the resolution of legal disputes in our country.  It is a right to which litigants are constitutionally entitled if there are no grounds for dismissal or summary judgment.  Yes, trial is expensive, though less so if the parties can dispense with unnecessary preliminaries like the infliction of unwanted mediation.  Yes, the system cannot possibly tolerate trial in the majority of cases.  But trials are disappearing, not swamping us.  Trying to put trials further out of reach is solving a nonexistent problem.  And rather than trying to prevent trials through devices like mediation hell and trigger-happy summary judgment, our judges should be trying to make trials happen quickly and smoothly.  If litigation were a pinball game, judges using devices like these would light up the “TILT” sign.

 

          Judges should not be about making trials not happen.  Maybe the one who made us drag in the insurance adjustor for a fourth round of mediation and the one who granted summary judgment because (if we were right on our facts) duress had extorted a confirmation of the falsehood that there was no duress are worse than most.  But the entire judicial branch has its fingerprints on the effort to kill trials, an effort of whose success these two instances were tiny examples, and this effort is wrong.

          Let me end with a couple of positive notes. 

 

          I said in the first article of this series that, so far as I could see, the only potential effective brake on bad judicial quality not rising to the level of senility or outrageous misbehavior was the press – though presently the press has failed to shoulder this burden.  One reader called to my attention the reported practice of a Circuit Court judge in Baltimore County who periodically sends out questionnaires to parties who appear before him, in order to get their evaluations.  This is a truly great idea – although one can surmise that a judge who does such a thing of his own initiative is probably less in need of constructive criticism than most.  But what if we were to standardize the process and mandate it for all judges, and post the results on the Internet, say on the official Maryland Judiciary website?  (While of course giving the respondents the same kind of defamation immunity accorded statements made in court.)  This would be a low-cost way of pinpointing problems, a way that disciplinary authorities and the press could take and run with.  Not only that, but it would be a marvelous way to establish who the good ones are. 

 

          And yes, there certainly are lots and lots of good ones.  In the last couple of months, I had the great pleasure of trying two week-long cases before outstanding trial judges (one of whom presided over the very case in which the settlement judge had first made us bring in the insurance lawyer).  Each trial judge applied the law, each was courteous to all parties, each kept control of the proceedings in the courtroom, each was open-minded until he made up his mind, and then each was decisive.  It was strange to be in the midst of writing essays critical of bad judges at the very moment I was observing the judicial craft so well practiced.  Strange, but useful in forcing me to maintain perspective.

 

          Using that perspective, let me conclude in this fashion: We have a problem with bad judging not because judges are on average so bad.  On the other hand, it is not simply a matter of rare bad apples.  It is a frequent problem, and one exacerbated by the absence of controls and the presence of trends like the disappearing of trials.

 

          Failing any miraculous fixes, we in the legal profession would do well to be as honest as we can with ourselves and each other.  As I have said before, we know who the bad ones are.  How we handle them is of course conditioned by our sense of self-preservation and our instinctual courtesy, both of which may prevent us at key moments from treating things as they are.  But it would be best if we did not buy tickets to the bad ones’ fundraisers, if we did not cravenly shake their hands approvingly at professional functions, and if, at their investitures as they move up the judicial pecking order or retire, we did not laud them publicly for virtues we know perfectly well they do not possess.  We must accord them respect in the courtroom, but we do not need to treat them elsewhere as if we didn’t observe how they behave towards us there.  They thrive on our profession’s cowardice and silence, which, when it comes to our frequent rituals of public respect, lumps the truly good judges in with the bad.  If we want to make good judging something all judges aspire to, the good ones should receive loftier recognition than the bad.  That will help keep the good ones good and help give the bad ones some aspirations.

 

          No matter what we do, of course, we know it won’t solve everything.  Nothing ever solves everything.  We haven’t come very far since Luke 18:2, and probably won’t get much further in a hurry.  I quote again:  “There was a judge in a certain town who neither feared God nor respected any human being. . .”

 

[1] This article was published, but is now secured behind a pay wall.  Click on the link if you wish to buy it.

 

Copyright (c) Jack L. B. Gohn


Peccant Judges


 

Bad Judg(e)ment: A Three-Part Series

Part Two: Peccant Judges

Published in the Maryland Daily Record August 27, 2004

 

          As I wrote last time, most judicial vices are well within the bounds of what our system permits.  There is no serious enforcement mechanism to prevent vices like sloth, arrogance or rudeness on the bench.  But sometimes a judge manages to stick a toe over the line.  He (I won’t add my customary qualifier “or she” because the ones I’m thinking of always seem to be men) gets involved in vices that actually break the law.  He sins.  He gets involved in the same illegal things as do the civilians over whom he sits in judgement.  I’m not talking about major transgressions like murder or rape, but about misdemeanors like consorting with prostitutes in chambers or other petty crime.

 

          A poster boy for this kind of low-level peccant (from the Latin peccare, to sin) behavior was the Hon. Thomas S. Gilbert, the Traverse City, Michigan judge who attended an October 2002 Rolling Stones concert in Detroit, had someone hand him a joint, took a puff, passed it along – and found himself the butt of Jay Leno jokes and subject of judicial discipline. 

 

          From talking his case over with colleagues, I have gathered that how you react to his tale partly depends on your thinking about the marijuana laws.  So I guess I’d better get that issue out of the way first. 

 

          To me, these laws seem somewhat absurd.  Here is a substance that, in the quantities typically consumed, has far less provably injurious or addictive effect than either alcohol or tobacco, and also sports some tolerably well-established medicinal purposes.  It seems to have been made and kept illegal not because of its own inherent dangerousness but because there are statistical associations between marijuana use and the later use of other, more worrisome substances — and also, I strongly suspect, because of social prejudices against the kinds of people who use marijuana.  It is a substance that most people – and I suspect most judges – have tried somewhere along the line. 

 

          And as to association with later drug use, some of the other things associated with later hard drug use include such legal substances as alcohol and tobacco, and such experiences (legal to undergo if not necessarily to inflict) as parental conflicts or separation, childhood sexual abuse, conduct disorder, major depression and social anxiety.  In other words, smoking pot seems to be part of the common suite of adolescent vulnerability and risky behavior, every part of which is associated with every other part.  But we’re singling out pot alone and making it illegal.  So my outlook, in considering Judge Gilbert, is: I won’t smoke the stuff, but this is not a law I can ever respect.

 

          Back to Judge Gilbert, then.  He admitted that he sometimes used pot (and one has to suspect he may have toked up more often than he confessed to), and also admitted that he had passed sentence on potheads who had come before him.  He drew a 90-day suspension.  Reflecting on the judge’s plight, I realized my sympathies were all over the map.  His situation was like a perfect moot court problem, one in which there were a hundred right answers, and none.

 

          The judge used an illegal intoxicant.  It may not be malum in se but it certainly is malum prohibitum.  As a lawyer, I can appreciate the importance, as an abstract matter, of usually following rules I don’t agree with.  Still, there are limits, as I said in an earlier column, and the marijuana laws may be just such a limit, where an individual may morally regard himself as entitled to flout the laws because the laws lack legitimacy.  They are exercises in the majority telling an unconsenting minority how the minority should live its life.  (The majority foregoes nothing because it does not want to use marijuana; it only wants to prevent the minority from doing so, even when the bad effects upon the majority if the minority does not comply are largely remote and theoretical.)  Despite the technical sufficiency of the legislative process attending their passage, marijuana laws are not completely legitimate from a moral standpoint.  (“Debatable Laws,” March 26, 2004.)

 

          Well, but isn’t a judge a special case?  He or she is supposed to be a moral exemplar, because a non-exemplary judge brings the law into disrespect.  So shouldn’t all judges comply with all laws, no matter how questionable their legitimacy?

 

          Of course, how many times have we heard – or for that matter said to ourselves, through gritted teeth – that one respects the office, not the individual?  So if the individual falls short of perfect observance of the law, should that make us less respectful of the office?  Or of the laws he or she enforces?

 

          But then again, in the case of laws that really deserve no respect, that make things mala prohibita for no good reason, wouldn’t it arguably be the case that we would have more respect, not less, for the judge whose behavior flouted such laws?  In other words, might we not respect the office more if the officeholder concurred with our lack of respect for those laws (perhaps including the prohibition of marijuana) that serve no good purpose?  Laws whose real effect is to fill our prisons with people who have really done nothing wrong, and impose career-destroying stigma on many more?

 

          Well, but take Judge Gilbert, the non-hypothetical judge who in personal life flouted bad and mean-spirited laws, but on the bench enforced them, fining or sending others to prison for behavior morally indistinguishable from his own.  Was he a hypocrite?  Or just a jurist who realized that it was his job to enforce the laws in the cases that came before him, regardless of how he personally felt?  Could it not be said that he was just showing respect for his office and for the laws?

 

          In the alternative, suppose the folks who passed the law were right?  Suppose that, for reasons not now readily apparent, marijuana were truly the scourge and the pestilence the law treats it as being?  Suppose that Judge Gilbert, by taking a single drag of a doobie, were doing something incalculably terrible?  And then suppose that, again, he went on the bench and enforced laws in the cases before him against the same kind of behavior of which he himself were guilty?  Would his not then be, in a very real sense, admirable behavior, akin to the behavior of the Whiskey Priest in Graham Greene’s The Power and the Glory?  (The Priest is a drunk, a lecher, not even a very faithful man, and yet he becomes a martyr.)  Knowing himself to be a flawed vessel, such a judge would nonetheless be dispensing justice as best he could, and protecting society, maybe not in the purest way, but nonetheless protecting society.

 

          Certainly the matter would take on a very different complexion, though, depending on the laws in question.  We would probably want even a sexually abusive judge to uphold the sexual harassment laws, no matter how narrow-minded he or she believed them to be.  (Interestingly, while the Gilbert case was pending, one of his Upper Peninsula brethren had to resign after being exposed as a groper.  This tells us that the disciplinary authorities recognize – and I’d concur on this – that groping is more serious than pot.)  But probably we would want a judge in the pre-Civil Rights South to subvert Jim Crow, however staunch a segregationist he or she might be in private life.

 

          The instinctive response to all this confusion is to demand consistency of our judges: to insist that they live up to all the laws all the time, and believe in all the laws all the time.  And in some alternate universe, perhaps that would even be possible.  In this universe, however, we only have flawed human beings like ourselves to staff the judiciary.  And in this universe, no thinking person can admire all the laws.  And we do want our judges to think.

 

          In this universe, let it be noted, Judge Gilbert did not depict himself as a martyr for marijuana dissent.  Far from maintaining that there was nothing wrong with his close encounter with a joint, he repudiated it and blamed it largely on alcohol abuse problems (for which, like so many modern sinners, he then sought treatment).  Like Galileo, he recanted what he must have believed to be the truth – in this instance about the inanity of one of the laws he enforced.  In other words, his personal deviation from the norms he enforced was proclaimed a matter of weakness, not of principle.  His apology went to the upholders of the law, not the druggies.  And he was almost certainly not displaying much candor thereby.  But a judge who lies to save his hide may survive to follow his mission to administer justice another day, while a martyr may be removed from the bench, and serve no one.

 

          Not Judge Gilbert, though.  He soon came to see that he had lost what the Chinese call “the mandate of heaven.”  Bowing to the action of his local Bar Association in ejecting him, and of the electorate in being ready to vote for some very credible challengers for his seat, he declined to run again.  That very public puff had finished his judicial career.  Interestingly, no commentator I read (and I looked at the local Grand Rapids papers as well as the national press) indicated whether Judge Gilbert was in other respects a good judge or an awful one.  The marijuana use – quite tellingly – tells us absolutely nothing about what kind of judge he was, and only the sketchiest amount about the kind of human being he was.  It is possible that Traverse City lost a really good judge in this imbroglio.

 

          In retrospect, was he tragic or contemptible or just comical?  Not easy to say.  Peccant judges have that effect on us; unless we demand inhuman judicial perfection on the one hand or endorse total judicial anarchy on the other, they force us to think in uncomfortable shades of gray.  They pose big headachey problems.  Which I guess is why in general we want our judges to be squeaky clean.  Not because this is an assurance of great judging, but because peccant judges raise such unsettling issues, and we have enough on our plate.

 

          I certainly do, which is why I would not have voted to reelect him.  I might have had some misgivings, but in the end I would have wanted him to leave.  But in voting against him, I would have felt a nagging sense that this was all wrong.  As I say, peccant judges have that effect.

 

Copyright (c) Jack L. B. Gohn


Belling the Cat


 

Bad Judg(e)ment: A Three-Part Series

 

Part One: Belling the Cat

 

          “There was a judge in a certain town who neither feared God nor respected any human being.”  Luke 18:2 stands as evidence that unprincipled and disrespectful judges are not a novelty in human experience.  They were a problem in Biblical times and, human nature not having changed much in the last two millennia, they are a problem now, a problem we lawyers have to deal with all the time. 

 

          We who stand between the bar and the bench all know who they are: the abusive ones, the indecisive ones, the ones who come to the bench without having read the briefs, the ones who cut the day’s work short in honor of the cocktail hour or tee time, the ones who are so eager to be liked they waste everyone’s time with war stories in chambers, the ones who grow frightened or indignant when properly asked to make new law, the sexist dinosaurs, the inconsistent and mercurial ones, the moody ones, the ones who long ago gave up caring about justice and only take pride now in clearing their dockets, the ones who endlessly delay writing important opinions, the ones who hand so much of their jobs to their clerks there seems to be nothing left over.  Most of these shortcomings — and a myriad like them — are not subject to any effective check under our current disciplinary system.

 

          Locally, that discipline is put principally in the hands of the Judicial Disabilities Commission, an agency whose very name suggests its principal focus.  The organic statute of the Judicial Disabilities Commission tells us – and the record confirms – that the Commission exists to look primarily at one category of the hundred and one things that a judge can do to render a courtroom dysfunctional.  The Commission’s task is to address “disability which is or is likely to become permanent and which seriously interferes with the performance of the judge’s duties.”  True, the Commission may also address “misconduct while in office, or of persistent failure to perform the duties of the office, or of conduct prejudicial to the proper administration of justice.”  But clearly this refers to the most serious kinds of malfeasance.  It leaves entirely outside the scope of discipline most of the things that the judges “who neither fear God nor respect any human being” are apt to do.  (And even given its limited scope, the Judicial Disabilities Commission is notable for almost never removing judges publicly, although it is reliably rumored that some have been privately given the choice of stepping down.)

 

          And most of the time, so-called alter ego programs, where a member of the bar close to the judge acts as an anonymous filter for relaying complaints, don’t seem to work, although the off-the-record and informal nature of the process makes neither statistics nor certainty possible.  First of all, complainants cannot really be anonymous.  It is the rare grievance so un-fact-dependent that the lawyer involved can seriously hope not to be identified.  And the hunch shared by many lawyers is that the good judges pay attention and the bad judges, the ones who need major overhauls in their approach, brush it off. 

 

          The bottom line is, for any judicial vice short of corruption or dementia, there is no serious regulation.

 

          Publicity would help.  As I have written here before, truth is powerful.  Public indignation could accomplish a lot.  But, except on those occasions when our clients individually run into one of the bad ones, the public doesn’t know.  No, when it comes to identifying the bad judges, the only group presently endowed with enough institutional memory to connect the dots is the bar. 

 

          But we lawyers are not and cannot be big on blowing the whistle.  There are principled and pragmatic reasons.  As a matter of principle, in most contexts, we lawyers are professionally required to treat our judges, good, bad, and horrible alike, with respect.  They embody the rule of law, even when it’s but a poor impersonation.  Only in the most extreme situations are we going to feel comfortable complaining publicly about them.  And on the pragmatic front, insubordination may be dangerous to our professional health.  Bad judges are often vengeful judges.  Sometimes complaining publicly can get you sanctioned.  There has been a string of cases nationwide over the past few years of lawyers disciplined for attacking the integrity of the judges before whom they appear.

 

          We are in the position of the mice in the fable; they know that if someone puts a bell on the cat, every mouse will know when the cat is nearby, and more of them will make it back alive to their holes and families at night.  The problem is, belling the cat is a suicide mission no mouse would be foolhardy enough to undertake. 

 

          These bad judges need to be belled.  But it’s exceedingly tough for us lawyers to do it.  But who, if not we, will bell the cat?

 

          My own nominee for cat-beller would be the press – often, and for reasons just like this, called the fourth branch of government.  Right now, however, the press is doing a lousy job alerting taxpayers to lapses of judicial quality.  Seldom do news stories about courtroom matters comment explicitly on the judge’s professionalism or lack thereof.

 

          To be fair, journalists usually start with the opposite problem from the one we lawyers encounter.  Even today, when most media are owned by bottom-line-obsessed conglomerates to whom dissemination of important information is merely an incidental concern, many journalists remain committed to their mission to inform the public without fear or favor.  But they — even those of them who often write about legal matters — may not know who the bad ones are.  Discerning even judicial rudeness may require some knowledge of the more sophisticated niceties.  And any lawyer who has seen a case he or she is personally involved in covered by the press understands how problematical it can be to get accurate reporting.  It sure helps if the reporter has legal training.  And there are lawyer-journalists out there.  And yet judicial quality problems seldom see print, and even less often see the small screen where most of the public’s news consumption happens.

                                                                            

          Another undoubted impediment is the way media outlets, the employers of reporters, even the legally sophisticated ones, approach the coverage of facts.  We lawyers may know full well that a judge’s possession of judicial temperament or lack thereof is an objective fact like the color of her eyes.  But it is usually not as easy to quantify or measure.  Unfortunately there is no recognized empirical test for judicial incompetence, arrogance, meanness, or laziness.  You will not find Black Robe Fever in the DSM.  And while journalists are not totally leery of reporting on unquantifiable facts, they tend to prefer facts like events, which can be confirmed or disconfirmed by a fact-checker.  Short of a Judicial Disabilities Commission hearing, however, there is seldom an objective event establishing the absence of judicial temperament to report upon.  Reporters do not usually get to write on “soft” subjects like this; their efforts are saved for the more “objective” material.

 

          And unlike similar quality problems in other branches of government, the temperament and approach of individual judges are not often discussed on the op-ed pages, either.  The trial bar isn’t talking, and, as I have said, really can’t talk publicly.  Of course the world is full of disappointed litigants who might like to air harsh words for the judge who didn’t give them what they wanted (or delighted litigants to whom the judge is “a Daniel come to judgment”).  But their objectivity is so suspect they seldom get handed the megaphone.  The not-infrequent critiquing of appellate rulings by law professors and those concerned with social policy is an entirely separate enterprise.  (Well, usually.  One recent and wonderful exception was a dignified, and devastating, commentary in the Spring 2004 issue of Administrative Law Review, an article by Prof. Richard Pierce entitled “Judge Lamberth’s Reign of Terror at the Department of Interior.”  Download it; it is not to be missed.)  We are talking here about justice at the retail level, where judicial temperament probably counts the most.  And the fact is, except under the most unusual circumstances, no one is writing about this subject, not the lawyers, not the parties, not the professoriat nor the punditry.

                                                                                                       

          As a result, trial judges enjoy a practical impunity from public comment few other public officials can claim.  It is impossible to imagine a mayor or a legislator or an agency head with respect to whom almost every potential critic is professionally muzzled.  This is not healthy.  There are big things and little things journalists can do to improve the situation. 

 

          One small thing which might pay big dividends is just a heightening of attention to this issue in the course of existing coverage.  Journalists cover trials; let them finally begin to report how the judges preside over them as a part of the coverage.  (And no, taking potshots once a decade at the judge in a sensationalized trial, as many reporters did at Lance Ito in O.J. Simpson’s trial, does not count.  Also, few points should be awarded for gotchas: for instance sexist comments that fall with a clang on the courtroom floor, as in the Peacock case of a few years back.  They may deserve scrutiny and rebuke, but they’re too easy.  The bigger problems usually lie in the subtler details.)  Even if reporters do not come out and evaluate the good and the bad as such, reporting on the incidents of judicial outrageousness would help, e.g., what the judge said that may not have been substantive but affected the tone of the trial.  There was a nice instance of this in a series in this paper in the last couple of years on bail review, where the different courtroom demeanors of two District Court judges were compared.  We need more coverage like this.

 

          But the larger issue cannot be ducked: judicial approach, demeanor and competence are proper subjects for journalism unto themselves.  They should be covered even when the cases in which the journalists observe them at work are not covered in their own right.  Journalists should be going to the courtrooms to check up on how our judges are doing their jobs.  They should be asking around for scuttlebutt.  They should be reading the rulings in the little cases no one cares about to ask questions like: Is a judge (or a whole court) summary judgment-happy?  Do employers always win discrimination cases before her?  Does the prosecution?  Or is the judge so bent on serving as a tribune of the little people that a big economic interest cannot get a fair shake before him? 

 

          And I know a great source for this great ongoing story, now underreported for at least two millennia.  As I said before, we lawyers all know who they are.  If only the press will ask us.  We’ll tell them.  Maybe not for attribution, but we’ll tell them.

 

Copyright (c) Jack L. B. Gohn


Normandy, Four Kinds of Soldiers, and the Draft: Some Thoughts


Army Rangers on deployment to the Normandy celebrations

Veterans from the British beaches

 


 

          Earlier this month, I was privileged to be present in Normandy for the celebrations of the 60th anniversary of D-Day.  It was unforgettable for all kinds of reasons, including the sheer profusion of soldiers. 

 

          There were, of course, the actual D-Day veterans being honored, old, proud, mostly infirm.  Talking with the vets, you were struck by their ordinariness, how they really had been what historian Stephen Ambrose called Citizen Soldiers.  By and large they were ordinary, decent people who had been torn out of their civilian lives to join an armed force that was being pumped up to several times its pre-war size, and to do something that, by nearly universal consensus, had to be done.

 

          The veterans one expected to see.  What to me at least was a shock was a swarm of faux-GIs infesting every corner of the Channel coast.  Dressed up as 1944 aviators, infantrymen, WACs and nurses, they went careening around the Cotentin in vintage Jeeps or troop transports, waving at passers-by like the liberators in the old newsreels.  It was utterly bizarre, especially when one discovered how few of them spoke English.  At the June 6 ceremony at the Colleville Cemetery, waiting for President Bush to speak, I was sitting in front of four of these impersonators all decked out as paratroopers, conversing loudly among themselves in German.  At sunset that day, on the beach below the Cemetery, I had to explain to a group of Italians wearing the insignia of the 29th Infantry what the blue and grey colors on their shoulder patches actually meant.  Looking like a U.S. GI (virtually no Tommies or Maquis, let alone Wehrmacht, were in evidence) is apparently incredibly chic, even among the children and grandchildren of their former Axis adversaries.  It may be a bit of an unwanted compliment to the veterans, but the dress-up stands as some kind of indicator of how universally the Greatest Generation guys and gals are admired.

 

          Actually running all of the bigger American sector ceremonies were today’s soldiers, real live current members of our volunteer military.  They too were everywhere, and there were numerous chances to talk to them off duty.  Of course the two best-known members of that volunteer military at the moment are Lynndie England and Charles Graner, late of the Abu Ghraib Prison torture and humiliation detail – about as different as could be from the admirable Citizen Soldiers of old.  The burning initial question on my mind was of course whether the security personnel and honor guards in Normandy were cut from anything like the same cloth as the Abu Ghraib torturers.  My sense, after observing the former in a number of settings over a number of days, is that the answer is no.  The soldiers I talked to, including soldiers who had served in Iraq, were frankly appalled and dismayed.  I did not hear a single word uttered in defense or mitigation of the abuses.  Perhaps more important, I was struck by what you might call the moral spit-and-polish of these warriors.  It may be that the ugly spirit of Abu Ghraib chimes nicely with the ugliness rife at the White House and the CIA, but it is not typical of the volunteer Pentagon.  I also spent some time talking with a Special Forces colonel, whose observable easy rapport with his men, distinguished career, and thoughtful perspective on matters both military and not convinced me that we still have the makings of Eisenhowers.

 

          There was another kind of soldier abiding in Normandy too, represented by the thousands of white marble crosses in the Colleville cemetery.  The dead are ever present in Colleville.  They relentlessly refute any notion that war is some great glorious exercise without cost.  The sheer staggering weight of the sacrifices represented by those crosses sets everything in perspective.  These are sons, husbands, fathers who would never come home, broken hearts, wasted education and training, futures that would never be – raw, jagged sacrifice.

 

          The dead under those crosses were part of an army that in some respects we should never expect to see again.  130,000 people were landed as part of the immediate D-Day invasion.  There were a million and a half Americans in England on D-Day-1.  There is unlikely ever to be an American armed force as vast again.  A military colossus of that size is technologically outmoded.  Increasingly, what land war demands is small cadres of soldiers skilled at operating weapons systems, and/or light or special forces to combat guerillas, as opposed to huge masses of infantry for fighting each other in fixed formations or storming fortifications.  But as the headlines proclaim each day, and the hallways of the Veterans’ Hospitals attest, the high tech warriors and the guerilla-killers get killed and injured just like their forbears.

 

          Perhaps the biggest distinction in the end is this: The men laid to rest at Colleville were largely an army of draftees.  The draft had thoroughly mobilized every segment of American society in that War.  The necessary sacrifice was shared among rich and poor, largely courtesy of the draft.

 

          In World War II, unlike Vietnam, World War I or the Civil War, there was little political or legal debate about the draft.  Considering the sheer scope of the enterprise in 1944, it is interesting to speculate on this silence, the dog that didn’t bark, as Sherlock Holmes would have described it.  After all, the draft is definitely a form of servitude which seems entirely antithetical to the life, liberty and pursuit of happiness extolled in the Declaration of Independence and protected (at least as to life and liberty) by the Fifth and Fourteenth Amendments.  In the case of World War II the lack of debate seems easy to explain.  The soldiers of D-Day believed in their war and believed in their leaders.  In Roosevelt, they had a President who had spared them until it was indubitably necessary to do otherwise.  In Eisenhower and Marshall, they had generals with what Tom Wolfe would later call the Right Stuff.  They knew their sacrifice was for a good cause and intelligently administered.

                                                                                                         

          Unfortunately, there is no way to guarantee this unique constellation of a perfectly legitimate war and near-perfect leaders.  Far from it, in fact.  One thing Roosevelt did which no later President has ever done to assure legitimacy was to obtain from Congress an actual declaration of war.  Later Presidents have fiercely guarded and expanded their prerogative to commit U.S. troops to action with at best limited Congressional assent, often obtained with lies and half-truths (the Gulf of Tonkin Resolution and the recent authorization of our Iraq adventures being sterling examples).  American parents can be pardoned for feeling distrustful about committing their precious sons and daughters to wars justified by lies and instigated by liars.  As I have said, my sense of the officers I met in Normandy is we still have military leaders of Eisenhower’s caliber.  But it is an open secret that we lack Presidents like Roosevelt who are willing to put their warmaking to the true constitutional test of open war declarations, or whose honesty justifies the trust that war requires.

 

          Today’s Army is different from the Army of D-Day, in part because our volunteers are a self-selected lot who have chosen arms as their lifetime or at least temporary career.  Of late, there have been calls to erase this part of the distinction and reinstate the draft.  A program is already under way to re-staff the Selective Service System, Presidential advisor Karl Rove has been sending feelers out to Republican lawmakers on the draft, and legislators are talking about it.  And in effect we have already instituted a limited de facto draft by deploying National Guard members and Reservists and denying them the ability to demobilize or resign. 

 

          Two predominant reasons are cited for returning to the draft.  Some, like Congressman Charles Rangel of New York, want to democratize the sacrifice World War II-style.  In Afghanistan and Iraq, the volunteers fighting and dying there are reportedly predominantly from lower-income locales and social groups.  Rangel objects to the sacrifice being concentrated in this way.  And he also no doubt feels that if the wars to which rich draftees were sent were of questionable legitimacy, these rich draftees would use their connections to challenge wrongheaded warmaking.  In other words, if sent into military servitude, these soldiers would exercise their social influence (an influence not possessed by today’s volunteer soldiers) to keep the servitude from being wasted.  So runs the theory.  Others, many of them military insiders speaking mostly off the record, feel that our armed forces are simply too small for the missions on which they are being sent nowadays, and that a draft would fill the ranks of new divisions and air wings and carrier groups in a way that mere volunteers could not be trusted to do.

 

          Those who fear that the marketplace of volunteers is drying up because of the unpopularity of our wars have some anecdotal evidence to support them.  The New York Times recently reported that military recruiters, used to filling their quotas, are suddenly finding they have fewer well-educated recruits, or even fewer recruits, period.  This should be little surprise; in a free market, incentives and disincentives (like distrust of wars and leaders, and unwillingness to die for that which one distrusts) will have an effect.

 

          It is likely, therefore, that the call to return to the draft will grow louder in the coming months.  At this writing, we are stationing centurions in Afghanistan and Iraq, and have soldiers and sailors and airmen posted all over the world.  And America’s worldwide war with elements of Islam, highly lethal and probably unwinnable, will continue to claim the lives or health of large numbers of our military so long as we continue with it.  The demand for recruits looks to be constant if not increasing, at the same time as the attractiveness of being recruited declines.  The draft will inevitably look increasingly attractive to the warmakers.

 

          Calls to revive the draft should be resisted.  There are two overwhelming reasons.

 

          First, as noted above, the constitutional safeguard of declarations of war has been bypassed so often and in so many different ways that it is essentially a hollow guarantee.  Eliminating as well the market forces that limit the size of our military would disable one of the few significant checks on the ability of our leaders to wage undeclared and unpopular war.

 

          Second, conscription is not slavery but it tends in that direction; as such, it is a moral wrong.  The decision whether to submit to military discipline is too important to allow another person or a government to take in one’s stead.  Whether to subject oneself to mortal jeopardy is also a matter of personal right that simply outweighs any claim that any nation can possibly have.  A nation which has nurtured and protected one can have claims, for instance to taxes, but not to that.  And, paramount to all these considerations, the decision whether to take part in a killing enterprise like a war should be the most personal of all.

 

          Of course, this raises the question whether, if all the young men who gave their lives in June 1944 in Normandy had been free not to participate, we could ever have had such an indispensable invasion.  Our nation’s survival at times has depended on people submitting to military discipline, exposing themselves to mortal jeopardy, and being willing to kill for their country.  That is a tall order, taller if people are free not to opt in.  But not, I believe, impossibly tall.  The Revolution was fought mainly without conscripts; Baron von Steuben remarked at the time that in Europe you tell a soldier to do thus, and he does it, but that in America it is necessary also to tell him why he does it.  Eisenhower quoted this comment nearly two centuries later in his memoir of World War II.  Von Steuben and Eisenhower therefore suggest that over our whole history, it is a constant that if you do tell America’s would-be warriors convincingly what you need them to fight for, they will present themselves for service.

 

          The volunteer army is proof of this.  It has worked pretty well to date, particularly in view of the diminished need for bodies in a modern military.  If at this point voluntarism as a means of replenishing even the scaled-back ranks required today seems to be losing effectiveness, the fault probably lies not in the hearts and minds of the potential volunteers, but in the hearts and minds of their leaders, who cannot or will not present a convincing case for enlisting to fight today’s wars.

 

          With good leadership, with Eisenhowers and Roosevelts, young men and women will predictably enlist in acceptable numbers.  With bad leadership, the discipline of the enlistment market will act as a check.  It would be both foolhardy and morally wrong to remove that check.  Vietnams happen when Presidents and generals can rely upon conscripts to fight bad wars.  Normandies happen when Presidents and generals do the right thing, and when Presidents and generals do the right thing, the volunteers will be there.

 

Copyright (c) Jack L. B. Gohn

 


The Intelligent Design Debate: Dogmatists Keep Out


 

Note: When I learned more about the subject of this piece, courtesy of the Kitzmiller case, I changed my mind about most of my conclusions here.  I revisited the subject in Intelligent Design Revisited (October 29, 2007).  Don’t read this piece without reading the followup.


 

          You’re a scientist, given the unique opportunity to visit a faraway planet.  You are told that eons ago, someone visited the planet and left a tribe of chimpanzees and an infinite supply of typewriters and papers, and that no one else has been there since.  When you get there, you find that all the chimps over the generations have been playing with the typewriters.  Papers filled with random letters and punctuation marks are strewn everywhere.  There are occasional words, but they appear to be accidents.  You are soon satisfied that none of the chimps has become capable of speech or abstract thinking.  But then you find one typewriter next to which sits a neat stack of papers.  You see a chimp at the keyboard, just typing the words “THE END” and taking the sheet out of the platen.  The typist proves no more intelligent or literate than any of his peers.  But the papers in the stack, together with the last page that you have just taken from the typist, prove to be a novel, alive with characterization, social observation, wit, plot twists and suspense.  The book is a coordinated work of art in which every piece works with every other piece.

 

          How could this be?  As a good scientist, you profess allegiance to the principle of parsimony: that is, you go for the explanation that requires the fewest assumptions.  But applying that principle here is difficult.  Which is more parsimonious: the notion that sheer random activity happened to produce a literary masterpiece, or that some intelligent force of which there is no other evidence was guiding the typist’s actions?  As applied here, the first assumption requires the conclusion that random activity can, with something far less than an infinite number of tries, duplicate the effects of high and sustained intelligence.  The other requires the conclusion that such an intelligence is at work from outside the typist chimp, even though the mechanism by which such an intelligence could have imposed itself on the chimp is unknown and perhaps unknowable.

 

          That is the dilemma which has given rise to Intelligent Design theory.  In observing the differentiation of species on Planet Earth, scientists confront what many believe are coordinated changes as one species evolves into another: several things happening apparently at once and in tandem.  Evolution may well require simultaneous changes in several parts of a creature’s body.  Such coordinated changes may be akin to the complex interrelated choices that lead to the plotting and phrasing of a novel.  What then is more likely: that somehow the random effect of cosmic rays or similar influences on one specimen’s DNA simultaneously made that specimen and presumably a mate each have all the required changes, or that there was a coordinating intelligence at work?  Traditional Natural Selection theorists hold for the former explanation, Intelligent Design theorists for the latter.

 

          This would seem like grist for a traditional scientific dispute.  But it has become far more.

          From time to time since the dawn of modern Western science in the Renaissance, men and women attempting to explain nature have been told they cannot research or teach along certain lines because it clashes with governing dogma.  Galileo’s work on heavenly bodies was badly interfered with by the Inquisition because it challenged a Ptolemaic cosmology which had been embraced by the Church.  John Scopes’ right to teach Tennessee children about Darwinian evolutionary theory was challenged in the famous Monkey Trial because it was inconsistent with the (themselves inconsistent) stories of creation in the Book of Genesis.  And non-Lysenkoists were purged, sent to the Gulag, and shot in Stalin’s Russia because their views of plant mutations conflicted with Soviet ideology.  Dogma is historically an enemy of science.  And in the recent disputes over Intelligent Design theory, we see the unedifying picture of Dogma vs. Dogma.

 

          Intelligent Design adherents think that there is some kind of design, either “front-loaded” into lifeforms from the time of the beginning of life on this planet, or manipulated over history by some external, possibly Providential, force.  Intelligent Design is not Creationism, which maintains that the world, life included, in all its complexity was simultaneously brought into being as described in the Bible.  Intelligent Design is compatible with, not contradictory to, the predominant scientific consensus that this planet is billions of years old, and that life on it developed over time.  Those who oppose Intelligent Design seem as worried by the identity of those who support it – not the scientists but the social forces behind them – as by the theory itself.  It is an open secret that the Religious Right is much taken with Intelligent Design.  And given the frequent hostility of the Religious Right to much scientific research and teaching, to the free thinking fostered by political diversity, and to the separation of church and state, this is no surprise.  What is surprising and distressing is what appears to be uncharacteristic dogmatism on the other side.

 

          The confrontation between the Religious Right and much of the scientific community has largely played out so far in fights over the school curriculum.  In resisting Intelligent Design, the Natural Selection adherents sometimes seem as closed-minded as the Inquisitors who put down Galileo, or as Trofim Lysenko and his followers, who set back Russian biology for half a century.

 

          The typical knock on Intelligent Design is that it is not science, but faith.  This remark is typically followed by the statement that science deals with what can be proved.  Actually science is about forming hypotheses to explain the natural world and attempting to come as close to proof as possible – which may not be very close at all.  Whenever we hypothesize about origins of the universe or of life within it, for instance, we are probably far from ever being able to prove anything.  We may be able to rule some things out – Creationism, for instance.  Creationism is so inconsistent with everything we know about physics, paleontology, geology, anthropology, and cosmology that the only way we can possibly reconcile it with science is to say that a God who created the world the way the Bible states planted a lot of false clues at the same time He was fashioning the firmament.  Creationism truly is not science but faith, in fact a faith that contradicts science.

 

          But ruling some things out is not the same as proving other things.  Natural Selection (to the extent it rests on random, undirected changes) is certainly widely accepted, but not proven.  And Intelligent Design is not only not disproven, as Creationism is, but seems intuitively to make much sense.  Intelligent Design merely suggests that random mutation is hard to reconcile with the complexity of the biological world, and that the existence of a directing intelligent force is a more probable and satisfying hypothesis.  This is not a matter of faith but of confronting the evidence without preconceptions.

 

          The false argument that Intelligent Design is faith is often followed by the argument that, in the alternative, Intelligent Design is metaphysics – the extrapolation of scientific data deep into the realm of the unprovable.  That is, if there is in fact a “watchmaker” assembling organisms in all their complexity, it is not something we can ever establish, and the organisms will be the same whatever the roots of their differentiation.  This would be fine if those who made the argument did not then turn around and state that Natural Selection is not metaphysics.  Actually it is just about equally metaphysical.  Its proponents assert that it is more “parsimonious” than Intelligent Design.  I am not a scientist, and cannot deeply evaluate this argument.  I would observe, though, that Natural Selection is a theory that nature took the long way around – that it went through a laborious process of uncoordinated trial and error and still arrived at a coordinated whole.  In other words, order arose from massive amounts of entropy rather than from smaller amounts of order.  So where does the parsimony really lie? 

 

          Let us assume for the sake of argument, however, that natural selection truly is more “parsimonious.”  It still does not make Natural Selection theory different in kind (i.e. non-metaphysical) from the Intelligent Design theory.  And the scientific method, with its inclination for parsimony, is itself an unprovable metaphysical construct.  We cannot teach science without teaching a generous helping of metaphysics, or pursue science without making major metaphysical assumptions.  So the metaphysics stick is not really a good one for beating Intelligent Design with, either.

 

          This is all somewhat academic, so to speak, until it really becomes academic in the technical sense – a fight over the curriculum.  There was a celebrated fight recently in Montana leading to a contested school board election.  The Natural Selection-only candidates won.  But the issue will resurface.

 

          The separation of church and state mandated by the First Amendment means that the Biblical account of Creation cannot be taught in public schools except as a cultural and literary artifact.  Intelligent Design reportedly strikes much of the Religious Right as the next best thing, and professedly strikes much of the scientific community as a Trojan Horse for smuggling religion in.  But in fact Intelligent Design does not hypothesize a God at all — certainly not the Christian one; the intelligent force could just as easily be beings from elsewhere in the Universe or outside of it such as Arthur Clarke envisioned in 2001.  It could be something unthought-of and indescribable.  To treat Intelligent Design as if it were nothing more than disguised religious fundamentalism seems pretty fundamentalist itself.  And yes, there is such a thing as fundamentalism of the academy.  As a former academic I can attest to that.

 

          School systems should not be concerning themselves overmuch with which side of the culture wars wins or loses.  Instead the relevant question should be whether Intelligent Design is such transparently bad science that it should not be taught, even as an alternative theory.  When I was a student, there were those who thought that the origins of the Universe lay in a “Big Bang” and those who thought that the Universe held to a “Steady State.”  And I was taught both theories.  More recent developments have reportedly left the Big Bangers in sole possession of the field, and my understanding is that Steady State is no longer taught, and that this is appropriate.  Likewise, in college geology I was taught Plate Tectonics as a theory, and I understand it is now simply taught as fact.  So there is continuing precedent for the proposition that merely allowing a theory into the curriculum does not guarantee any particular outcome to the scrutiny it receives once it arrives.

 

          There are those who say that Intelligent Design is such bad science that it belongs with Steady State and with whatever preceded Plate Tectonics and should not even be allowed in the door.  Because this issue is so culturally and emotionally freighted at this point in time, however, I would be suspicious of those in the scientific community who are so vociferous in insisting that the matter is closed.  The supporters of Intelligent Design certainly seem to include real scientists as well.

 

          Surely the better course would be to let Intelligent Design in for a while – and let the teachers and students have at the debate.  Maybe in a generation Intelligent Design will be as dead as Steady State — for the right reason, namely that the consensus of the scientific community is that not enough evidence supports Intelligent Design.  Maybe not.  But until that day arrives Intelligent Design deserves a fair hearing, no matter who supports it.

 

          What the whole process needs most of all is an absence of closed minds.  And among the things minds should be open to is the notion that there may be simultaneous and coordinated jumps in the evolutionary process, and these may be among the handles by which something intelligently steers the evolution of life.  That something may resemble or even be the traditionally-understood God — or something else entirely.  The discomfort of certain scientists with these possibilities should not deter us from inquiring whether the coordinated jumps on which Intelligent Design is based actually exist, and as to what may have caused them – without preconceptions.

 

          Science is supposed to consider possibilities and not rule them out because of a priori assumptions.  The Pope should not have allowed the Inquisition to silence Galileo in order to prevent the raising of questions about Ptolemaic cosmology, and scientists should not follow that regrettable papal example in order to silence those who claim there is evidence of a guiding force in the Universe.  Maybe the facts really do constitute evidence.  Letting all reasonable views be debated is good religion, good science, and good First Amendment policy.  When it comes to curriculum, it is intelligent design.

 

Copyright (c) Jack L. B. Gohn

 


Inconvenient Laws


Broken Laws: A Three-Part Series

Part III: Inconvenient Laws

Published in the Maryland Daily Record April 30, 2004

 

          It’s 3:00 a.m.  You’re driving on a desolate country road.  There’s not another soul around. You come to an intersection with another desolate country road.  There’s a red traffic light, but no red-light camera.  So if you drive on through, there will be no adverse consequences for you or anyone else.  Do you stop?

 

          If you do, you are obeying a law that has no immediate utility and imposes inconvenience.  You may be acting from a sense that it is prudent to stop for red lights every time, in order to avoid the smallest risk of apprehension or collision, because sometimes one is wrong in assessing the risk of such things.  This is probably a laudable impulse, but not interesting for present purposes.  The alternative is interesting, however: You may be stopping because you have a respect for the law that overrides the claims of your own convenience.  I have already disagreed with the claim that all laws, without exception, are morally entitled to compliance or even respect by us as citizens or as lawyers.  I have argued that the moral hold any particular law has on us as humans is secondary to the hold of all of our other moral priorities.

 

          But here we come to the area where I would contend a serious case can be made that a generalized respect for the law may be the appropriate guide to your conduct.  In many areas of life, and traffic laws are perhaps the finest illustration, there are few moral issues governing the specific contents of the law, but a great moral imperative that there be such contents.  For instance, there is really no moral component to the decision whether drivers should stick to the right side or the left side of the road.  (The British are not known for being more or less moral than we because of their lane choices.)  But lives will be lost unless everyone sticks to one agreed side or the other.  It may not matter morally at a busy intersection who yields to whom.  But it is vital that someone yield to someone, and that the rules regarding who yields to whom are well understood.  And traffic lights are well understood to regulate the yielding process.

 

          In short, this is an area where individual moral priorities that might conflict with the imperative to comply with the law have very little scope.  There may be moments when it is morally intolerable that you should yield the right of way to anyone: if when you come to that stoplight at 3:00 a.m. you happen to be driving a fire truck to put out a barn fire, about the only vehicle you should have to pause for or yield to is an ambulance carrying the affected farmer to the hospital.  But mostly it will not come to that.  Usually when the traffic laws chafe, it is only a matter of inconvenience.

 

          And you know that with such laws – laws governing traffic, jaywalking, littering, panhandling, and the like – there is a high incidence of noncompliance.  People are out there running red lights, stepping between onrushing cars, spitting in the subway, dropping fast food wrappers on the sidewalk, not scooping up after their pooches, all the time.  Usually nothing really bad happens because such laws are broken.  But the net effect of this tidal wave of noncompliance is bad.  Our roads are more hazardous and certainly slower, our environment is more unsightly, and our lives are diminished, because people do not conform to these slightly inconvenient laws.

 

          Here, I would suggest, is the place where one’s level of compliance really establishes which team one is on.  The rule of law is of the most obvious and least debatable utility when it literally and figuratively sets the rules of the road.  If one is committed to the best outcomes for everyone, then one will accept the inconvenient restrictions on one’s freedom that such laws bring.  A high degree of noncompliance states unequivocally to the world that one places the gratification of one’s own shortest-term impulses above the long-term well-being of the community.  And it is extremely hard to justify.

 

          Even here there are no absolutes, of course.  For instance, in the streets of most of our cities, the rules of the game have been rigged against both pedestrians and efficiency.  Barrages of intermittent traffic with the right of way lasting two or three minutes are routinely hurled along urban ways, often leaving pedestrians windows of only a few seconds in which legally to begin a crossing.  Meanwhile, there are gaps in the intermittent traffic which actually allow safe crossing.  This oversimplifies, of course, but is it any wonder jaywalking is almost universal?  Is it any wonder police in most cities, who would not hesitate to cite a red-light runner, never write tickets for the “crime” of jaywalking?  The problem here is that what looks like a morally neutral policy choice has actually become a malign one.  Deference to drivers over pedestrians is so extreme that the laws supporting that deference have lost much legitimacy.

 

          The commons that our streets represent must be regulated, or none of us can enjoy them properly.  But the centralized planning and mechanized controls we rely on as the legal mechanism for allocating the use of that commons can seldom do so as efficiently as the marketplace that exists at every crosswalk.  So even here, there is some room for judgment, some room for a reasonable citizen to delegitimize the laws.  But not too much room.  We need our traffic laws.

 

          In this brief survey, we have seen the paradox that the laws of the greatest importance may have the least legitimacy, and laws of the smallest significance (traffic laws) may have the greatest.  But we have seen that in the end there are truly no absolutes, that every law’s legitimacy is always somewhat contingent, eternally awaiting ratification by a plebiscite which can never be held, which could never be fully binding without an unattainable full unanimity, and which would be somewhat outmoded a minute after it was held.  Living in the real world, however, we cannot do without the protection of the laws, certainly including the laws of greatest importance.  So we must act to some degree as if they were legitimate, even though we know they are not, or not fully, so.

 

          For lawyers, the demand to act as if our laws were all legitimate is more intense.  We cannot easily run with the hare and hunt with the hounds.  Nor is it good for the system that we do so.  We cannot be harping on first principles all the time.  We must move on and make things work.  And things work best when we enhance the legal structure’s incomplete legitimacy with our own behavior.  But I would emphasize immediately that this rule is a matter of our behavior only.  It does not mean speaking no ill of the laws or of those who make and administer them.

 

          Indeed, it is one of the most powerful legitimizing features of our system that it permits, even encourages, our criticism of the system itself.  Because we understand that all laws are, to varying degrees, broken from the start, we must strive continuously to mend them, to make them more worthy of us.  We do that by recasting them to approximate ever more closely what that hypothetical plebiscite would ratify.  It does not much aid the process, and may in fact hinder it, to foster an irrational reverence toward the laws that exist.  Such an irrational reverence – call it legal jingoism – is a common but unfortunate attitude.  If we are to see life steadily and see it whole, in E.M. Forster’s phrase, we must fight against the seeds of that jingoism in ourselves.  As Justice Holmes correctly observed, the law is not a “brooding omnipresence,” nor is it divinely ordained.  It is just an artifact, a way of organizing ourselves adopted by flawed humans.

 

          In the end, the laws are mainly written in people’s minds and hearts, not in the statute books or the case reports.  The written laws deserve reverence, in and of themselves, only to the extent they mirror the real law that exists in us, the people.  When the laws are broken, that is, out of step with us, we lawyers in particular have a responsibility to recognize the brokenness and to try to mend it.

 

          It should and must be an ongoing struggle: eternally imperfect and unfinished people contending with eternally imperfect and unfinished laws.  It isn’t pretty; it isn’t orderly, predictable, or in keeping with any idealized civics-class vision.  But it is the hand we’ve been dealt.  We must make the most of it.

 

Copyright (c) Jack L. B. Gohn

The Big Picture Home Page  Previous Big Picture Column  Next Big Picture Column

Previous Broken Laws Column

Debatable Laws

The Big Picture Home Page  Previous Big Picture Column  Next Big Picture Column

Previous Broken Laws Column  Next Broken Laws Column

 

Broken Laws: A Three-Part Series

Part II: Debatable Laws

 

          Let’s say you want to smoke pot.  Unlike some, you have no moral problems with it.  You’re informed about the health risks, and, having weighed them with reasonable care, you decide the prospect of the fun outweighs the hazards.  Why not indulge?  It’s against the law, of course.  But what hold does a law that you see no point to have on you?  Not an easy question to answer, as it turns out.

 

          In the Declaration of Independence there is language we all know to the effect that governments “deriv[e] their just powers from the consent of the governed.”  And this is, as Jefferson so well expressed it, “self-evident.”  And if it’s true that the consent of the governed is indispensable to the “justice,” i.e. the legitimacy, of the “powers” behind the law, then, at first glance, the pot laws – along with all others – don’t seem to be the product of “just powers.”  One thing you know for sure is that no one ever asked you for your consent to this particular act of government power, or even to the constitutional scheme from which it emanates.

         

          No one now alive was part of the electorate that ratified the U.S. Constitution.  Who knows what percentage of Marylanders alive today voted on the proposed Maryland Constitution of 1968?  Not too many, we can be sure.  With rare exceptions, voters never get asked directly whether they approve of a particular piece of legislation or court decision.  So in truth no one has ever asked us whether we even wished to be governed by the body of laws that govern us, let alone whether we have consented to the imposition of penalties for pot.  So if actual explicit consent of the governed is the criterion for the legitimacy of laws standing between you and your Acapulco Gold, there’s no moral reason to refrain.

         

          There’s a potential response out there, what we might call consent-by-estoppel.  That is, we all benefit from the roads, the schools, the armed forces, the court system, the governmental regulation of trade and the environment, etc. and only if we choose to opt out of all those benefits do we have the right to say that the government does not legitimately govern us.  We could, if we chose, move to some deserted rock in the middle of the ocean and declare ourselves free of any government.  So goes the argument.  It’s not a very convincing argument, however.  We didn’t ask for the benefits, nor did we ask for them to be coupled with the whole system of demands upon us that the law makes, so why should we have to go to extraordinary lengths to avoid them in order not to be estopped?  Besides, why should everybody be estopped when no one save our legislators has been consulted?  Estoppel implies two parties, one who detrimentally relies, and another who induces the reliance.  But those who relied by setting up the system in the first place are all dead and gone.  There’s really no one there any more whose reliance should estop us.

 

          Of course the converse of the argument, i.e. the position that each of us has a right to be consulted on everything, is also unconvincing.  There is no practicable way to hold a rolling plebiscite on the legitimacy of the system and each of its emanations so that each person affected by it is given the individual choice to reject it. 

 

          So where does that really leave us?  We cannot, as a practical matter, ratify all the laws and the system, but we must have a system.  We are, in short, forced to act as if our system were properly legitimated, when in fact it holds at best only an approximation of legitimacy.  That approximation is provided by our legislators and judges who are deputized to act for us.  But again, the system under which they are deputized was never submitted to the living for ratification.  They do ensure that laws are passed according to the rules of the constitutional game.  But since the living have never agreed to those rules, the mere consistency of the laws with those rules adds no legitimacy that wasn’t already there.  Let us be blunt, therefore.  When you get down to it, so long as you agree with Jefferson that the consent of the governed is necessary to the legitimacy of governments, the legitimacy of the laws passed by those governments is a matter of sheer unverified, unverifiable, and unstable convention.  Unstable, because at any point enough of the governed might change their minds to render a previously legitimate law illegitimate.  As Luigi Pirandello put it: Right You Are, If You Think You Are.

 

          Could Jefferson have been wrong?  Can there be legitimacy without consent?  Certainly the majority of governments over the ages have been justified by notions other than Jefferson’s.  But such governments have tended to be despotisms and theocracies.  It is doubtful those notions are even as palatable to us as Jefferson’s.  We probably have to take Jefferson as a starting point, like it or not.

 

          And so back to your craving for pot.  Your elected representatives have voted that you may not indulge; we have seen, however, that their directives are of uncertain legitimacy at best.  I say at best, because even if there were a plebiscite tomorrow on the legitimacy of our system, and everyone but you voted in favor of it, there would be one member of “the governed” who did not give his or her “consent”: you, a majority of one, in Thoreau’s famous phrase.  After the plebiscite, the system would be legitimate for everyone else, but it would still not be legitimate for you personally (unless you had consented in advance to be bound by the will of the majority).  In fact, however, the last such plebiscite was conducted over two hundred years ago and probably won’t ever be conducted again.  We literally have no idea whether a majority today would set up a system under which legislators possess the right to control intimate decisions like whether we put tetrahydrocannabinol into our own bloodstreams.

 

          It’s a reasonable step from these premises to the view that the legitimacy of any particular part of the law will therefore depend not on the way it was passed, but on the actual substance of that law.  And as it turns out, laws that restrict people’s pleasures are among the hardest to legitimate under a “consent of the governed” standard.  Take, by contrast, laws forbidding murder.  Most of us agree that there should be laws against murder, an agreement shared even by most of those who commit it.  (If you’ve ever represented anyone on death row, you know this.)  These laws really do command “the consent of the governed.”  Not so drug laws.  Typically, the only people who really agree with these laws are the ones who would not wish to disobey them, while the people whose behavior they seek to control truly do not give their consent.  (Which is why securing even the limited obedience these laws do command from those inclined to disobey requires a considerable expenditure of our limited police resources.)

 

          Now I am not saying that it is wise or unwise, moral or immoral, for society to have laws against drugs.  I am merely questioning the legitimacy of such laws – although I would maintain that it is not usually wise or moral for a society to have too many laws that do not possess obvious legitimacy in the form of popular assent.

 

          Meanwhile, the degree of legitimacy a law commands is a constantly changing thing.  If as I maintain the true index of legitimacy is the breadth of popular support, including without exception support among those regulated or burdened by the law, then obviously at any given moment, the level of assent may drop below some critical mass.  This mass is not quantifiable; it’s more like pornography: you know it when you see it.  This, in essence, is exactly what the Supreme Court recently decided about the laws criminalizing homosexuality – that the support of the people for these laws had reached such a nadir that few states even had such laws on the books.  At that point, the right to be gay became protected by Due Process – a turnabout that enraged Justice Scalia, but showed a lot of common sense, as a matter of jurisprudence.  Due Process should protect you from laws that are illegitimate, whatever the formalities that attended their passage, even if those laws were legitimate only yesterday.

 

          By lighting up that joint, then, you are manifesting your lack of consent to the pot laws, and, pro tanto, delegitimizing them.  You are, if you will, sitting in one small judgment of the legislators who passed these laws.  From a moral standpoint, you may not feel you should.  You may feel that it is better that laws be changed through debate and through constitutional channels than through scorn and desuetude.  But then again you may not; this is truly a matter between you and your conscience.

 

          It is important to understand, as a matter of conscience, that what you do does have these implications.  Nibbling on those Alice B. Toklas brownies rejects the authority of the state to pass these laws, and substantively disrespects the legislature’s attempt to address concerns about public health, about the crime and public corruption that always attend the drug trade (separate from the crime of carrying on a drug trade itself), about the lives ruined by addiction, about the economic costs that drug intoxication imposes.  It is a dynamic act, placing your weight on the balance scale against the legitimacy of these laws and the concerns that motivated them.  You may well feel that this is acceptable.  Many do.  But whatever you do in this regard is not trivial.

 

          Thus it is with all debatable laws.  There are always serious policy arguments on both sides.  With pot laws, for instance, the flip side includes the value of individual autonomy, the medical benefits of marijuana for some, and the destructiveness of the war on drugs that has too often become a war on drug users, not to mention the absurdity of banning pot while licensing alcohol and tobacco.  If you break debatable laws, you assert your moral authority to enter your own voice in the debate, your unwillingness pro tanto to be a subject of the legislature or the courts, your status as their peer.

 

          And at that point there is no “should” about the issue.  You have to decide for yourself whether to assert that status.  God and history will judge.

 

          But now let’s change the hypothetical.  Let’s say you’re a lawyer, sworn to uphold the laws, all of them.  Even the ones you disagree with.  In my last column I suggested that there are some laws so bad that no oath of fealty, even a lawyer’s oath, can possibly require us to confess ourselves morally bound to them.  What I call debatable laws are not in that category.  I would suggest that being a lawyer should make a difference here.  The law may not really have the absolute moral authority some claim for it, but our body of laws does tender a structured and serious approach to ordering our society.  If we profess the law, we are accepting the tender, and selecting that structure as our primary template for moral and ethical choices.  Our professing the law goes deeper: we depend on the integrity of that structure for our daily dealings: advising clients, transacting their business, assisting them in their conflicts, agitating within the structure to change it.  Managing to possess personal and moral integrity is a challenge at all times.  Trying to do that, not to mention maintaining personal or professional credibility, while simultaneously supporting and undermining the legal structure, is almost impossible.  Either we profess the law or we do not.  And if we do, then except for the laws that are truly unforgivable, we need to conform to pretty much all the laws, if only for mental self-preservation.

 

          Of course, the crazier or sillier the law, the less the insult either to the law or to the lawyer’s psyche when the lawyer breaks it.  Many reasonable people consider the pot laws as silly as they come.  Their presence on the books itself arguably undermines the legal structure; an individual lawyer’s apostasy in toking up cannot possibly do as much harm, and may in rare cases advance the interests of the structure by delegitimizing the laws.  The interests of the structure can also be advanced at times by civil disobedience, e.g. violating trespass laws as a part of a political demonstration against other, arguably unjust laws or policies (the trespass laws themselves being generally undebatable).  But in general lawyers do good neither for themselves nor anyone else when they do not personally conform to the law.

 

          Next time: traffic lights.

 

Copyright (c) Jack L. B. Gohn

 

The Big Picture Home Page  Previous Big Picture Column  Next Big Picture Column

Previous Broken Laws Column  Next Broken Laws Column