Guardianism and the Law

Throughout history, most laws were passed piecemeal to deal with specific problems as they arose, often in emergencies. Until what we might call the constitutional era–from the Magna Carta onward–there was no general framework against which new laws could be judged. “Legal memory,” an informal consensus among guardians about why old laws had been passed and why some worked and others didn’t, guided lawmakers, magistrates, and juries in interpreting and enforcing statutes. Interpretations or new laws that drifted too far from this consensus provoked guardian or citizen resistance. Our ancestors may not have been legal experts, but they knew which laws they liked.

One idea central to constitutions is that any law, no matter how powerful its maker, is subject to review against some higher, transcending authority. Unfortunately, to guarantee their monopoly on coercive power, sovereign guardians such as kings–Hobbes’ leviathan incarnate–often had to exempt themselves from the laws they made for others. They became, in essence, citizens with unlimited rights: vessels into which citizen-dependents deposited their natural rights so that their guardian-protectors could exercise those rights on their behalf.40

The leviathan concept satisfied most people most of the time, but certain nobles (who often had more wealth and power than the king they served) sometimes chafed under a sovereign’s arbitrary laws. Gradually, these nobles demanded not only more rights but also that the king himself behave as if those laws, particularly the ones protecting individual rights in property, bound the monarchy as well. This was the essence of the Magna Carta: the “great charter” that became, more or less, the basis for the constitutions of virtually all English-speaking countries.

As English society evolved and significant property took forms other than landed estates, rights originally tied to property were extended to other areas. Liberties became attached to individuals, not just to positions and hereditary titles–a crucial step toward true democracy as well as republicanism. By the sixteenth century, the “unwritten” part of England’s constitution had become a significant part of its law. Political rights meant not just access by ordinary people to the processes by which collective decisions were made (mostly through the election of parliamentary representatives), but also access to state protection when the owners of property went too far in exploiting tenants and workers.

The real significance of this evolution had less to do with grass-roots democratic empowerment than with the linking of personal liberty, property, and political choice into a seamless whole. In the constitutional age, individual rights–civil and legal–became alternative means of creating, allocating, and distributing wealth, as well as of making political choices. Just as important, the forced sharing of rights previously held exclusively by those at the top of the guardian pyramid tended to undermine the principle of guardianism itself. This weakening of what had previously been a rather monolithic power structure–guardians over dependents–gave rise not only to more democratic institutions but also to an appetite for democracy that has steadily increased.

Of course, democratic leveling did not eliminate all social, economic, and political injustice, nor did it seriously threaten guardianism. Possessing a right did not guarantee one’s ability to use it. In many cases, oppressive capitalist relationships merely replaced oppressive feudal relationships. But increased freedom of opportunity did increase the amount of participation and consent available at any given level in society. Depending on the international and economic situation, individual rights might be emphasized in one era, while collective rights might be emphasized in another. Thus, democratic participation in a rights-oriented society seldom experiences equilibrium for long; rather it oscillates continually, like prices in a stock market, losing ground one day but making up for it the next, compounding its returns over many years. In the very long run, if the underlying constitution is strong, the general level of meaningful participation rises steadily, its swings amplified or dampened by the actions of its guardians.

This raises the all-important question of “legal positivism” versus “legal realism”: how much power should judicial guardians have in unilaterally interpreting a constitution that is meant to apply to everyone?

At the time the U.S. Constitution was framed, guardians thought the law was, and ought to be, objective and politically neutral–an idea called legal positivism. That is, the law could and should be understood in its own terms, according to timeless legal principles, much as scientists understand nature by discovering and applying natural laws. A century and a half after the U.S. Constitution was ratified, however, both a great civil war and a devastating economic depression had changed this view considerably. Public guardians in a powerful central government now imagined that the state’s social contract was not with the demos as a whole, but with specific groups of citizen-dependents, as well as with traditional economic guardians. Legal realism, as this idea was known, had by the end of the 1930s largely replaced belief in legal positivism in the minds of liberal theorists. While continuing to dress the law in trappings of objectivity, proponents of legal realism knew that most real-world judges exercised great power in deciding which legal precedents and principles would apply in important cases. These judges also determined how those principles and precedents were interpreted, drawing ideas from such extra-legal disciplines as psychology, sociology, and political science. Just as important, their decisions determined how law and legal theory would be taught in law schools, influencing future generations of public and private guardians.

This new view of constitutions and legality, also called sociological jurisprudence, held that laws are created mostly as a result of struggles between competing interests. For example, many laws created to defend guardian hierarchies (such as hereditary aristocrats and their rights in property) gave way, at the end of the 1700s, to those protecting the rights of individuals, regardless of rank or class. By the nineteenth century, this struggle was waged mostly in markets (largely in the form of contract law) and in the institutions of representative democracy. By the late twentieth century, identity groups and special interests–especially economic cartels–had co-opted many of these rights to serve their own purposes. The result has been a flood of “judicial activism” that conservative and liberal guardians alike attack or applaud, depending on whose ox is being gored.

By the early twenty-first century, many courts no longer merely arbitrated disputes under the law; they had become a streamlined and relatively low-cost way of making law–even influencing the course of major elections, as in the hotly contested presidential election of 2000. In that instance, the decisions of an arguably partisan Florida Supreme Court were reversed by an equally partisan U.S. Supreme Court, giving the presidency to a candidate who had lost the nation’s popular vote. While no real corruption was alleged, the losers complained that the winner “stole” the election (meaning he won it undemocratically), while the winners said the system worked the way it was designed. Both, of course, were right, because the system–especially the archaic Electoral College–is designed to be undemocratic.

Most political guardians have come to depend on judicial activism to protect and advance their programs–some more openly than others. Even before the 2000 Florida election debacle, California governor Gray Davis, a public guardian for most of his adult life, startled observers by unashamedly declaring that the judges he appoints “...should reflect my views. They are not there to be independent agents,” complementing an earlier statement that the job of the state legislature is “to implement my vision.”41 According to California’s chief political guardian at the time, apparently, l’état c’est moi. Such attitudes not only politicize the legal system, they also make the political system itself more litigious–less trustworthy and less trusted.

In the Gilded Age of America’s industrial robber barons, economic arch-guardians like Andrew Carnegie and Henry Lee Higginson knew that the demos could not be held at bay indefinitely by legalisms and social palliatives. They believed that the proper administration of wealth was the central problem of their day and that a citizen’s title to property did not entitle him to do with it as he pleased. Along with other philanthropic leaders, they concluded that the ideals of service, stewardship, and cooperation should guide a community’s decisions about what to do with its resources. Such views challenged the classical theory of markets, including the theory of rents, that had guided judicial reasoning up to that time. They acknowledged that, historically, all civilizations eventually face a crisis over land reform–the need to reconcile domain with dominion.

America postponed that crisis for almost two hundred years by annexing new territory, expanding opportunities for home ownership, and then–when all else failed–extending certain rights traditionally associated with property to non-propertied citizens. None of these “solutions,” however, has thrived in the economic and political order of the twenty-first century, with its rapidly growing multicultural populations and expanding global corporatism–developments that have caused many to wonder whether there isn’t a better, fairer, and more natural way for human beings to govern themselves.

40. Johnston, David. The Rhetoric of Leviathan: Thomas Hobbes and the Politics of Cultural Transformation. Princeton: Princeton University Press, 1986.
41. Washington Bureau. “Davis Wants His Judges to Stay in Line,” San Francisco Chronicle, March 1, 2000.
