Who Are Guardians and Do We Need Them?
Plato believed the best societies were crafted by the best people. He wanted citizens of good birth to be selected in infancy and raised as philosopher-kings. He called these special people guardians.2
He assumed that because of their superior qualities and education, they would pass good laws, administer justice fairly, and look after the lesser citizens around them.
In this discussion, we’ll take Plato’s lead and call a guardian anyone who acts in the name of, on behalf of, or in place of another citizen or group of citizens, whether or not it is with their consent.
This latter distinction–guardianism by custom or default, and not by choice–is not always obvious, but it’s always significant. U.S. Congressional representatives, state legislators, and members of parliament, for example, are obviously guardians–we elect them for that purpose; but so is anyone who holds another person’s “power of attorney,” such as a relative making health-care decisions for a comatose loved one. On the other hand, guardian power does not extend to a lawyer representing a client in court. Although the lawyer is a “representative,” he or she is told by the client how to plead, even in criminal cases where life or death is at stake. The client also makes all other major decisions about a case, such as whether or not to accept a proffered plea bargain or a settlement in a civil suit.
In all these instances, the people “guarded” have a say about who will make these important decisions or, in the lawyer-client relationship and when citizens vote on a ballot initiative, they can make such decisions themselves. But what about those guardians we never meet, choose, or know about, but whose decisions still control major aspects of our lives?
Economically, the executives and board of directors of a corporation act as guardians for the stakeholders in that company–not just share owners and employees, but also customers who rely on (and sometimes trust their lives to) product quality, families who depend on employee paychecks, and neighbors who must live with those guardians’ decisions that result in noise, toxic waste, and traffic congestion. Similarly, the owners of real property (including proprietors of small businesses) are its guardians when it comes to determining its use and the disposition of its yield, regardless of the preferences of other stakeholders, such as tenants (who may make the property their home) or hired caretakers, who derive a living from its maintenance. Public statutes and business or environmental regulations–usually made by other guardians–may constrain or influence the actions of these decisionmakers, but it is the age-old laws and customs of property that make them platonic guardians.
This raises the crucial question of guardian functions versus guardian roles. A “guardian function” is merely stewardship. Every society must make practical choices–in general and in particular–about how to use its shared resources, how its citizens should behave, and how its rules should be enforced. Even in the most committed direct democracy, such as Athens in its Golden Age, not every citizen could have a meaningful say in all issues at all times, so key guardians–mostly executives–are selected to enforce the collective will. A “guardian role,” however, is something else. It reflects the platonic belief that guardianship is, and must be, performed by exceptional individuals–whether those exceptional qualities are inbred, acquired through education, a gift of God, an aristocratic right, or imputed by popular election. A jury or commission can perform the guardian function very well, but only a professional politician or landlord or patriarchal chieftain can fulfill the platonic guardian role.
Guardianism, therefore, is the philosophical belief that only one or a few people must control a society’s resources and regulate the behavior of its members, with or without their consent.
At the extreme non-consensual end of guardianism are warlords who seize power and property at the point of a sword. Their strength, they claim, is all the proof they need of their superior status and justification to rule.
At the more participative end of the spectrum are elected representatives who operate within the framework of a constitution. They come into power “by the consent of the governed,” in the words of the U.S. Declaration of Independence, but that is the last time the governed need be consulted about anything. After election, representatives “represent” only themselves and are famously free to vote their own consciences–or anyone else’s.
Thus despots and elected representatives form one continuous arc of guardianism, the difference (though the latter is certainly preferable to the former) being only one of degree. While elected legislators have more moral authority than military dictators, both maintain their special status, and perform the guardianship function, by excluding everyone else from setting agendas and making binding decisions, reserving those vital processes for themselves.
Although Plato never saw his ideal republic in action, his beliefs have appealed greatly to those who wish to rule and those who are willing to be ruled by others. What made the whole notion of guardianism–at its heart so undemocratic–so seductive?
Guardians and their boosters rest their system on three major premises: that ordinary people lack the capacity (training, experience, maturity, etc.) to self-govern; that superior people (guardians in the platonic sense) make consistently better decisions than ordinary citizens; and that a majority of ordinary citizens, if allowed to govern themselves, will tyrannize the minority. Although accepted as gospel by autocrats and democrats alike, none of these rationales holds water.
Despite age-old guardian assertions that direct democracy is social suicide, experience suggests otherwise. Ordinary people can–and do, when given the chance–manage their individual and joint affairs quite nicely, and without the power of the sheriff to compel obedience to their shared decisions. Even Montaigne, no particular friend of participation, observed that while “storming a breach, conducting an embassy, ruling a nation are glittering deeds,” it is even more remarkable and difficult to “live together gently and justly with your household” and cited Aristotle (Nicomachean Ethics) when he said that “private citizens serve virtue as highly and with as much difficulty as those who hold office.”3
As it turns out, the vast majority of adults–all raised in a culture soaked in guardianism–conduct themselves as reasonable, responsible, and law-abiding citizens without a continual appeal to guardians. If they didn’t, our economy would be in shambles (few people would hold jobs–why work when you can steal?–or voluntarily pay their bills and taxes); our cities, suburbs, and farms would be in ruins (why maintain property when any band of thugs can take it away?); and every street corner would be dominated by bullies. Certainly, there are exceptions to this age-old pattern, as when societies collapse after a war or when a repressive regime is removed, but these upheavals are usually guardian-induced, or follow the fall of an entrenched guardian class, only proving the rule.
The bottom line here is that while guardian functions like tending infrastructure, foreseeing and accommodating collective needs, and resolving conflicts between groups or individuals are essential, there is nothing essential about having them performed by guardians–at least in the platonic sense. Indeed, as we will see, the performance of these functions through self-selected guardians (yes, even elected guardians are self-selected: they must decide to run for office) often promotes the kinds of contention and disorder guardianism is supposed to prevent.
The second pillar of guardianism holds that one or a few superior (well-educated, mature, and highly trained) people will make better decisions about most things–especially important things–most of the time, and should therefore enjoy positions of authority over their less gifted fellows; but this, too, flunks the tests of both logic and experience.
First, unless the people fulfilling guardian roles are super-beings from some alien planet rather than members of the demos (the politically active population), the abilities of the few can never exceed the aggregate abilities of everyone–a logical impossibility. That is, if guardians are drawn from the same population they govern, their knowledge, skills, and judgment in all areas cannot exceed the knowledge, skills, and judgment of the entire population, of which they are a part, when that population acts on its own behalf. In fact, when guardians act in isolation, their abilities must necessarily be less. After all, no person is an expert in all things; and to become an expert in one or a few fields, it is necessary to forgo education and experience in others. Further, part of becoming an expert is learning to view the world through an increasingly narrow focus, and this can lead to distorted perceptions–even arrogant disdain–for areas in which that expertise does not apply.
What guardians really mean when they say that ordinary people are not fit to govern themselves is that common citizens lack training and experience in guardianism–in mastering and exploiting a system that is designed to exclude others from sharing significant political and economic power and in brokering deals among that anointed few–and in this they are absolutely right.
The premise of the “flawed and unfit common man” is also refuted by statistics. The eighteenth-century philosopher and mathematician the Marquis de Condorcet offered quantitative proof that group-based decisions were usually superior to those arrived at by even highly qualified individuals.4 He assumed that average citizens were reasonably intelligent, but not infallible. This meant that, in the long run, their common-sense decisions–about who is guilty of a crime, for example, or what crops to plant, or whether a new toll road or dam should be built–would turn out to be the best choice at least half the time: say, a bare minimum of 51 percent. (If it were otherwise, and people failed more often than they succeeded, civilization would never have gotten off the ground and we would still be living in caves.) Under the laws of probability, he reasoned, the chance of a beneficial outcome increases in proportion to the number of citizens deciding the question. In a group of 100 citizens, for example, the probability that 51 (a bare majority) will arrive at the best decision is a modest 52 percent. If the majority rises to 55 citizens, however, the probability rises to 60 percent. With a majority of 60 people, the probability zooms to 70 percent–a remarkable and significant improvement.
Hence, Condorcet argued that the best possible choice depends not just on democratic action, but on creating the largest possible demos and striving for consensus. But his analysis didn’t end there: as citizen education increased, so did the probability of their arriving at the best decision–and at a rate much faster than before. If education raised the average citizen’s chance of making a good decision by a mere 4 percent–say, from 51 to 55 percent–the probability that the same bare majority (51 citizens out of 100) would arrive at the best decision soars to an amazing 60 percent. This doesn’t mean that one or a few individuals couldn’t make a decision of the same quality, only that there is no statistical inference that they would–and when the realities of guardian politics and the psychology of ambitious rulers are considered, good reason to suppose that they would not.
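Condorcet’s reasoning can be verified with a short binomial calculation. The sketch below is an illustration, not the source’s own method: it uses plain Python, an odd-sized group to avoid tied votes, and exact binomial tails, so its percentages will differ slightly from the rounded figures quoted above, but both of Condorcet’s trends (bigger groups and better-educated voters) emerge clearly.

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters,
    each correct with probability p, picks the better option."""
    threshold = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, n + 1))

# Trend 1: with p = 0.51, the group's reliability climbs as it grows.
for n in (101, 501, 1001):
    print(f"n={n}:  P(majority correct) = {majority_correct(n, 0.51):.3f}")

# Trend 2: raising each voter's competence from 0.51 to 0.55 (Condorcet's
# "education" effect) improves the group far more than the 4-point gain suggests.
for p in (0.51, 0.55):
    print(f"p={p}:  P(majority correct) = {majority_correct(101, p):.3f}")
```

With 50/50 voters the group is no better than a coin flip, which is why Condorcet’s argument hinges on citizens being right slightly more often than not.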
“Condorcet’s Rule” seems to have been validated later by what became known as the “Flynn Effect,” in which average IQ scores rose steadily during the twentieth century as mass education took root in industrialized nations. Today, the average American’s IQ score is equivalent to the top 2 percent of test-takers at the end of the 1800s–paralleling, coincidentally, the rise of, and demand for, direct citizen participation through state initiatives, referenda, and progressive management reforms that occurred during this same period.5
While Condorcet’s Rule and the Flynn Effect make a powerful case for more democratic processes, they also draw attention to the so-called super-majority problem wherein a die-hard minority can frustrate the will of a large majority when such super-majorities are required, such as the two-thirds majority required for Congress to override a presidential veto. Such rules, while intended to maximize the chance for a good decision, paradoxically increase the chance of making a bad one, since they give disproportionate power to what is sometimes a less informed and intransigent few. As it turns out, the root of the super-majority problem has nothing to do with statistics, IQs, or the competence of those casting the votes but can be traced to our time-honored–yet very damaging and dangerous–system of one-time, win-lose, binary voting, a subject we’ll examine shortly. Suffice it to say now that majority (or even super-majority) rule is not consensus; and when it is applied habitually in one-time, winner-take-all contests of power, the results differ only slightly from decisions made by guardians to the exclusion of everyone else.
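The arithmetic behind the super-majority problem can be made concrete with the same binomial model. The numbers below are illustrative assumptions, not figures from the text: a 100-member body in which each member independently favors the better option 55 percent of the time, compared under a simple-majority rule (51 votes) and a two-thirds rule (67 votes, so 34 holdouts suffice to block).

```python
from math import comb

def tail(n: int, k_min: int, p: float) -> float:
    """Probability that at least k_min of n independent voters,
    each favoring the better option with probability p, vote for it."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

n, p = 100, 0.55          # hypothetical body leaning 55-45 toward the better choice
simple = tail(n, 51, p)   # simple majority: 51 votes pass the measure
two_thirds = tail(n, 67, p)  # two-thirds rule: 34 holdouts can block it

print(f"Passes under simple majority: {simple:.3f}")
print(f"Passes under two-thirds rule: {two_thirds:.4f}")
```

Under these assumptions the measure usually passes by simple majority but almost never clears the two-thirds bar, which is the sense in which a super-majority rule hands the decision to an intransigent few.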
This leads us to the guardian’s final argument for their own indispensability: the power of a citizen majority to oppress a minority.
Guardians love to equate mob rule with direct democracy, which is like comparing a street mugging to taxation. Both redistribute income but their processes are quite different, and neither mob action nor participation has anything to do with the power of an assertive majority–elected guardians or voters acting directly–to push around a minority. If a majority can pass laws to terrorize a minority, it makes no difference if that majority is composed of elected representatives (a parliament, congress, or state legislature) or their constituents. What prevents any majority from abusing a minority is not guardianism but constitutional rights, procedural checks and balances, due process (in legislative as well as judicial functions), and the character of those who both make and live with the law–and these factors apply no matter how big the body of lawmakers gets.
In this regard, guardians forget that all the tools that help make representative government work are also available to help direct democracy succeed as well. As Robert Dahl reminds us, no legitimate government ever rightfully does everything it can do, any more than trustworthy police officers do all they might do simply because they have a badge and a gun.6
How, then, can we satisfy our need for guardian functions without surrendering ourselves to guardianism?
One answer lies in human nature.
Rousseau, one of the first modern champions of consent, wrote that we’re born free yet find ourselves everywhere in chains.7 Actually, the reverse is true. Human beings are born into abject dependence on a special set of guardians–our parents–then gradually outgrow our need for them. And what we experience individually, we yearn for collectively.
Parental substitutes–authority figures, rule-makers, and lawgivers of every kind, from teachers and police officers to bosses and bureaucrats–supervise each step of our journey to human maturity. They become our role models and, after we’ve achieved a degree of autonomy, many of us become guardians ourselves: business owners, executives, landlords, politicians, and of course, parents to children of our own. On the surface, this venerable cycle would seem to say something about the inevitability, if not desirability, of guardianism as a fundamental human institution; but again, habit does not always reflect necessity.
If we look at guardian roles as a hierarchy, with the most powerful guardians on top and the least powerful on the bottom, and accept both that guardians have guardians (every boss has a boss, even if it’s a committee of other guardians) and that guardians in different areas must often cooperate to achieve their goals, then human nature is actually more closely aligned with networks than with guardianism. In other words, although hierarchies of some kind are inevitable in human society (we will always admire and voluntarily defer to people of extraordinary merit, for example), a hierarchy of guardians–an exclusive, compulsory power structure wherein a few are given license to coerce the many–seems nowhere explicit. Such hierarchies are a dominant group preference, and while they may appear all too often, there is no compelling biological or sociological need for them. In fact, a rational look at how human beings actually mature–individually and together–makes such non-consensual hierarchies seem not only unjustified, but also counterproductive and foolish.
The reason guardianism continues into adulthood–as it has throughout history–seems twofold. First, as we mature, we progress from total dependence on parents to interdependence with peers. Some of these peers, through indoctrination or inclination, exploit our prior dependence by assuming the role of surrogate parents, then conning–or bullying–the rest of us into believing it is good and necessary. While no one doubts that children need guardians, the object of maturation is autonomy, including the ability to live cooperatively with others. If we allow certain peers to assume quasi-parental authority over us, it is not because nature selected them to be bosses and lawmakers and the rest of us to remain child-like dependents, but because those more aggressive and guileful peers found it advantageous, rewarding–and possible–to suppress our natural drive toward autonomy and turn it into a dormant trait. Social networks, including voluntary hierarchies based on merit, benefit everyone. Guardian domination of these networks co-opts their benefits and reserves them for an anointed few. In exchange, guardians promise us parental protection and the bliss of an eternal childhood–even though we long ago shed such illusions and have otherwise assumed the responsibilities of full-fledged adults.8
Second, because human beings are social animals, a big part of growing up means learning how to make semi-autonomous and interdependent relationships work. We become moral people because morality facilitates cooperation and helps us build useful, mutually beneficial networks. As infants, we learn that the “fixed acts”9 rehearsed in the womb (a sucking reflex, for example, that allows us to nurse right after we’re born) must be augmented by more reasoned behavior when we encounter the wills of others–such as playmates who want the same toy. This is not simply expedience; it reflects an instinct to observe, understand, and interact with our own kind. This instinct for autonomy within a social setting seems “hard-wired” into the human brain and is activated by experience.10
The resulting childhood network–one still dominated by parents, but complemented now by a variety of peer relationships–becomes a system in which fairness and reciprocity mean as much as selfishness and brute force. Eventually, as young adults, we realize that what began as a system of attitudes and behaviors rooted in selfishness has flowered into one that accommodates the shared interests of all. At some point, we see that these unofficial rules for living (what Oxford philosopher Derek Parfit calls “common-sense morality”11) have certain universal qualities: that truth is better than falsehood, that honesty is better than deceit, that my right to swing my fist necessarily ends at your nose. We become moral creatures whose natural self-interest includes the reasonable interests of others. At that precise moment, we begin to shed our need for guardianism, yet guardians persist in our lives. This is not because we need guardians, but because guardians still need us.
In short, as we mature, the gratifications of childish tyranny and a desire to imitate our powerful parents give way to the larger rewards and subtler skills of reciprocity, self-restraint, and cooperation. We redefine right and wrong from idiosyncratic and selfish terms to those that promote individuality within a social setting. It is a leap to adulthood we make in the company of our peers, leaving behind only those who persist in childish dreams of centrality and pre-eminence: those, in other words, who yearn to be our guardians.
Is guardianism necessary? The answer is yes, but only for biological and emotional children. If we fail to establish guardian-free political and economic institutions as adults, it is because our natural growth toward shared autonomy was pruned before it bloomed, its function pre-empted by those who would forever be our parents.
- 2. Plato. The Republic. New York: Penguin. 1974.
- 3. Montaigne, Michel de. Translated by M.A. Screech. The Complete Essays. London: Penguin Books. 1987. 912.
- 4. Dahl, Robert A. Democracy and Its Critics. New Haven: Yale University Press. 1989. 142.
- 5. Eliot, Lise. What’s Going On in There? How the Brain and Mind Develop in the First Five Years of Life. New York: Bantam Books. 1999. 429.
- 6. Dahl. Democracy and Its Critics.
- 7. Rousseau, Jean-Jacques. The Social Contract. New York: Hafner Publishing Co. 1947.
- 8. Television commentator and former presidential and congressional aide Chris Matthews wryly observes that because Democrats tend to espouse welfare and nurturing issues—ideas traditionally associated with motherhood—and Republicans emphasize more paternal issues like security and frugality, “We have a ‘mommy’ party and a ‘daddy’ party, each servicing its constituent voters.” (Matthews, Chris. Now, Let Me Tell You What I Really Think. New York: The Free Press. 2001.)
- 9. Hobson, J. Allen, MD. The Chemistry of Conscious States. Boston: Little, Brown. 1994.
- 10. Eliot. 300.
- 11. Parfit, Derek. Reasons and Persons. Oxford: Clarendon Press. 1984.