In "The Blank Slate", Steven Pinker, one of the world's leading experts on language and the mind, explores the idea of human nature and its moral, emotional, and political colorings. With characteristic wit, lucidity, and insight, Pinker argues that the dogma that the mind has no innate traits, a doctrine held by many intellectuals during the past century, denies our common humanity and our individual preferences, replaces objective analyses of social problems with feel-good slogans, and distorts our understanding of politics, violence, parenting, and the arts.
Overview: Each society has a theory of human nature, which is rarely referenced explicitly but shapes beliefs and policies. Theories of human behavior range from mainly genetic to mainly social-constructivist. Human behavior is shaped by both nature and nurture. Evolution, coded in genetics, has limited resources and a limited ability to anticipate complex situations, and therefore cannot do everything. Nurture, coded in social constructs, has physical limitations, and therefore cannot do everything either. Some contexts can be explained mainly by nurture or mainly by nature: social constructs like language are nurture, while genetic disorders are nature. In most contexts, however, nature and nurture work together, with complex interactions between genes and their environment.
The blank slate refers to an extreme nurture view of the human mind: that the mind has no inherent structure, and that society and the individual can inscribe values on it. Differences in behavior come about through differences in experience; by changing the experiences, the individual can be changed. This implies that problematic behavior can be ameliorated. There are limitations to this perspective. A blank slate could not do anything, because it would lack the innate circuitry for learning or understanding. While culture shapes thought, thought could not come about without a biological entity capable of learning. Humans are biologically distinguishable, and are constrained in their choices.
More On Evolution, and the Blank Slate: Genes cannot provide a complete blueprint. They have limited resources, which means they can only be so big. Genes cannot anticipate the complexity of the environment or the behavior of other genes. To compensate, genes have developed programs that enable learning mechanisms such as feedback, which generate information with which to adjust behavior.
There are many limitations to the blank slate perspective. The mind creates a model of the world, but the model is based on the physical world. It takes a perceiver with information to decipher patterns, combine them with previously learned patterns, and use them to form new thoughts that guide behavior.
Humans have the capacity to learn, and to interpret information in a myriad of ways. With finite information processing, an infinite range of behavior can be generated. Culture is a cumulative pool of information that enables coordination of expectations about each other's behavior. Genes do not create cultures, but cultures do not impact formless minds.
Recognition of biological differences has led to many unfavorable conclusions, such as prejudice, Social Darwinism, and eugenics. Biological constraints prevent a complete reshaping of human behavior, and can be seen as deterministic.
Caveats? The claims about nature and nurture are sensitive topics that garner controversy. The author attempts to provide a more appropriate and neutral explanation of how they shape human society. The problem is that the way the book is written is not favorable to neutrality.
It's not often that I read a book and can honestly say it changed my outlook on life, but The Blank Slate is definitely one of them (trite, I know). In the book, Steven Pinker analyzes the current concepts of human nature, of culture and heritability, and reveals why in both popular and intellectual circles the prevailing viewpoints are flawed, and indeed detrimental to both research and society.
Brief summary: The first section of the book introduces Pinker's three fallacies, which contribute to our misunderstanding of human nature. The first is the Blank Slate, the tabula rasa, which considers each human being to be moldable clay at birth, shaped entirely by environment and culture.
The author feels that the blank slate, the noble savage, and the ghost in the machine are inaccurate views of human nature, and that further research on human nature, and acceptance of the results of that research, is important.
I really should not even count this book as read, since I mostly gave up halfway through. I did skip ahead and read the chapter on "Gender", but that was as annoying as the rest. Boring, verbose, pedantic....and BORING; this book made me realize that the few hours I did spend on it is time that I'll never get back. The only good thing about it was that I used to read it right before I went to bed, and when it bored me to exhaustion I'd fall asleep. Next time I'll try Nyquil or something. UGH.
Simply excellent. Such a wide-ranging, eye-opening and thorough review and analysis of the developments that are critical to our understanding of humans, what forges and drives them and what that might mean. An essential read, for anyone and everyone.
To Don, Judy, Leda, and John
First words
Preface "Not another book on nature and nurture! Are there really people out there who still believe that the mind is a blank slate?"
PART I
THE BLANK SLATE, THE NOBLE SAVAGE, AND THE GHOST IN THE MACHINE
Everyone has a theory of human nature.
The denial of human nature has spread beyond the academy and has led to a disconnect between intellectual life and common sense. I first had the idea of writing this book when I started a collection of astonishing claims from pundits and social critics about the malleability of the human psyche: that little boys quarrel and fight because they are encouraged to do so; that children enjoy sweets because their parents use them as a reward for eating vegetables; that teenagers get the idea to compete in looks and fashion from spelling bees and academic prizes; that men think the goal of sex is an orgasm because of the way they were socialized. The problem is not just that these claims are preposterous but that the writers did not acknowledge they were saying things that common sense might call into question. This is the mentality of a cult, in which fantastical beliefs are flaunted as proof of one’s piety. That mentality cannot coexist with an esteem for the truth, and I believe it is responsible for some of the unfortunate trends in recent intellectual life. One trend is a stated contempt among many scholars for the concepts of truth, logic, and evidence. Another is a hypocritical divide between what intellectuals say in public and what they really believe. A third is the inevitable reaction: a culture of “politically incorrect” shock jocks who revel in anti-intellectualism and bigotry, emboldened by the knowledge that the intellectual establishment has forfeited claims to credibility in the eyes of the public.
Finally, the denial of human nature has not just corrupted the world of critics and intellectuals but has done harm to the lives of real people. The theory that parents can mold their children like clay has inflicted childrearing regimes on parents that are unnatural and sometimes cruel. It has distorted the choices faced by mothers as they try to balance their lives, and multiplied the anguish of parents whose children haven’t turned out the way they hoped. The belief that human tastes are reversible cultural preferences has led social planners to write off people's enjoyment of ornament, natural light, and human scale and force millions of people to live in drab cement boxes. The romantic notion that all evil is a product of society has justified the release of dangerous psychopaths who promptly murdered innocent people. And the conviction that humanity could be reshaped by massive social engineering projects led to some of the greatest atrocities in history.
Quotations
The moral, then, is that familiar categories of behavior—marriage customs, food taboos, folk superstitions, and so on—certainly do vary across cultures and have to be learned, but the deeper mechanisms of mental computation that generate them may be universal and innate. People may dress differently, but they may all strive to flaunt their status via their appearance. They may respect the rights of the members of their clan exclusively or they may extend that respect to everyone in their tribe, nation-state, or species, but all divide the world into an in-group and an out-group. They may differ in which outcomes they attribute to the intentions of conscious beings, some allowing only that artifacts are deliberately crafted, others believing that illnesses come from magical spells cast by enemies, still others believing that the entire world was brought into being by a creator. But all of them explain certain events by invoking the existence of entities with minds that strive to bring about goals. The behaviorists got it backwards: it is the mind, not behavior, that is lawful.
Moreover, many of the traits affected by genes are far from noble. Psychologists have discovered that our personalities differ in five major ways: we are to varying degrees introverted or extroverted, neurotic or stable, incurious or open to experience, agreeable or antagonistic, and conscientious or undirected. Most of the 18,000 adjectives for personality traits in an unabridged dictionary can be tied to one of these five dimensions, including such sins and flaws as being aimless, careless, conforming, impatient, narrow, rude, self-pitying, selfish, suspicious, uncooperative, and undependable. All five of the major personality dimensions are heritable, with perhaps 40 to 50 percent of the variation in a typical population tied to differences in their genes. The unfortunate wretch who is introverted, neurotic, narrow, selfish, and undependable is probably that way in part because of his genes, and so, most likely, are the rest of us who have tendencies in any of those directions as compared with our fellows.
It’s not just unpleasant temperaments that are partly heritable, but actual behavior with real consequences. Study after study has shown that a willingness to commit antisocial acts, including lying, stealing, starting fights, and destroying property, is partly heritable (though like all heritable traits it is exercised more in some environments than in others). People who commit truly heinous acts, such as bilking elderly people out of their life savings, raping a succession of women, or shooting convenience store clerks lying on the floor during a robbery, are often diagnosed with “psychopathy” or “antisocial personality disorder.” Most psychopaths showed signs of malice from the time they were children. They bullied smaller children, tortured animals, lied habitually, and were incapable of empathy or remorse, often despite normal family backgrounds and the best efforts of their distraught parents. Most experts on psychopathy believe that it comes from a genetic predisposition, though in some cases it may come from early brain damage. In either case genetics and neuroscience are showing that a heart of darkness cannot always be blamed on parents or society.
The difference between proximate and ultimate goals is another kind of proof that we are not blank slates. Whenever people strive for obvious rewards like health and happiness, which make sense both proximately and ultimately, one could plausibly suppose that the mind is equipped only with a desire to be happy and healthy and a cause-and-effect calculus that helps them get what they want. But people often have desires that subvert their proximate wellbeing, desires that they cannot articulate and that they (and their society) may try unsuccessfully to extirpate. They may covet their neighbor’s spouse, eat themselves into an early grave, explode over minor slights, fail to love their stepchildren, rev up their bodies in response to a stressor that they cannot fight or flee, exhaust themselves keeping up with the Joneses or climbing the corporate ladder, and prefer a sexy and dangerous partner to a plain but dependable one. These personally puzzling drives have a transparent evolutionary rationale, and they suggest that the mind is packed with cravings shaped by natural selection, not with a desire for personal wellbeing.
Counting societies instead of bodies leads to equally grim figures. In 1978 the anthropologist Carol Ember calculated that 90 percent of hunter-gatherer societies are known to engage in warfare, and 64 percent wage war at least once every two years. Even the 90 percent figure may be an underestimate, because anthropologists often cannot study a tribe long enough to measure outbreaks that occur every decade or so (imagine an anthropologist studying the peaceful Europeans between 1918 and 1938). In 1972 another anthropologist, W. T. Divale, investigated 99 groups of hunter-gatherers from 37 cultures, and found that 68 were at war at the time, 20 had been at war five to twenty-five years before, and all the others reported warfare in the more distant past. Based on these and other ethnographic surveys, Donald Brown includes conflict, rape, revenge, jealousy, dominance, and male coalitional violence as human universals.
It is, of course, understandable that people are squeamish about acknowledging the violence of pre-state societies. For centuries the stereotype of the savage savage was used as a pretext to wipe out indigenous peoples and steal their lands. But surely it is unnecessary to paint a false picture of a people as peaceable and ecologically conscientious in order to condemn the great crimes against them, as if genocide were wrong only when the victims are nice guys.
HISTORY AND CULTURE, then, can be grounded in psychology, which can be grounded in computation, neuroscience, genetics, and evolution. But this kind of talk sets off alarms in the minds of many nonscientists. They fear that consilience is a smokescreen for a hostile takeover of the humanities, arts, and social sciences by philistines in white coats. The richness of their subject matter would be dumbed down into a generic palaver about neurons, genes, and evolutionary urges. This scenario is often called “reductionism,”…
All this should be obvious, but nowadays any banality about learning can be dressed up in neurospeak and treated like a great revelation of science. According to a New York Times headline, “Talk therapy, a psychiatrist maintains, can alter the structure of the patient’s brain.” I should hope so, or else the psychiatrist would be defrauding her clients. “Environmental manipulation can change the way [a child’s] brain develops,” the pediatric neurologist Harry Chugani told the Boston Globe. “A child surrounded by aggression, violence, or inadequate stimulation will reflect these connections in the brain and behavior.” Well, yes; if the environment affects the child at all, it would do so by changing connections in the brain. A special issue of the journal Educational Technology and Society was intended “to examine the position that learning takes place in the brain of the learner, and that pedagogies and technologies should be designed and evaluated on the basis of the effect they have on student brains.” The guest editor (a biologist) did not say whether the alternative was that learning takes place in some other organ of the body like the pancreas or that it takes place in an immaterial soul. Even professors of neuroscience sometimes proclaim “discoveries” that would be news only to believers in a ghost in the machine: “Scientists have found that the brain is capable of altering its connections. . . . You have the ability to change the synaptic connections within the brain.” Good thing, because otherwise we would be permanent amnesiacs.
This neuroscientist is an executive at a company that “uses brain research and technology to develop products intended to enhance human learning and performance,” one of many new companies with that aspiration. “The human being has unlimited creativity if focused and nurtured properly,” says a consultant who teaches clients to draw diagrams that “map their neural patterns.” “The older you get, the more connections and associations your brain should be making,” said a satisfied customer; “Therefore you should have more information stored in your brain. You just need to tap into it.” Many people have been convinced by the public pronouncements of neuroscience advocates—on the basis of no evidence whatsoever—that varying the route you take when driving home can stave off the effects of aging. And then there is the marketing genius who realized that blocks, balls, and other toys “provide visual and tactile stimulation” and “encourage movement and tracking,” part of a larger movement of “brain-based” childrearing and education…
These companies tap into people’s belief in a ghost in the machine by implying that any form of learning that affects the brain (as opposed, presumably, to the kinds of learning that don’t affect the brain) is unexpectedly real or deep or powerful. But this is mistaken. All learning affects the brain. It is undeniably exciting when scientists make a discovery about how learning affects the brain, but that does not make the learning itself any more pervasive or profound.
The doctrine of extreme plasticity has used the plasticity discovered in primary sensory cortex as a metaphor for what happens elsewhere in the brain. The upshot of these two sections is that it is not a very good metaphor. If the plasticity of sensory cortex symbolized the plasticity of mental life as a whole, it should be easy to change what we don’t like about ourselves or other people. Take a case very different from vision, sexual orientation. Most gay men feel stirrings of attraction to other males around the time of the first hormonal changes that presage puberty. No one knows why some boys become gay—genes, prenatal hormones, other biological causes, and chance may all play a role—but my point is not so much about becoming gay as about becoming straight. In the less tolerant past, unhappy gay men sometimes approached psychiatrists (and sometimes were coerced into approaching them) for help in changing their sexual orientation. Even today, some religious groups pressure their gay members to “choose” heterosexuality. Many techniques have been foisted on them: psychoanalysis, guilt mongering, and conditioning techniques that use impeccable fire-together-wire-together logic (for example, having them look at Playboy centerfolds while sexually aroused). The techniques are all failures. With a few dubious exceptions (which are probably instances of conscious self-control rather than a change in desire), the sexual orientation of most gay men cannot be reversed by experience. Some parts of the mind just aren’t plastic, and no discoveries about how sensory cortex gets wired will change that fact.
The human genome contains an enormous amount of information, both in the genes and in the noncoding regions, to guide the construction of a complex organism. In a growing number of cases, particular genes can be tied to aspects of cognition, language, and personality. When psychological traits vary, much of the variation comes from differences in genes: identical twins are more similar than fraternal twins, and biological siblings are more similar than adoptive siblings, whether reared together or apart. A person’s temperament and personality emerge early in life and remain fairly constant throughout the lifespan. And both personality and intelligence show few or no effects of children’s particular home environments within their culture: children reared in the same family are similar mainly because of their shared genes.
Finally, neuroscience is showing that the brain’s basic architecture develops under genetic control. The importance of learning and plasticity notwithstanding, brain systems show signs of innate specialization and cannot arbitrarily substitute for one another.
The decimation of native Americans by European disease and genocide over five hundred years is indeed one of the great crimes of history. But it is bizarre to blame the crime on a handful of contemporary scientists struggling to document their lifestyle before it vanishes forever under the pressures of assimilation. And it is a dangerous tactic. Surely indigenous peoples have a right to survive in their lands whether or not they—like all human societies—are prone to violence and warfare. Self-appointed “advocates” who link the survival of native peoples to the doctrine of the Noble Savage paint themselves into a terrible corner. When the facts show otherwise they either have inadvertently weakened the case for native rights or must engage in any means necessary to suppress the facts.
NO ONE SHOULD be surprised that claims about human nature are controversial. Obviously any such claim should be scrutinized and any logical and empirical flaws pointed out, just as with any scientific hypothesis. But the criticism of the new sciences of human nature went well beyond ordinary scholarly debate. It turned into harassment, slurs, misrepresentation, doctored quotations, and, most recently, blood libel. I think there are two reasons for this illiberal behavior.
One is that in the twentieth century the Blank Slate became a sacred doctrine that, in the minds of its defenders, had to be either avowed with a perfect faith or renounced in every aspect. Only such black-and-white thinking could lead people to convert the idea that some aspects of behavior are innate into the idea that all aspects of behavior are innate, or convert the proposal that genetic traits influence human affairs into the idea that they determine human affairs. Only if it is theologically necessary for 100 percent of the differences in intelligence to be caused by the environment could anyone be incensed over the mathematical banality that as the proportion of variance due to nongenetic causes goes down, the proportion due to genetic causes must go up. Only if the mind is required to be a scraped tablet could anyone be outraged by the claim that human nature makes us smile, rather than scowl, when we are pleased.
The doctrine of the Pronoun in the Machine is not a casual oversight in the radical scientists’ world view. It is consistent with their desire for radical political change and their hostility to “bourgeois” democracy. (Lewontin repeatedly uses “bourgeois” as an epithet.) If the “we” is truly unfettered by biology, then once “we” see the light we can carry out the vision of radical change that we deem correct. But if the “we” is an imperfect product of evolution—limited in knowledge and wisdom, tempted by status and power, and blinded by self-deception and delusions of moral superiority—then “we” had better think twice before constructing all that history. …constitutional democracy is based on a jaundiced theory of human nature in which “we” are eternally vulnerable to arrogance and corruption. The checks and balances of democratic institutions were explicitly designed to stalemate the often dangerous ambitions of imperfect humans.
The longest-standing right-wing opposition to the sciences of human nature comes from the religious sectors of the coalition, especially Christian fundamentalism. Anyone who doesn’t believe in evolution is certainly not going to believe in the evolution of the mind, and anyone who believes in an immaterial soul is certainly not going to believe that thought and feeling consist of information processing in the tissues of the brain.
The religious opposition to evolution is fueled by several moral fears. Most obviously, the fact of evolution challenges the literal truth of the creation story in the Bible and thus the authority that religion draws from it. As one creationist minister put it, “If the Bible gets it wrong in biology, then why should I trust the Bible when it talks about morality and salvation?”
But the opposition to evolution goes beyond a desire to defend biblical literalism. Modern religious people may not believe in the literal truth of every miracle narrated in the Bible, but they do believe that humans were designed in God’s image and placed on earth for a larger purpose—namely, to live a moral life by following God’s commandments. If humans are accidental products of the mutation and selection of chemical replicators, they worry, morality would have no foundation and we would be left mindlessly obeying biological urges. One creationist, testifying to this danger in front of the U.S. House Judiciary Committee, cited the lyrics of a rock song: “You and me baby ain’t nothin’ but mammals / So let’s do it like they do it on the Discovery Channel.” After the 1999 lethal rampage by two teenagers at Columbine High School in Colorado, Tom DeLay, the Republican Majority Whip in the House of Representatives, said that such violence is inevitable as long as “our school systems teach children that they are nothing but glorified apes, evolutionized out of some primordial soup of mud.”
In more realistic circumstances we have to decide on a case-by-case basis whether the discrimination is justifiable. Denying driving and voting rights to young teenagers is a form of age discrimination that is unfair to responsible teens. But we are not willing to pay either the financial costs of developing a test for psychological maturity or the moral costs of classification errors, such as teens wrapping their cars around trees. Almost everyone is appalled by racial profiling—pulling over motorists for “driving while black.” But after the 2001 terrorist attacks on the World Trade Center and the Pentagon, about half of Americans polled said they were not opposed to ethnic profiling—scrutinizing passengers for “flying while Arab.” People who distinguish the two must reason that the benefits of catching a marijuana dealer do not outweigh the harm done to innocent black drivers, but the benefits of stopping a suicide hijacker do outweigh the harm done to innocent Arab passengers. Cost-benefit analyses are also sometimes used to justify racial preferences: the benefits of racially diverse workplaces and campuses are thought to outweigh the costs of discriminating against whites.
The possibility that men and women are not the same in all respects also presents policymakers with choices. It would be reprehensible for a bank to hire a man over a woman as a manager for the reason that he is less likely to quit after having a child. Would it also be reprehensible for a couple to hire a woman over a man as a nanny for their daughter because she is less likely to sexually abuse the child? Most people believe that the punishment for a given crime should be the same regardless of who commits it. But knowing the typical sexual emotions of the two sexes, should we apply the same punishment to a man who seduces a sixteen-year-old girl and to a woman who seduces a sixteen-year-old boy?
These are some of the issues that face the people of a democracy in deciding what to do about discrimination. The point is not that group differences may never be used as a basis for discrimination. The point is that they do not have to be used that way, and sometimes we can decide on moral grounds that they must not be used that way.
But more important, even if inherited talents can lead to socioeconomic success, it doesn’t mean that the success is deserved in a moral sense. Social Darwinism is based on Spencer’s assumption that we can look to evolution to discover what is right—that “good” can be boiled down to “evolutionarily successful.” This lives in infamy as a reference case for the “naturalistic fallacy”: the belief that what happens in nature is good. (Spencer also confused people’s social success—their wealth, power, and status—with their evolutionary success, the number of their viable descendants.) The naturalistic fallacy was named by the moral philosopher G. E. Moore in his 1903 Principia Ethica, the book that killed Spencer’s ethics. Moore applied “Hume’s Guillotine,” the argument that no matter how convincingly you show that something is true, it never follows logically that it ought to be true. Moore noted that it is sensible to ask, “This conduct is more evolutionarily successful, but is it good?” The mere fact that the question makes sense shows that evolutionary success and goodness are not the same thing.
Can one really reconcile biological differences with a concept of social justice? Absolutely. In his famous theory of justice, the philosopher John Rawls asks us to imagine a social contract drawn up by self-interested agents negotiating under a veil of ignorance, unaware of the talents or status they will inherit at birth—ghosts ignorant of the machines they will haunt. He argues that a just society is one that these disembodied souls would agree to be born into, knowing that they might be dealt a lousy social or genetic hand. If you agree that this is a reasonable conception of justice, and that the agents would insist on a broad social safety net and redistributive taxation (short of eliminating incentives that make everyone better off), then you can justify compensatory social policies even if you think differences in social status are 100 percent genetic. The policies would be, quite literally, a matter of justice, not a consequence of the indistinguishability of individuals.
Indeed, the existence of innate differences in ability makes Rawls’s conception of social justice especially acute and eternally relevant. If we were blank slates, and if a society ever did eliminate discrimination, the poorest could be said to deserve their station because they must have chosen to do less with their standard-issue talents. But if people differ in talents, people might find themselves in poverty in a nonprejudiced society even if they applied themselves to the fullest. That is an injustice that, a Rawlsian would argue, ought to be rectified, and it would be overlooked if we didn’t recognize that people differ in their abilities.
An idea is not false or evil because the Nazis misused it. As the historian Robert Richards wrote of an alleged connection between Nazism and evolutionary biology, “If such vague similarities suffice here, we should all be hustled to the gallows.” Indeed, if we censored ideas that the Nazis abused, we would have to give up far more than the application of evolution and genetics to human behavior. We would have to censor the study of evolution and genetics, period. And we would have to suppress many other ideas that Hitler twisted into the foundations of Nazism:
The germ theory of disease: The Nazis repeatedly cited Pasteur and Koch to argue that the Jews were like an infectious bacillus that had to be eradicated to control a contagious disease.
Romanticism, environmentalism, and the love of nature: The Nazis amplified a Romantic strain in German culture that believed the Volk were a people of destiny with a mystical bond to nature and the land. The Jews and other minorities, in contrast, took root in the degenerate cities.
Philology and linguistics: The concept of the Aryan race was based on a prehistoric tribe posited by linguists, the Indo-Europeans, who were thought to have spilled out of an ancient homeland thousands of years ago and to have conquered much of Europe and Asia.
Religious belief: Though Hitler disliked Christianity, he was not an atheist, and was emboldened by the conviction that he was carrying out a divinely ordained plan.
The danger that we might distort our own science as a reaction to the Nazis’ distortions is not hypothetical. The historian of science Robert Proctor has shown that American public health officials were slow to acknowledge that smoking causes cancer because it was the Nazis who had originally established the link. And some German scientists argue that biomedical research has been crippled in their country because of vague lingering associations with Nazism.
Hitler was evil because he caused the deaths of thirty million people and inconceivable suffering to countless others, not because his beliefs made reference to biology (or linguistics or nature or smoking or God). Smearing the guilt from his actions to every conceivable aspect of his factual beliefs can only backfire. Ideas are connected to other ideas, and should any of Hitler’s turn out to have some grain of truth—if races, for example, turn out to have any biological reality, or if the Indo-Europeans really were a conquering tribe—we would not want to concede that Nazism wasn’t so wrong after all.
Defenders of the naturalistic and moralistic fallacies are not made of straw but include prominent scholars and writers. For example, in response to Thornhill’s earlier writings on rape, the feminist scholar Susan Brownmiller wrote, “It seems quite clear that the biologicization of rape and the dismissal of social or ‘moral’ factors will . . . tend to legitimate rape. . . . It is reductive and reactionary to isolate rape from other forms of violent antisocial behavior and dignify it with adaptive significance.” Note the fallacy: if something is explained with biology, it has been “legitimated”; if something is shown to be adaptive, it has been “dignified.” Similarly, Stephen Jay Gould wrote of another discussion of rape in animals, “By falsely describing an inherited behavior in birds with an old name for a deviant human action, we subtly suggest that true rape—our own kind—might be a natural behavior with Darwinian advantages to certain people as well.” The implicit rebuke is that to describe an act as “natural” or as having “Darwinian advantages” is somehow to condone it.
Suppose rape is rooted in a feature of human nature, such as that men want sex across a wider range of circumstances than women do. It is also a feature of human nature, just as deeply rooted in our evolution, that women want control over when and with whom they have sex. It is inherent to our value system that the interests of women should not be subordinated to those of men, and that control over one’s body is a fundamental right that trumps other people’s desires. So rape is not tolerated, regardless of any possible connection to the nature of men’s sexuality. Note how this calculus requires a “deterministic” and “essentialist” claim about human nature: that women abhor being raped. Without that claim we would have no way to choose between trying to deter rape and trying to socialize women to accept it, which would be perfectly compatible with the supposedly progressive doctrine that we are malleable raw material.
Inborn human desires are a nuisance to those with utopian and totalitarian visions, which often amount to the same thing. What stands in the way of most utopias is not pestilence and drought but human behavior. So utopians have to think of ways to control behavior, and when propaganda doesn’t do the trick, more emphatic techniques are tried. The Marxist utopians of the twentieth century, as we saw, needed a tabula rasa free of selfishness and family ties and used totalitarian measures to scrape the tablets clean or start over with new ones. As Bertolt Brecht said of the East German government, “If the people did not do better the government would dismiss the people and elect a new one.” Political philosophers and historians who have recently “reflected on our ravaged century,” such as Isaiah Berlin, Kenneth Minogue, Robert Conquest, Jonathan Glover, James Scott, and Daniel Chirot, have pointed to utopian dreams as a major cause of twentieth-century nightmares. For that matter, Wordsworth’s revolutionary France, “thrilled with joy” while human nature was “born again,” turned out to be no picnic either.
Feminism, far from needing a blank slate, needs the opposite, a clear conception of human nature. One of the most pressing feminist causes today is the condition of women in the developing world. In many places female fetuses are selectively aborted, newborn girls are killed, daughters are malnourished and kept from school, adolescent girls have their genitals cut out, young women are cloaked from head to toe, adulteresses are stoned to death, and widows are expected to fall onto their husbands’ funeral pyres. The relativist climate in many academic circles does not allow these horrors to be criticized because they are practices of other cultures, and cultures are superorganisms that, like people, have inalienable rights. To escape this trap, the feminist philosopher Martha Nussbaum has invoked “central functional capabilities” that all humans have a right to exercise, such as physical integrity, liberty of conscience, and political participation. She has been criticized in turn for taking on a colonial “civilizing mission” or “white woman’s burden,” in which arrogant Europeans would instruct the poor people of the world in what they want. But Nussbaum’s moral argument is defensible if her “capabilities” are grounded, directly or indirectly, in a universal human nature. Human nature provides a yardstick to identify suffering in any member of our species.
The existence of a human nature is not a reactionary doctrine that dooms us to eternal oppression, violence, and greed. Of course we should try to reduce harmful behavior, just as we try to reduce afflictions like hunger, disease, and the elements. But we fight those afflictions not by denying the pesky facts of nature but by turning some of them against the others. For efforts at social change to be effective, they must identify the cognitive and moral resources that make some kinds of change possible. And for the efforts to be humane, they must acknowledge the universal pleasures and pains that make some kinds of change desirable.
Even worse, biology may show that we are all blameless. Evolutionary theory says that the ultimate rationale for our motives is that they perpetuated our ancestors’ genes in the environment in which we evolved. Since none of us are aware of that rationale, none of us can be blamed for pursuing it, any more than we blame the mental patient who thinks he is subduing a mad dog but really is attacking a nurse. We scratch our heads when we learn of ancient customs that punished the soulless: the Hebrew rule of stoning an ox to death if it killed a man, the Athenian practice of putting an ax on trial if it injured a man (and hurling it over the city wall if found guilty), a medieval French case in which a sow was sentenced to be mangled for having mauled a child, and the whipping and burial of a church bell in 1685 for having assisted French heretics. But evolutionary biologists insist we are not fundamentally different from animals, and molecular geneticists and neuroscientists insist we are not fundamentally different from inanimate matter. If people are soulless, why is it not just as silly to punish people? Shouldn’t we heed the creationists, who say that if you teach children they are animals they will behave like animals? Should we go even farther than the National Rifle Association bumper sticker—GUNS DON’T KILL; PEOPLE KILL—and say that not even people kill, because people are just as mechanical as guns?
These concerns are by no means academic. Cognitive neuroscientists are sometimes approached by criminal defense lawyers hoping that a wayward pixel on a brain scan might exonerate their client (a scenario that is wittily played out in Richard Dooling’s novel Brain Storm). When a team of geneticists found a rare gene that predisposed the men in one family to violent outbursts, a lawyer for an unrelated murder defendant argued that his client might have such a gene too. If so, the lawyer argued, “his actions may not have been a product of total free will.” When Randy Thornhill and Craig Palmer argued that rape is a consequence of male reproductive strategies, another lawyer contemplated using their theory to defend rape suspects. (Insert your favorite lawyer joke here.) Biologically sophisticated legal scholars, such as Owen Jones, have argued that a “rape gene” defense would almost certainly fail, but the general threat remains that biological explanations will be used to exonerate wrongdoers. Is this the bright future promised by the sciences of human nature—it wasn’t me, it was my amygdala? Darwin made me do it? The genes ate my homework?
Few people today argue that criminal punishment is obsolete, even if they recognize that (other than incapacitating some habitual criminals) it is pointless in the short run. That is because if we ever did calculate the short-term effects in deciding whether to punish, potential wrongdoers could anticipate that calculation and factor it into their behavior. They could predict that we would not find it worthwhile to punish them once it was too late to prevent the crime, and could act with impunity, calling our bluff. The only solution is to adopt a resolute policy of punishing wrongdoers regardless of the immediate effects. If one is genuinely not bluffing about the threat of punishment, there is no bluff to call. As Oliver Wendell Holmes explained, “If I were having a philosophical talk with a man I was going to have hanged (or electrocuted) I should say, ‘I don’t doubt that your act was inevitable for you but to make it more avoidable by others we propose to sacrifice you to the common good. You may regard yourself as a soldier dying for your country if you like. But the law must keep its promises.’” This promise-keeping underlies the policy of applying justice “as a matter of principle,” regardless of the immediate costs or even of consistency with common sense. If a death-row inmate attempts suicide, we speed him to the emergency ward, struggle to resuscitate him, give him the best modern medicine to help him recuperate, and kill him. We do it as part of a policy that closes off all possibilities to “cheat justice.”
Capital punishment is a vivid illustration of the paradoxical logic of deterrence, but the logic applies to lesser criminal punishments, to personal acts of revenge, and to intangible social penalties like ostracism and scorn. Evolutionary psychologists and game theorists have argued that the deterrence paradox led to the evolution of the emotions that undergird a desire for justice: the implacable need for retribution, the burning feeling that an evil act knocks the universe out of balance and can be canceled only by a commensurate punishment. People who are emotionally driven to retaliate against those who cross them, even at a cost to themselves, are more credible adversaries and less likely to be exploited. Many judicial theorists argue that criminal law is simply a controlled implementation of the human desire for retribution, designed to keep it from escalating into cycles of vendetta. The Victorian jurist James Fitzjames Stephen said that “the criminal law bears the same relation to the urge for revenge as marriage does to the sexual urge.”
Religious conceptions of sin and responsibility simply extend this lever by implying that any wrongdoing that is undiscovered or unpunished by one’s fellows will be discovered and punished by God. Martin Daly and Margo Wilson sum up the ultimate rationale of our intuitions about responsibility and godly retribution:
From the perspective of evolutionary psychology, this almost mystical and seemingly irreducible sort of moral imperative is the output of a mental mechanism with a straightforward adaptive function: to reckon justice and administer punishment by a calculus which ensures that violators reap no advantage from their misdeeds. The enormous volume of mystico-religious bafflegab about atonement and penance and divine justice and the like is the attribution to higher, detached authority of what is actually a mundane, pragmatic matter: discouraging self-interested competitive acts by reducing their profitability to nil.
The insanity defense achieved its present notoriety, with dueling rent-a-shrinks and ingenious abuse excuses, when it was expanded from a practical test of whether the cognitive system responding to deterrence is working to the more nebulous tests of what can be said to have produced the behavior. In the 1954 Durham decision, Judge David Bazelon invoked “the science of psychiatry” and “the science of psychology” to create a new basis for the insanity defense:
The rule we now hold is simply that an accused is not criminally responsible if his unlawful act was the product of mental disease or mental defect.
Unless one believes that ordinary acts are chosen by a ghost in the machine, all acts are products of cognitive and emotional systems in the brain. Criminal acts are relatively rare—if everyone in a defendant’s shoes acted as he did, the law against what he did would be repealed—so heinous acts will often be products of a brain system that is in some way different from the norm, and the behavior can be construed as “a product of mental disease or mental defect.” The Durham decision and similar insanity rules, by distinguishing behavior that is a product of a brain condition from behavior that is something else, threaten to turn every advance in our understanding of the mind into an erosion of responsibility.
Now, some discoveries about the mind and brain really could have an impact on our attitudes toward responsibility—but they may call for expanding the domain of responsibility, not contracting it. Suppose desires that sometimes culminate in the harassment and battering of women are present in many men. Does that really mean that men should be punished more leniently for such crimes, because they can’t help it? Or does it mean they should be punished more surely and severely, because that is the best way to counteract a strong or widespread urge? Suppose a vicious psychopath is found to have a defective sense of sympathy, which makes it harder for him to appreciate the suffering of his victims. Should we mitigate the punishment because he has diminished capacity? Or should we make the punishment more sure and severe to teach him a lesson in the only language he understands?
Finally, the doctrine of a soul that outlives the body is anything but righteous, because it necessarily devalues the lives we live on this earth. When Susan Smith sent her two young sons to the bottom of a lake, she eased her conscience with the rationalization that “my children deserve to have the best, and now they will.” Allusions to a happy afterlife are typical in the final letters of parents who take their children’s lives before taking their own, and we have recently been reminded of how such beliefs embolden suicide bombers and kamikaze hijackers. This is why we should reject the argument that if people stopped believing in divine retribution they would do evil with impunity. Yes, if nonbelievers thought they could elude the legal system, the opprobrium of their communities, and their own consciences, they would not be deterred by the threat of spending eternity in hell. But they would also not be tempted to massacre thousands of people by the promise of spending eternity in heaven.
Even the emotional comfort of a belief in an afterlife can go both ways. Would life lose its purpose if we ceased to exist when our brains die? On the contrary, nothing invests life with more meaning than the realization that every moment of sentience is a precious gift. How many fights have been averted, how many friendships renewed, how many hours not squandered, how many gestures of affection offered, because we sometimes remind ourselves that “life is short”?
Perhaps the same argument can be made for morality. According to the theory of moral realism, right and wrong exist, and have an inherent logic that licenses some moral arguments and not others. The world presents us with non-zero-sum games in which it is better for both parties to act unselfishly than for both to act selfishly (better not to shove and not to be shoved than to shove and be shoved). Given the goal of being better off, certain conditions follow necessarily. No creature equipped with circuitry to understand that it is immoral for you to hurt me could discover anything but that it is immoral for me to hurt you. As with numbers and the number sense, we would expect moral systems to evolve toward similar conclusions in different cultures or even different planets. And in fact the Golden Rule has been rediscovered many times: by the authors of Leviticus and the Mahabharata; by Hillel, Jesus, and Confucius; by the Stoic philosophers of the Roman Empire; by social contract theorists such as Hobbes, Rousseau, and Locke; and by moral philosophers such as Kant in his categorical imperative. Our moral sense may have evolved to mesh with an intrinsic logic of ethics rather than concocting it in our heads out of nothing.
But even if the Platonic existence of moral logic is too rich for your blood, you can still see morality as something more than a social convention or religious dogma. Whatever its ontological status may be, a moral sense is part of the standard equipment of the human mind. It’s the only mind we’ve got, and we have no choice but to take its intuitions seriously. If we are so constituted that we cannot help but think in moral terms (at least some of the time and toward some people), then morality is as real for us as if it were decreed by the Almighty or written into the cosmos. And so it is with other human values like love, truth, and beauty. Could we ever know whether they are really “out there” or whether we just think they are out there because the human brain makes it impossible not to think they are out there? And how bad would it be if they were inherent to the human way of thinking? Perhaps we should reflect on our condition as Kant did in his Critique of Practical Reason: “Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them: the starry heavens above and the moral law within.”
The idea that stereotypes are inherently irrational owes more to a condescension toward ordinary people than it does to good psychological research. Many researchers, having shown that stereotypes existed in the minds of their subjects, assumed that the stereotypes had to be irrational, because they were uncomfortable with the possibility that some trait might be statistically true of some group. They never actually checked. That began to change in the 1980s, and now a fair amount is known about the accuracy of stereotypes.
With some important exceptions, stereotypes are in fact not inaccurate when assessed against objective benchmarks such as census figures or the reports of the stereotyped people themselves. People who believe that African Americans are more likely to be on welfare than whites, that Jews have higher average incomes than WASPs, that business students are more conservative than students in the arts, that women are more likely than men to want to lose weight, and that men are more likely than women to swat a fly with their bare hands, are not being irrational or bigoted. Those beliefs are correct. People’s stereotypes are generally consistent with the statistics, and in many cases their bias is to underestimate the real differences between sexes or ethnic groups. This does not mean that the stereotyped traits are unchangeable, of course, or that people think they are unchangeable, only that people perceive the traits fairly accurately at the time.
Moreover, even when people believe that ethnic groups have characteristic traits, they are never mindless stereotypers who literally believe that each and every member of the group possesses those traits. People may think that Germans are, on average, more efficient than non-Germans, but no one believes that every last German is more efficient than every non-German. And people have no trouble overriding a stereotype when they have good information about an individual. Contrary to a common accusation, teachers’ impressions of their individual pupils are not contaminated by their stereotypes of race, gender, or socioeconomic status. The teachers’ impressions accurately reflect the pupil’s performance as measured by objective tests.
What about policies that go farther and actively compensate for prejudicial stereotypes, such as quotas and preferences that favor underrepresented groups? Some defenders of these policies assume that gatekeepers are incurably afflicted with baseless prejudices, and that quotas must be kept in place forever to neutralize their effects. The research on stereotype accuracy refutes that argument. Nonetheless, the research might support a different argument for preferences and other gender- and color-sensitive policies. Stereotypes, even when they are accurate, might be self-fulfilling, and not just in the obvious case of institutionalized barriers like those that kept women and African Americans out of universities and professions. Many people have heard of the Pygmalion effect, in which people perform as other people (such as teachers) expect them to perform. As it happens, the Pygmalion effect appears to be small or nonexistent, but there are more subtle forms of self-fulfilling prophecies. If subjective decisions about people, such as admissions, hiring, credit, and salaries, are based in part on group-wide averages, they will conspire to make the rich richer and the poor poorer. Women are marginalized in academia, making them genuinely less influential, which increases their marginalization. African Americans are treated as poorer credit risks and denied credit, which makes them less likely to succeed, which makes them poorer credit risks. Race- and gender-sensitive policies, according to arguments by the psychologist Virginia Valian, the economist Glenn Loury, and the philosopher James Flynn, may be needed to break the vicious cycle.
Pushing in the other direction is the finding that stereotypes are least accurate when they pertain to a coalition that is pitted against one’s own in hostile competition. This should make us nervous about identity politics, in which public institutions identify their members in terms of their race, gender, and ethnic group and weigh every policy by how it favors one group over another. In many universities, for example, minority students are earmarked for special orientation sessions and encouraged to view their entire academic experience through the lens of their group and how it has been victimized. By implicitly pitting one group against another, such policies may cause each group to brew stereotypes about the other that are more pejorative than the ones they would develop in personal encounters. As with other policy issues I examine in this book, the data from the lab do not offer a thumbs-up or thumbs-down verdict on race- and gender-conscious policies. But by highlighting the features of our psychology that different policies engage, the findings can make the tradeoffs clearer and the debates better informed.
As with so many ideas in social science, the centrality of language is taken to extremes in deconstructionism, postmodernism, and other relativist doctrines. The writings of oracles like Jacques Derrida are studded with such aphorisms as “No escape from language is possible,” “Text is self-referential,” “Language is power,” and “There is nothing outside the text.” Similarly, J. Hillis Miller wrote that “language is not an instrument or tool in man’s hands, a submissive means of thinking. Language rather thinks man and his ‘world’ . . . if he will allow it to do so.” The prize for the most extreme statement must go to Roland Barthes, who declared, “Man does not exist prior to language, either as a species or as an individual.”
The ancestry of these ideas is said to be from linguistics, though most linguists believe that deconstructionists have gone off the deep end. The original observation was that many words are defined in part by their relationship to other words. For example, he is defined by its contrast with I, you, they, and she, and big makes sense only as the opposite of little. And if you look up words in a dictionary, they are defined by other words, which are defined by still other words, until the circle is completed when you get back to a definition containing the original word. Therefore, say the deconstructionists, language is a self-contained system in which words have no necessary connection to reality. And since language is an arbitrary instrument, not a medium for communicating thoughts or describing reality, the powerful can use it to manipulate and oppress others. This leads in turn to an agitation for linguistic reforms: neologisms like co or na that would serve as gender-neutral pronouns, a succession of new terms for racial minorities, and a rejection of standards of clarity in criticism and scholarship (for if language is no longer a window onto thought but the very stuff of thought, the metaphor of “clarity” no longer applies).
Language conveys not just literal meanings but also a speaker’s attitude. Think of the difference between fat and voluptuous, slender and scrawny, thrifty and stingy, articulate and slick. Racial epithets, which are laced with contempt, are justifiably off-limits among responsible people, because using them conveys the tacit message that contempt for the people referred to by the epithet is acceptable. But the drive to adopt new terms for disadvantaged groups goes much further than this basic sign of respect; it often assumes that words and attitudes are so inseparable that one can reengineer people’s attitudes by tinkering with the words. In 1994 the Los Angeles Times adopted a style sheet that banned some 150 words, including birth defect, Canuck, Chinese fire drill, dark continent, divorcée, Dutch treat, handicapped, illegitimate, invalid, manmade, New World, stepchild, and to welsh. The editors assumed that words register in the brain with their literal meanings, so that an invalid is understood as “someone who is not valid” and Dutch treat is understood as a slur on contemporary Netherlanders. (In fact, it is one of many idioms in which Dutch means “ersatz,” such as Dutch oven, Dutch door, Dutch uncle, Dutch courage, and Dutch auction, the remnants of a long-forgotten rivalry between the English and the Dutch.)
But even the more reasonable attempts at linguistic reform are based on a dubious theory of linguistic determinism. Many people are puzzled by the replacement of formerly unexceptionable terms by new ones: Negro by black by African American, Spanish-American by Hispanic by Latino, crippled by handicapped by disabled by challenged, slum by ghetto by inner city by (according to the Times) slum once again. Occasionally the neologisms are defended with some rationale about their meaning. In the 1960s, the word Negro was replaced by the word black, because the parallel between the words black and white was meant to underscore the equality of the races. Similarly, Native American reminds us of who was here first and avoids the geographically inaccurate term Indian. But often the new terms replace ones that were perfectly congenial in their day, as we see in names for old institutions that are obviously sympathetic to the people being named: the United Negro College Fund, the National Association for the Advancement of Colored People, the Shriners Hospitals for Crippled Children. And sometimes a term can be tainted or unfashionable while a minor variant is fine: consider colored people versus people of color, Afro-American versus African American, Negro—Spanish for “black”—versus black. If anything, a respect for literal meaning should send us off looking for a new word for the descendants of Europeans, who are neither white nor Caucasian. Something else must be driving the replacement process.
Linguists are familiar with the phenomenon, which may be called the euphemism treadmill. People invent new words for emotionally charged referents, but soon the euphemism becomes tainted by association, and a new word must be found, which soon acquires its own connotations, and so on. Water closet becomes toilet (originally a term for any kind of body care, as in toilet kit and toilet water), which becomes bathroom, which becomes restroom, which becomes lavatory. Undertaker changes to mortician, which changes to funeral director. Garbage collection turns into sanitation, which turns into environmental services. Gym (from gymnasium, originally “high school”) becomes physical education, which becomes (at Berkeley) human biodynamics. Even the word minority—the most neutral label conceivable, referring only to relative numbers—was banned in 2001 by the San Diego City Council (and nearly banned by the Boston City Council) because it was deemed disparaging to non-whites. “No matter how you slice it, minority means less than,” said a semantically challenged official at Boston College, where the preferred term is AHANA (an acronym for African-American, Hispanic, Asian, and Native American). The euphemism treadmill shows that concepts, not words, are primary in people’s minds. Give a concept a new name, and the name becomes colored by the concept; the concept does not become freshened by the name, at least not for long. Names for minorities will continue to change as long as people have negative attitudes toward them. We will know that we have achieved mutual respect when the names stay put.
THE LAYPERSON’S INTUITIVE psychology or “theory of mind” is one of the brain’s most striking abilities. We do not treat other people as wind-up dolls but think of them as being animated by minds: nonphysical entities we cannot see or touch but that are as real to us as bodies and objects. Aside from allowing us to predict people’s behavior from their beliefs and desires, our theory of mind is tied to our ability to empathize and to our conception of life and death. The difference between a dead body and a living one is that a dead body no longer contains the vital force we call a mind. Our theory of mind is the source of the concept of the soul. The ghost in the machine is deeply rooted in our way of thinking about people.
A belief in the soul, in turn, meshes with our moral convictions. The core of morality is the recognition that others have interests as we do—that they “feel want, taste grief, need friends,” as Shakespeare put it—and therefore that they have a right to life, liberty, and the pursuit of their interests. But who are those “others”? We need a boundary that allows us to be callous to rocks and plants but forces us to treat other humans as “persons” that possess inalienable rights. Otherwise, it seems, we would place ourselves on a slippery slope that ends in the disposal of inconvenient people or in grotesque deliberations on the value of individual lives. As Pope John Paul II pointed out, the notion that every human carries infinite value by virtue of possessing a soul would seem to give us that boundary.
Some moral philosophers try to thread a boundary across this treacherous landscape by equating personhood with cognitive traits that humans happen to possess. These include an ability to reflect upon oneself as a continuous locus of consciousness, to form and savor plans for the future, to dread death, and to express a choice not to die. At first glance the boundary is appealing because it puts humans on one side and animals and conceptuses on the other. But it also implies that nothing is wrong with killing unwanted newborns, the senile, and the mentally handicapped, who lack the qualifying traits. Almost no one is willing to accept a criterion with those implications.
There is no solution to these dilemmas, because they arise out of a fundamental incommensurability: between our intuitive psychology, with its all-or-none concept of a person or soul, and the brute facts of biology, which tell us that the human brain evolved gradually, develops gradually, and can die gradually. And that means that moral conundrums such as abortion, euthanasia, and animal rights will never be resolved in a decisive and intuitively satisfying way. This does not mean that no policy is defensible and that the whole matter should be left to personal taste, political power, or religious dogma. As the bioethicist Ronald Green has pointed out, it just means we have to reconceptualize the problem: from finding a boundary in nature to choosing a boundary that best trades off the conflicting goods and evils for each policy dilemma. We should make decisions in each case that can be practically implemented, that maximize happiness, and that minimize current and future suffering. Many of our current policies are already compromises of this sort: research on animals is permitted but regulated; a late-term fetus is not awarded full legal status as a person but may not be aborted unless it is necessary to protect the mother’s life or health. Green notes that the shift from finding boundaries to choosing boundaries is a conceptual revolution of Copernican proportions. But the old conceptualization, which amounts to trying to pinpoint when the ghost enters the machine, is scientifically untenable and has no business guiding policy in the twenty-first century.
The traditional argument against pragmatic, case-by-case decisions is that they lead to slippery slopes. If we allow abortion, we will soon allow infanticide; if we permit research on stem cells, we will bring on a Brave New World of government-engineered humans. But here, I think, the nature of human cognition can get us out of the dilemma rather than pushing us into one. A slippery slope assumes that conceptual categories must have crisp boundaries that allow in-or-out decisions, or else anything goes. But that is not how human concepts work. As we have seen, many everyday concepts have fuzzy boundaries, and the mind distinguishes between a fuzzy boundary and no boundary at all. “Adult” and “child” are fuzzy categories, which is why we could raise the drinking age to twenty-one or lower the voting age to eighteen. But that did not put us on a slippery slope in which we eventually raised the drinking age to fifty or lowered the voting age to five. Those policies really would violate our concepts of “child” and “adult,” fuzzy though their boundaries may be. In the same way, we can bring our concepts of life and mind into register with biological reality without necessarily slipping down a slope.
A 2001 report by the European Union reviewed eighty-one research projects conducted over fifteen years and failed to find any new risks to human health or to the environment posed by genetically modified crops. This is no surprise to a biologist. Genetically modified foods are no more dangerous than “natural” foods because they are not fundamentally different from natural foods. Virtually every animal and vegetable sold in a health-food store has been “genetically modified” for millennia by selective breeding and hybridization. The wild ancestor of carrots was a thin, bitter white root; the ancestor of corn had an inch-long, easily shattered cob with a few small, rock-hard kernels. Plants are Darwinian creatures with no particular desire to be eaten, so they did not go out of their way to be tasty, healthy, or easy for us to grow and harvest. On the contrary: they did go out of their way to deter us from eating them, by evolving irritants, toxins, and bitter-tasting compounds. So there is nothing especially safe about natural foods. The “natural” method of selective breeding for pest resistance simply increases the concentration of the plant’s own poisons; one variety of natural potato had to be withdrawn from the market because it proved to be toxic to people. Similarly, natural flavors—defined by one food scientist as “a flavor that’s been derived with an out-of-date technology”—are often chemically indistinguishable from their artificial counterparts, and when they are distinguishable, sometimes the natural flavor is the more dangerous one. When “natural” almond flavor, benzaldehyde, is derived from peach pits, it is accompanied by traces of cyanide; when it is synthesized as an “artificial flavor,” it is not.
A blanket fear of all artificial and genetically modified foods is patently irrational on health grounds, and it could make food more expensive and hence less available to the poor. Where do these specious fears come from? Partly they arise from the carcinogen-du-jour school of journalism that uncritically reports any study showing elevated cancer rates in rats fed megadoses of chemicals. But partly they come from an intuition about living things that was first identified by the anthropologist James George Frazer in 1890 and has recently been studied in the lab by Paul Rozin, Susan Gelman, Frank Keil, Scott Atran, and other cognitive scientists.
People’s intuitive biology begins with the concept of an invisible essence residing in living things, which gives them their form and powers. These essentialist beliefs emerge early in childhood, and in traditional cultures they dominate reasoning about plants and animals. Often the intuitions serve people well. They allow preschoolers to deduce that a raccoon that looks like a skunk will have raccoon babies, that a seed taken from an apple and planted with flowers in a pot will produce an apple tree, and that an animal’s behavior depends on its innards, not on its appearance. They allow traditional peoples to deduce that different-looking creatures (such as a caterpillar and a butterfly) can belong to the same kind, and they impel them to extract juices and powders from living things and try them as medicines, poisons, and food supplements. They can prevent people from sickening themselves by eating things that have been in contact with infectious substances such as feces, sick people, and rotting meat.
But intuitive essentialism can also lead people into error. Children falsely believe that a child of English-speaking parents will speak English even if brought up in a French-speaking family, and that boys will have short hair and girls will wear dresses even if they are brought up with no other member of their sex from which they can learn those habits. Traditional peoples believe in sympathetic magic, otherwise known as voodoo. They think similar-looking objects have similar powers, so that a ground-up rhinoceros horn is a cure for erectile dysfunction. And they think that animal parts can transmit their powers to anything they mingle with, so that eating or wearing a part of a fierce animal will make one fierce.
The mind is more comfortable reckoning probabilities in terms of the relative frequency of remembered or imagined events than in terms of abstract numerical odds. That can make recent and memorable events—a plane crash, a shark attack, an anthrax infection—loom larger in one’s worry list than more frequent and boring events, such as the car crashes and ladder falls that get printed beneath the fold on page B14. And it can lead risk experts to speak one language and ordinary people to hear another. In hearings for a proposed nuclear waste site, an expert might present a fault tree that lays out the conceivable sequences of events by which radioactivity might escape. For example, erosion, cracks in the bedrock, accidental drilling, or improper sealing might cause the release of radioactivity into groundwater. In turn, groundwater movement, volcanic activity, or an impact of a large meteorite might cause the release of radioactive wastes into the biosphere. Each train of events can be assigned a probability, and the aggregate probability of an accident from all the causes can be estimated. When people hear these analyses, however, they are not reassured but become more fearful than ever—they hadn’t realized there are so many ways for something to go wrong! They mentally tabulate the number of disaster scenarios, rather than mentally aggregating the probabilities of the disaster scenarios.
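The contrast between tabulating scenarios and aggregating their probabilities can be made concrete with a short sketch. This is an illustration, not anything from the text: the four branch probabilities are invented numbers, and the branches are assumed independent so that the aggregate risk is one minus the probability that no branch occurs.

```python
def aggregate_probability(branch_probs):
    """Probability that at least one independent failure path occurs:
    1 - product over all branches of (1 - p_i)."""
    p_none = 1.0
    for p in branch_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical annual probabilities for four release pathways (made up)
branches = [1e-5, 2e-5, 5e-6, 1e-6]

# Counting scenarios: four alarming "ways for something to go wrong"
print(len(branches))

# Aggregating probabilities: the combined risk is still minuscule,
# roughly the sum of the branch probabilities (~3.6e-05)
print(aggregate_probability(branches))
```

The point of the sketch is that adding a fifth or sixth rare pathway barely moves the aggregate, even though it adds one more frightening scenario to the mental tally.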
None of this implies that people are dunces or that “experts” should ram unwanted technologies down their throats. Even with a complete understanding of the risks, reasonable people might choose to forgo certain technological advances. If something is viscerally revolting, a democracy should allow people to reject it whether or not it is “rational” by some criterion that ignores our psychology. Many people would reject vegetables grown in sanitized human waste and would avoid an elevator with a glass floor, not because they believe these things are dangerous but because the thought gives them the willies. If they have the same reaction to eating genetically modified foods or living next to a nuclear power plant, they should have the option of rejecting them, too, as long as they do not try to force their preferences on others or saddle them with the costs.
But because lenders and middlemen do not cause tangible objects to come into being, their contributions are difficult to grasp, and they are often thought of as skimmers and parasites. A recurring event in human history is the outbreak of ghettoization, confiscation, expulsion, and mob violence against middlemen, often ethnic minorities who learned to specialize in the middleman niche. The Jews in Europe are the most familiar example, but the expatriate Chinese, the Lebanese, the Armenians, and the Gujaratis and Chettyars of India have suffered similar histories of persecution.
One economist in an unusual situation showed how the physical fallacy does not depend on any unique historical circumstance but easily arises from human psychology. He watched the entire syndrome emerge before his eyes when he spent time in a World War II prisoner-of-war camp. Every month the prisoners received identical packages from the Red Cross. A few prisoners circulated through the camp, trading and lending chocolates, cigarettes, and other commodities among prisoners who valued some items more than others or who had used up their own rations before the end of the month. The middlemen made a small profit from each transaction, and as a result they were deeply resented—a microcosm of the tragedy of the middleman minority. The economist wrote: “[The middleman’s] function, and his hard work in bringing buyer and seller together, were ignored; profits were not regarded as a reward for labour, but as the result of sharp practices. Despite the fact that his very existence was proof to the contrary, the middleman was held to be redundant.”
Family love indeed subverts the ideal of what we should feel for every soul in the world. Moral philosophers play with a hypothetical dilemma in which people can run through the left door of a burning building to save some number of children or through the right door to save their own child. If you are a parent, ponder this question: Is there any number of children that would lead you to pick the left door? Indeed, all of us reveal our preference with our pocketbooks when we spend money on trifles for our own children (a bicycle, orthodontics, an education at a private school or university) instead of saving the lives of unrelated children in the developing world by donating the money to charity. Similarly, the practice of parents bequeathing their wealth to their children is one of the steepest impediments to an economically egalitarian society. Yet few people would allow the government to confiscate 100 percent of their estate, because most people see their children as an extension of themselves and thus as the proper beneficiaries of their lifelong striving.
Nepotism is a universal human bent and a universal scourge of large organizations. It is notorious for sapping countries led by hereditary dynasties and for bogging down governments and businesses in the Third World. A recurring historical solution was to give positions of local power to people who had no family ties, such as eunuchs, celibates, slaves, or people a long way from home. A more recent solution is to outlaw or regulate nepotism, though the regulations always come with tradeoffs and exceptions. Small businesses—or, as they are often called, “family businesses” or “Mom-and-Pop businesses”—are highly nepotistic, and thereby can conflict with principles of equal opportunity and earn the resentment of the surrounding community.
B. F. Skinner, ever the Maoist, wrote in the 1970s that people should be rewarded for eating in large communal dining halls rather than at home with their families, because large pots have a lower ratio of surface area to volume than small pots and hence are more energy efficient. The logic is impeccable, but this mindset collided with human nature many times in the twentieth century—horrifically in the forced collectivizations in the Soviet Union and China, and benignly in the Israeli kibbutzim, which quickly abandoned their policy of rearing children separately from their parents. A character in a novel by the Israeli writer Batya Gur captures the kind of sentiment that led to this change: “I want to tuck in my children at night myself . . . and when they have a nightmare I want them to come to my bed, not to some intercom, and not to make them go out at night in the dark looking for our room, stumbling over stones, thinking that every shadow is a monster, and in the end standing in front of a closed door or being dragged back to the children’s house.”
The ethic of community also includes a deference to an established hierarchy, and the mind (including the Western mind) all too easily conflates prestige with morality. We see it in words that implicitly equate status with virtue—chivalrous, classy, gentlemanly, honorable, noble—and low rank with sin—low-class, low-rent, mean, nasty, shabby, shoddy, villain (originally meaning “peasant”), vulgar. The Myth of the Noble Noble is obvious in contemporary celebrity worship. Members of the royalty like Princess Diana and her American equivalent, John F. Kennedy Jr., are awarded the trappings of sainthood even though they were morally unexceptional people (yes, Diana supported charities, but that’s pretty much the job description of a princess in this day and age). Their good looks brighten their halos even more, because people judge attractive men and women to be more virtuous. Prince Charles, who also supports charities, will never be awarded the trappings of sainthood, even if he dies a tragic death.
People also confuse morality with purity, even in the secular West. Remember from Chapter 1 that many words for cleanliness and dirt are also words for virtue and sin (pure, unblemished, tainted, and so on). Haidt’s subjects seem to have conflated contamination with sin when they condemned eating a dog, having sex with a dead chicken, and enjoying consensual incest (which reflects our instinctive repulsion toward sex with siblings, an emotion that evolved to deter inbreeding).
The mental mix-up of the good and the clean can have ugly consequences. Racism and sexism are often expressed as a desire to avoid pollutants, as in the ostracism of the “untouchable” caste in India, the sequestering of menstruating women in Orthodox Judaism, the fear of contracting AIDS from casual contact with gay men, the segregated facilities for eating, drinking, bathing, and sleeping under the Jim Crow and apartheid policies, and the “racial hygiene” laws in Nazi Germany. One of the haunting questions of twentieth-century history is how so many ordinary people committed wartime atrocities. The philosopher Jonathan Glover has documented that a common denominator is degradation: a diminution of the victim’s status or cleanliness or both. When someone strips a person of dignity by making jokes about his suffering, giving him a humiliating appearance (a dunce cap, awkward prison garb, a crudely shaved head), or forcing him to live in filthy conditions, ordinary people’s compassion can evaporate and they find it easy to treat him like an animal or object.
To the cultural right, all this shows that morality has been under assault from the cultural elite, as we see in the sect that calls itself the Moral Majority. To the left, it shows that the desire to stigmatize private behavior is archaic and repressive, as in H. L. Mencken’s definition of Puritanism as “the haunting fear that someone, somewhere, may be happy.” Both sides are wrong. As if to compensate for all the behaviors that have been amoralized in recent decades, we are in the midst of a campaign to moralize new ones. The Babbitts and the bluenoses have been replaced by the activists for a nanny state and the college towns with a foreign policy, but the psychology of moralization is the same. Here are some examples of things that have acquired a moral coloring only recently:
advertising to children • automobile safety • Barbie dolls • “big box” chain stores • cheesecake photos • clothing from Third World factories • consumer product safety • corporate-owned farms • defense-funded research • disposable diapers • disposable packaging • ethnic jokes • executive salaries • fast food • flirtation in the workplace • food additives • fur • hydroelectric dams • IQ tests • logging • mining • nuclear power • oil drilling • owning certain stocks • poultry farms • public holidays (Columbus Day, Martin Luther King Day) • research on AIDS • research on breast cancer • spanking • suburbia (“sprawl”) • sugar • tax cuts • toy guns • violence on television • weight of fashion models
Many of these things can have harmful consequences, of course, and no one would want them trivialized. The question is whether they are best handled by the psychology of moralization (with its search for villains, elevation of accusers, and mobilization of authority to mete out punishment) or in terms of costs and benefits, prudence and risk, or good and bad taste. Pollution, for example, is often treated as a crime of defiling the sacred, as in the song by the rock group Traffic: “Why don’t we . . . try to save this land, and make a promise not to hurt again this holy ground.” This can be contrasted with the attitude of economists like Robert Frank, who (alluding to the costs of cleanups) said, “There is an optimal amount of pollution in the environment, just as there is an optimal amount of dirt in your house.”
Moreover, all human activities have consequences, often with various degrees of benefit and harm to different parties, but not all of them are conceived as immoral. We don’t show contempt to the man who fails to change the batteries in his smoke alarms, takes his family on a driving vacation (multiplying their risk of accidental death), or moves to a rural area (increasing pollution and fuel use in commuting and shopping). Driving a gas-guzzling SUV is seen as morally dubious, but driving a gas-guzzling Volvo is not; eating a Big Mac is suspect, but eating imported cheese or tiramisù is not. Becoming aware of the psychology of moralization need not make us morally obtuse. On the contrary, it can alert us to the possibility that a decision to treat an act in terms of virtue and sin as opposed to cost and benefit has been made on morally irrelevant grounds—in particular, whether the saints and sinners would be in one’s own coalition or someone else’s. Much of what is today called “social criticism” consists of members of the upper classes denouncing the tastes of the lower classes (bawdy entertainment, fast food, plentiful consumer goods) while considering themselves egalitarians.
Tetlock points out that it is in the very nature of our commitments to other people to deny that we can put a price on them: “To transgress these normative boundaries, to attach a monetary value to one’s friendships or one’s children or one’s loyalty to one’s country, is to disqualify oneself from certain societal roles, to demonstrate that one just ‘doesn’t get it’—one does not understand what it means to be a true friend or parent or citizen.” Taboo tradeoffs, which pit a sacred value against a secular one (such as money), are “morally corrosive: the longer one contemplates indecent proposals, the more irreparably one compromises one’s moral identity.”
Unfortunately, a psychology that treats some desiderata as having infinite value can lead to absurdities. Tetlock reviews some examples. The Delaney Clause of the Food and Drug Act of 1958 sought to improve public health by banning all new food additives for which there was any risk of carcinogenicity. That sounded good but wasn’t. The policy left people exposed to more-dangerous food additives that were already on the market, it created an incentive for manufacturers to introduce new dangerous additives as long as they were not carcinogenic, and it outlawed products that could have saved more lives than they put at risk, such as the saccharin used by diabetics. Similarly, after the discovery of hazardous waste at the Love Canal in 1978, Congress passed the Superfund Act, which required the complete cleanup of all hazardous waste sites. It turned out to cost millions of dollars to clean up the last 10 percent of the waste at a given site—money that could have been spent on cleaning up other sites or reducing other health risks. So the lavish fund went bankrupt before even a fraction of its sites could be decontaminated, and its effect on Americans’ health was debatable. After the Exxon Valdez oil spill, four-fifths of the respondents in one poll said that the country should pursue greater environmental protection “regardless of cost.” Taken literally, that meant they were prepared to shut down all schools, hospitals, and police and fire stations, stop funding social programs, medical research, foreign aid, and national defense, or raise the income tax rate to 99 percent, if that is what it would have cost to protect the environment.
Tetlock observes that these fiascoes came about because any politician who honestly presented the inexorable tradeoffs would be crucified for violating a taboo. He would be guilty of “tolerating poisons in our food and water,” or worse, “putting a dollar value on human life.” Policy analysts note that we are stuck with wasteful and inegalitarian entitlement programs because any politician who tried to reform them would be committing political suicide. Savvy opponents would frame the reform in the language of taboo: “breaking our faith with the elderly,” “betraying the sacred trust of veterans who risked their lives for their country,” “scrimping on the care and education of the young.”
But there is still much to be wary of in human moralizing: the confusion of morality with status and purity, the temptation to overmoralize matters of judgment and thereby license aggression against those with whom we disagree, the taboos on thinking about unavoidable tradeoffs, and the ubiquitous vice of self-deception, which always manages to put the self on the side of the angels. Hitler was a moralist (indeed, a moral vegetarian) who, by most accounts, was convinced of the rectitude of his cause. As the historian Ian Buruma wrote, “This shows once again that true believers can be more dangerous than cynical operators. The latter might cut a deal; the former have to go to the end—and drag the world down with them.”
My own view is that the new sciences of human nature really do vindicate some version of the Tragic Vision and undermine the Utopian outlook that until recently dominated large segments of intellectual life. The sciences say nothing, of course, about differences in values that are associated with particular right-wing and left-wing positions (such as in the tradeoffs between unemployment and environmental protection, diversity and economic inefficiency, or individual freedom and community cohesion). Nor do they speak directly to policies that are based on a complex mixture of assumptions about the world. But they do speak to the parts of the visions that are general claims about how the mind works. Those claims may be evaluated against the facts, just like any empirical hypothesis. The Utopian vision that human nature might radically change in some imagined society of the remote future is, of course, literally unfalsifiable, but I think that many of the discoveries recounted in preceding chapters make it unlikely. Among them I would include the following:
The primacy of family ties in all human societies and the consequent appeal of nepotism and inheritance.
The limited scope of communal sharing in human groups, the more common ethos of reciprocity, and the resulting phenomena of social loafing and the collapse of contributions to public goods when reciprocity cannot be implemented.
The universality of dominance and violence across human societies (including supposedly peaceable hunter-gatherers) and the existence of genetic and neurological mechanisms that underlie it.
The universality of ethnocentrism and other forms of group-against-group hostility across societies, and the ease with which such hostility can be aroused in people within our own society.
The partial heritability of intelligence, conscientiousness, and antisocial tendencies, implying that some degree of inequality will arise even in perfectly fair economic systems, and that we therefore face an inherent tradeoff between equality and freedom.
The prevalence of defense mechanisms, self-serving biases, and cognitive dissonance reduction, by which people deceive themselves about their autonomy, wisdom, and integrity.
The biases of the human moral sense, including a preference for kin and friends, a susceptibility to a taboo mentality, and a tendency to confuse morality with conformity, rank, cleanliness, and beauty.
It is not just conventional scientific data that tell us the mind is not infinitely malleable. I think it is no coincidence that beliefs that were common among intellectuals in the 1960s—that democracies are obsolete, revolution is desirable, the police and armed forces dispensable, and society designable from the top down—are now rarer. The Tragic Vision and the Utopian Vision inspired historical events whose interpretations are much clearer than they were just a few decades ago. Those events can serve as additional data to test the visions’ claims about human psychology.
The visions contrast most sharply in the political revolutions they spawned. The first revolution with a Utopian Vision was the French Revolution—recall Wordsworth’s description of the times, with “human nature seeming born again.” The revolution overthrew the ancien régime and sought to begin from scratch with the ideals of liberty, equality, and fraternity and a belief that salvation would come from vesting authority in a morally superior breed of leaders. The revolution, of course, sent one leader after another to the guillotine as each failed to measure up to usurpers who felt they had a stronger claim to wisdom and virtue. No political structure survived the turnover of personnel, leaving a vacuum that would be filled by Napoleon. The Russian Revolution was also animated by the Utopian Vision, and it also burned through a succession of leaders before settling into the personality cult of Stalin. The Chinese Revolution, too, put its faith in the benevolence and wisdom of a man who displayed, if anything, a particularly strong dose of human foibles like dominance, lust, and self-deception. The perennial limitations of human nature prove the futility of political revolutions based only on the moral aspirations of the revolutionaries. In the words of the song about revolution by The Who: Meet the new boss; same as the old boss.
The politics of economic inequality ultimately hinge on a tradeoff between economic freedom and economic equality. Though scientists cannot dictate how these desiderata should be weighted, they can help assess the morally relevant costs and thereby enable us to make a more informed decision. Once again the psychology of status and dominance has a role to play in this assessment. In absolute terms, today’s poor are materially better off than the aristocracy of just a century ago. They live longer, are better fed, and enjoy formerly unimaginable luxuries such as central heating, refrigerators, telephones, and round-the-clock entertainment from television and radio. Conservatives say this makes it hard to argue that the station of lower-income people is an ethical outrage that ought to be redressed at any cost.
But if people’s sense of well-being comes from an assessment of their social status, and social status is relative, then extreme inequality can make people on the lower rungs feel defeated even if they are better off than most of humanity. It is not just a matter of hurt feelings: people with lower status are less healthy and die younger, and communities with greater inequality have poorer health and shorter life expectancies. The medical researcher Richard Wilkinson, who documented these patterns, argues that low status triggers an ancient stress reaction that sacrifices tissue repair and immune function for an immediate fight-or-flight response. Wilkinson, together with Martin Daly and Margo Wilson, has pointed to another measurable cost of economic inequality. Crime rates are much higher in regions with greater disparities of wealth (even after controlling for absolute levels of wealth), partly because chronic low status leads men to become obsessed with rank and to kill one another over trivial insults. Wilkinson argues that reducing economic inequality would make millions of lives happier, safer, and longer.
Admittedly, it is easy to equate health and rationality with morality. The metaphors pervade the English language, as when we call an evildoer crazy, degenerate, depraved, deranged, mad, malignant, psycho, sick, or twisted. But the metaphors are bound to mislead us when we contemplate the causes of violence and ways to reduce it. Termites are not malfunctioning when they eat the wooden beams in houses, nor are mosquitoes when they bite a victim and spread the malaria parasite. They are doing exactly what evolution designed them to do, even if the outcome makes people suffer. For scientists to moralize about these creatures or call their behavior pathological would only send us all down blind alleys, such as a search for the “toxic” influences on these creatures or a “cure” that would restore them to health. For the same reason, human violence does not have to be a disease for it to be worth combating. If anything, it is the belief that violence is an aberration that is dangerous, because it lulls us into forgetting how easily violence may erupt in quiescent places.
The Blank Slate and the Noble Savage owe their support not just to their moral appeal but to enforcement by ideology police. The blood libel against Napoleon Chagnon for documenting warfare among the Yanomamö is the most lurid example of the punishment of heretics, but it is not the only one. In 1992 a Violence Initiative in the Alcohol, Drug Abuse, and Mental Health Administration was canceled because of false accusations that the research aimed to sedate inner-city youth and to stigmatize them as genetically prone to violence. (In fact, it advocated the public health approach.) A conference and book on the legal and moral issues surrounding the biology of violence, which was to include advocates of all viewpoints, was canceled by Bernadine Healy, director of the National Institutes of Health, who overruled a unanimous peer-review decision because of concerns “associated with the sensitivity and validity of the proposed conference.” The university sponsoring the conference appealed and won, but when the conference was held three years later, protesters invaded the hall and, as if to provide material for comedians, began a shoving match with the participants.
What was everyone so sensitive about? The stated fear was that the government would define political unrest in response to inequitable social conditions as a psychiatric disease and silence the protesters by drugging them or worse. The radical psychiatrist Peter Breggin called the Violence Initiative “the most terrifying, most racist, most hideous thing imaginable” and “the kind of plan one would associate with Nazi Germany.” The reasons included “the medicalization of social issues, the declaration that the victim of oppression, in this case the Jew, is in fact a genetically and biologically defective person, the mobilization of the state for eugenic purposes and biological purposes, the heavy use of psychiatry in the development of social-control programs.” This is a fanciful, indeed paranoid, reading, but Breggin has tirelessly repeated it, especially to African American politicians and media outlets. Anyone using the words “violence” and “biology” in the same paragraph may be put under a cloud of suspicion for racism, and this has affected the intellectual climate regarding violence. No one has ever gotten into trouble for saying that violence is completely learned.
The generalization that anarchy in the sense of a lack of government leads to anarchy in the sense of violent chaos may seem banal, but it is often overlooked in today's still-romantic climate. Government in general is anathema to many conservatives, and the police and prison system are anathema to many liberals. Many people on the left, citing uncertainty about the deterrent value of capital punishment compared to life imprisonment, maintain that deterrence is not effective in general. And many oppose more effective policing of inner-city neighborhoods, even though it may be the most effective way for their decent inhabitants to abjure the code of the streets. Certainly we must combat the racial inequities that put too many African American men in prison, but as the legal scholar Randall Kennedy has argued, we must also combat the racial inequities that leave too many African Americans exposed to criminals. Many on the right oppose decriminalizing drugs, prostitution, and gambling without factoring in the costs of the zones of anarchy that, by their own free-market logic, are inevitably spawned by prohibition policies. When demand for a commodity is high, suppliers will materialize, and if they cannot protect their property rights by calling the police, they will do so with a violent culture of honor. (This is distinct from the moral argument that our current drug policies incarcerate multitudes of nonviolent people.) Schoolchildren are currently fed the disinformation that Native Americans and other peoples in pre-state societies were inherently peaceable, leaving them uncomprehending, indeed contemptuous, of one of our species’ greatest inventions, democratic government and the rule of law.
Where Hobbes fell short was in dealing with the problem of policing the police. In his view, civil war was such a calamity that any government—monarchy, aristocracy, or democracy—was preferable to it. He did not seem to appreciate that in practice a leviathan would not be an otherworldly sea monster but a human being or group of them, complete with the deadly sins of greed, mistrust, and honor. (As we saw in the preceding chapter, this became the obsession of the heirs of Hobbes who framed the American Constitution.) Armed men are always a menace, so police who are not under tight democratic control can be a far worse calamity than the crime and feuding that go on without them. In the twentieth century, according to the political scientist R. J. Rummel in Death by Government, 170 million people were killed by their own governments. Nor is murder-by-government a relic of the tyrannies of the middle of the century. The World Conflict List for the year 2000 reported:
The stupidest conflict in this year’s count is Cameroon. Early in the year, Cameroon was experiencing widespread problems with violent crime. The government responded to this crisis by creating and arming militias and paramilitary groups to stamp out the crime extrajudicially. Now, while violent crime has fallen, the militias and paramilitaries have created far more chaos and death than crime ever would have. Indeed, as the year wore on mass graves were discovered that were tied to the paramilitary groups.
The pattern is familiar from other regions of the world (including our own) and shows that civil libertarians’ concern about abusive police practices is an indispensable counterweight to the monopoly on violence we grant the state.
FEMINISM IS OFTEN derided because of the arguments of its lunatic fringe—for example, that all intercourse is rape, that all women should be lesbians, or that only 10 percent of the population should be allowed to be male. Feminists reply that proponents of women’s rights do not speak with one voice, and that feminist thought comprises many positions, which have to be evaluated independently. That is completely legitimate, but it cuts both ways. To criticize a particular feminist proposal is not to attack feminism in general.
Anyone familiar with academia knows that it breeds ideological cults that are prone to dogma and resistant to criticism. Many women believe that this has now happened to feminism. In her book Who Stole Feminism? the philosopher Christina Hoff Sommers draws a useful distinction between two schools of thought. Equity feminism opposes sex discrimination and other forms of unfairness to women. It is part of the classical liberal and humanistic tradition that grew out of the Enlightenment, and it guided the first wave of feminism and launched the second wave. Gender feminism holds that women continue to be enslaved by a pervasive system of male dominance, the gender system, in which “bi-sexual infants are transformed into male and female gender personalities, the one destined to command, the other to obey.” It is opposed to the classical liberal tradition and allied instead with Marxism, postmodernism, social constructionism, and radical science. It has become the credo of some women’s studies programs, feminist organizations, and spokespeople for the women’s movement.
Equity feminism is a moral doctrine about equal treatment that makes no commitments regarding open empirical issues in psychology or biology. Gender feminism is an empirical doctrine committed to three claims about human nature. The first is that the differences between men and women have nothing to do with biology but are socially constructed in their entirety. The second is that humans possess a single social motive—power—and that social life can be understood only in terms of how it is exercised. The third is that human interactions arise not from the motives of people dealing with each other as individuals but from the motives of groups dealing with other groups—in this case, the male gender dominating the female gender.
It is not just gender feminism’s collision with science that repels many feminists. Like other inbred ideologies, it has produced strange excrescences, like the offshoot known as difference feminism. Carol Gilligan has become a gender-feminist icon because of her claim that men and women guide their moral reasoning by different principles: men think about rights and justice; women have feelings of compassion, nurturing, and peaceful accommodation. If true, it would disqualify women from becoming constitutional lawyers, Supreme Court justices, and moral philosophers, who make their living by reasoning about rights and justice. But it is not true. Many studies have tested Gilligan’s hypothesis and found that men and women differ little or not at all in their moral reasoning. So difference feminism offers women the worst of both worlds: invidious claims without scientific support. Similarly, the gender-feminist classic called Women’s Ways of Knowing claims that the sexes differ in their styles of reasoning. Men value excellence and mastery in intellectual matters and skeptically evaluate arguments in terms of logic and evidence; women are spiritual, relational, inclusive, and credulous. With sisters like these, who needs male chauvinists?
Gender feminism’s disdain for analytical rigor and classical liberal principles has recently been excoriated by equity feminists, among them Jean Bethke Elshtain, Elizabeth Fox-Genovese, Wendy Kaminer, Noretta Koertge, Donna Laframboise, Mary Lefkowitz, Wendy McElroy, Camille Paglia, Daphne Patai, Virginia Postrel, Alice Rossi, Sally Satel, Christina Hoff Sommers, Nadine Strossen, Joan Kennedy Taylor, and Cathy Young. Well before them, prominent women writers demurred from gender-feminist ideology, including Joan Didion, Doris Lessing, Iris Murdoch, Cynthia Ozick, and Susan Sontag. And ominously for the movement, a younger generation has rejected the gender feminists’ claims that love, beauty, flirtation, erotica, art, and heterosexuality are pernicious social constructs. The title of the book The New Victorians: A Young Woman’s Challenge to the Old Feminist Order captures the revolt of such writers as Rene Denfeld, Karen Lehrman, Katie Roiphe, and Rebecca Walker, and of the movements called Third Wave, Riot Grrrl Movement, Pro-Sex Feminism, Lipstick Lesbians, Girl Power, and Feminists for Free Expression.
The difference between gender feminism and equity feminism accounts for the oft-reported paradox that most women do not consider themselves feminists (about 70 percent in 1997, up from about 60 percent a decade before), yet they agree with every major feminist position. The explanation is simple: the word “feminist” is often associated with gender feminism, but the positions in the polls are those of equity feminism. Faced with these signs of slipping support, gender feminists have tried to stipulate that only they can be considered the true advocates of women’s rights. For example, in 1992 Gloria Steinem said of Paglia, “Her calling herself a feminist is sort of like a Nazi saying they’re not anti-Semitic.” And they have invented a lexicon of epithets for what in any other area would be called disagreement: “backlash,” “not getting it,” “silencing women,” “intellectual harassment.”
WHY ARE PEOPLE so afraid of the idea that the minds of men and women are not identical in every respect? Would we really be better off if everyone were like Pat, the androgynous nerd from Saturday Night Live? The fear, of course, is that different implies unequal—that if the sexes differed in any way, then men would have to be better, or more dominant, or have all the fun.
Nothing could be farther from biological thinking. Trivers alluded to a “symmetry in human relationships,” which embraced a “genetic equality of the sexes.” From a gene’s point of view, being in the body of a male and being in the body of a female are equally good strategies, at least on average (circumstances can nudge the advantage somewhat in either direction). Natural selection thus tends toward an equal investment in the two sexes: equal numbers, an equal complexity of bodies and brains, and equally effective designs for survival. Is it better to be the size of a male baboon and have six-inch canine teeth or to be the size of a female baboon and not have them? Merely to ask the question is to reveal its pointlessness. A biologist would say that it’s better to have the male adaptations to deal with male problems and the female adaptations to deal with female problems.
There is one more reason that acknowledging sex differences can be more humane than denying them. It is men and women, not the male gender and the female gender, who prosper or suffer, and those men and women are endowed with brains—perhaps not identical brains—that give them values and an ability to make choices. The choices should be respected. A regular feature of the lifestyle pages is the story about women who are made to feel ashamed about staying at home with their children. As they always say, “I thought feminism was supposed to be about choices.” The same should apply to women who do choose to work but also to trade off some income in order to “have a life” (and, of course, to men who make that choice). It is not obviously progressive to insist that equal numbers of men and women work eighty-hour weeks in a corporate law firm or leave their families for months at a time to dodge steel pipes on a frigid oil platform. And it is grotesque to demand (as advocates of gender parity did in the pages of Science) that more young women “be conditioned to choose engineering,” as if they were rats in a Skinner box.
Gottfredson points out, “If you insist on using gender parity as your measure of social justice, it means you will have to keep many men and women out of the work they like best and push them into work they don’t like.” She is echoed by Kleinfeld on the leaky pipeline in science: “We should not be sending [gifted] women the messages that they are less worthy human beings, less valuable to our civilization, lazy or low in status, if they choose to be teachers rather than mathematicians, journalists rather than physicists, lawyers rather than engineers.” These are not hypothetical worries: a recent survey by the National Science Foundation found that many more women than men say they majored in science, mathematics, or engineering under pressure from teachers or family members rather than to pursue their own aspirations—and that many eventually switched out for that reason. I will give the final word to Margaret Mead, who, despite being wrong in her early career about the malleability of gender, was surely right when she said, “If we are to achieve a richer culture, rich in contrasting values, we must recognize the whole gamut of human potentialities, and so weave a less arbitrary social fabric, one in which each diverse human gift will find a fitting place.”
The violence-not-sex slogan is right about two things. Both parts are absolutely true for the victim: a woman who is raped experiences it as a violent assault, not as a sexual act. And the part about violence is true for the perpetrator by definition: if there is no violence or coercion, we do not call it rape. But the fact that rape has something to do with violence does not mean it has nothing to do with sex, any more than the fact that armed robbery has something to do with violence means it has nothing to do with greed. Evil men may use violence to get sex, just as they use violence to get other things they want.
I believe that the rape-is-not-about-sex doctrine will go down in history as an example of extraordinary popular delusions and the madness of crowds. It is preposterous on the face of it, does not deserve its sanctity, is contradicted by a mass of evidence, and is getting in the way of the only morally relevant goal surrounding rape, the effort to stamp it out.
Think about it. First obvious fact: Men often want to have sex with women who don't want to have sex with them. They use every tactic that one human being uses to affect the behavior of another: wooing, seducing, flattering, deceiving, sulking, and paying. Second obvious fact: Some men use violence to get what they want, indifferent to the suffering they cause. Men have been known to kidnap children for ransom (sometimes sending their parents an ear or finger to show they mean business), blind the victim of a mugging so the victim can't identify them in court, shoot out the kneecaps of an associate as punishment for ratting to the police or invading their territory, and kill a stranger for his brand-name athletic footwear. It would be an extraordinary fact, contradicting everything else we know about people, if some men didn’t use violence to get sex.
Let’s also apply common sense to the doctrine that men rape to further the interests of their gender. A rapist always risks injury at the hands of the woman defending herself. In a traditional society, he risks torture, mutilation, and death at the hands of her relatives. In a modern society, he risks a long prison term. Are rapists really assuming these risks as an altruistic sacrifice to benefit the billions of strangers that make up the male gender? The idea becomes even less credible when we remember that rapists tend to be losers and nobodies, while presumably the main beneficiaries of the patriarchy are the rich and powerful. Men do sacrifice themselves for the greater good in wartime, of course, but they are either conscripted against their will or promised public adulation for their exploits. But rapists usually commit their acts in private and try to keep them secret. And in most times and places, a man who rapes a woman in his community is treated as scum. The idea that all men are engaged in brutal warfare against all women clashes with the elementary fact that men have mothers, daughters, sisters, and wives, whom they care for more than they care for most other men. To put the same point in biological terms, every person’s genes are carried in the bodies of other people, half of whom are of the opposite sex.
Yes, we must deplore the sometimes casual treatment of women’s autonomy in popular culture. But can anyone believe that our culture literally “teaches men to rape” or “glorifies the rapist”? Even the callous treatment of rape victims in the judicial system of yesteryear has a simpler explanation than that all men benefit by rape. Until recently jurors in rape cases were given a warning from the seventeenth-century jurist Lord Matthew Hale that they should evaluate a woman's testimony with caution, because a rape charge is “easily made and difficult to defend against, even if the accused is innocent.” The principle is consistent with the presumption of innocence built into our judicial system and with its preference to let ten guilty people go free rather than jail one innocent. Even so, let’s suppose that the men who applied this policy to rape did tilt it toward their own collective interests. Let’s suppose that they leaned on the scales of justice to minimize their own chances of ever being falsely accused of rape (or accused under ambiguous circumstances) and that they placed insufficient value on the injustice endured by women who would not see their assailants put behind bars. That would indeed be unjust, but it is still not the same thing as encouraging rape as a conscious tactic to keep women down. If that were men’s tactic, why would they have made rape a crime in the first place?
As for the morality of believing the not-sex theory, there is none. If we have to acknowledge that sexuality can be a source of conflict and not just wholesome mutual pleasure, we will have rediscovered a truth that observers of the human condition have noted throughout history. And if a man rapes for sex, that does not mean that he “just can’t help it” or that we have to excuse him, any more than we have to excuse the man who shoots the owner of a liquor store to raid the cash register or who bashes a driver over the head to steal his BMW. The great contribution of feminism to the morality of rape is to put issues of consent and coercion at center stage. The ultimate motives of the rapist are irrelevant.
Finally, think about the humanity of the picture that the gender-feminist theory has painted. As the equity feminist Wendy McElroy points out, the theory holds that “even the most loving and gentle husband, father, and son is a beneficiary of the rape of women they love. No ideology that makes such vicious accusations against men as a class can heal any wounds. It can only provoke hostility in return.”
Scientific research on rape and its connections to human nature was thrown into the spotlight in 2000 with the publication of A Natural History of Rape. Thornhill and Palmer began with a basic observation: a rape can result in a conception, which could propagate the genes of the rapist, including any genes that had made him likely to rape. Therefore, a male psychology that included a capacity to rape would not have been selected against, and could have been selected for. Thornhill and Palmer argued that rape is unlikely to be a typical mating strategy because of the risk of injury at the hands of the victim and her relatives and the risk of ostracism from the community. But it could be an opportunistic tactic, becoming more likely when the man is unable to win the consent of women, alienated from a community (and thus undeterred by ostracism), and safe from detection and punishment (such as in wartime or pogroms). Thornhill and Palmer then outlined two theories. Opportunistic rape could be a Darwinian adaptation that was specifically selected for, as in certain insects that have an appendage with no function other than restraining a female during forced copulation. Or rape could be a by-product of two other features of the male mind: a desire for sex and a capacity to engage in opportunistic violence in pursuit of a goal. The two authors disagreed on which hypothesis was better supported by the data, and they left that issue unresolved.
No honest reader could conclude that the authors think rape is “natural” in the vernacular sense of being welcome or unavoidable. The first words of the book are, “As scientists who would like to see rape eradicated from human life . . .,” which are certainly not the words of people who think it is inevitable. Thornhill and Palmer discuss the environmental circumstances that affect the likelihood of rape, and they offer suggestions on how to reduce it. The idea that most men have the capacity to rape works, if anything, in the interests of women, because it calls for vigilance against acquaintance rape, marital rape, and rape during societal breakdowns. Indeed, the analysis jibes with Brownmiller’s own data that ordinary men, including “nice” American boys in Vietnam, may rape in wartime. For that matter, Thornhill and Palmer’s hypothesis that rape is on a continuum with the rest of male sexuality makes them strange allies with the most radical gender feminists, such as Catharine MacKinnon and Andrea Dworkin, who said that “seduction is often difficult to distinguish from rape. In seduction, the rapist often bothers to buy a bottle of wine.”
Most important, the book focuses in equal part on the pain of the victims. (Its draft title was Why Men Rape, Why Women Suffer.) Thornhill and Palmer explain in Darwinian terms why females throughout the animal kingdom resist being forced into sex, and argue that the agony that rape victims feel is deeply rooted in women’s nature. Rape subverts female choice, the core of the ubiquitous mechanism of sexual selection. By choosing the male and the circumstances for sex, a female can maximize the chances that her offspring will be fathered by a male with good genes, a willingness and ability to share the responsibility of rearing the offspring, or both. As John Tooby and Leda Cosmides have put it, this ultimate (evolutionary) calculus explains why women evolved “to exert control over their own sexuality, over the terms of their relationships, and over the choice of which men are to be the fathers of their children.” They resist being raped, and they suffer when their resistance fails, because “control over their sexual choices and relationships was wrested from them.”
Thornhill and Palmer’s theory reinforces many points of an equity-feminist analysis. It predicts that from the woman’s point of view, rape and consensual sex are completely different. It affirms that women’s repugnance toward rape is not a symptom of neurotic repression, nor is it a social construct that could easily be the reverse in a different culture. It predicts that the suffering caused by rape is deeper than the suffering caused by other physical traumas or body violations. That justifies our working harder to prevent rape, and punishing the perpetrators more severely, than we do for other kinds of assault. Compare this analysis with the dubious claim by two gender feminists that an aversion to rape has to be pounded into women by every social influence they can think of:
Female fear [results] not only from women’s personal backgrounds but from what women as a group have imbibed from history, religion, culture, social institutions, and everyday social interactions. Learned early in life, female fear is continually reinforced by such social institutions as the school, the church, the law, and the press. Much is also learned from parents, siblings, teachers, and friends.
THE PAYOFF FOR a reality-based understanding of rape is the hope of reducing or eliminating it. Given the theories on the table, the possible sites for levers of influence include violence, sexist attitudes, and sexual desire.
Everyone agrees that rape is a crime of violence. Probably the biggest amplifier of rape is lawlessness. The rape and abduction of women is often a goal of raiding in non-state societies, and rape is common in wars between states and riots between ethnic groups. In peacetime, the rates of rape tend to track rates of other violent crime. In the United States, for example, the rate of forcible rape went up in the 1960s and down in the 1990s, together with the rates of other violent crimes. Gender feminists blame violence against women on civilization and social institutions, but this is exactly backwards. Violence against women flourishes in societies that are outside the reach of civilization, and erupts whenever civilization breaks down.
Though I know of no quantitative studies, the targeting of sexist attitudes does not seem to be a particularly promising avenue for reducing rape, though of course it is desirable for other reasons. Countries with far more rigid gender roles than the United States, such as Japan, have far lower rates of rape, and within the United States the sexist 1950s were far safer for women than the more liberated 1970s and 1980s. If anything, the correlation might go in the opposite direction. As women gain greater freedom of movement because they are independent of men, they will more often find themselves in dangerous situations. What about measures that focus on the sexual components of rape? Thornhill and Palmer suggested that teenage boys be forced to take a rape-prevention course as a condition for obtaining a driver’s license, and that women should be reminded that dressing in a sexually attractive way may increase their risk of being raped. These untested prescriptions are an excellent illustration of why scientists should stay out of the policy business, but they don’t deserve the outrage that followed. Mary Koss, described as an authority on rape, said, “The thinking is absolutely unacceptable in a democratic society.” (Note the psychology of taboo—not only is their suggestion wrong, but merely thinking it is “absolutely unacceptable.”) Koss continues, “Because rape is a gendered crime, such recommendations harm equality. They infringe more on women’s liberties than men’s.”
One can understand the repugnance at any suggestion that an attractively dressed woman excites an irresistible impulse to rape, or that culpability in any crime should be shifted from the perpetrator to the victim. But Thornhill and Palmer said neither of those things. They were offering a recommendation based on prudence, not an assignment of blame based on justice. Of course women have a right to dress in any way they please, but the issue is not what women have the right to do in a perfect world but how they can maximize their safety in this world. The suggestion that women in dangerous situations be mindful of reactions they may be eliciting or signals they may inadvertently be sending is just common sense, and it’s hard to believe any grownup would think otherwise—unless she has been indoctrinated by the standard rape-prevention programs that tell women that “sexual assault is not an act of sexual gratification” and that “appearance and attractiveness are not relevant.” Equity feminists have called attention to the irresponsibility of such advice, in terms far harsher than anything by Thornhill and Palmer. Paglia, for example, wrote:
For a decade, feminists have drilled their disciples to say, “Rape is a crime of violence but not sex.” This sugar-coated Shirley Temple nonsense has exposed young women to disaster. Misled by feminism, they do not expect rape from the nice boys from good homes who sit next to them in class. . . .
These girls say, “Well, I should be able to get drunk at a fraternity party and go upstairs to a guy’s room without anything happening.” And I say, “Oh, really? And when you drive your car to New York City, do you leave your keys on the hood?” My point is that if your car is stolen after you do something like that, yes, the police should pursue the thief and he should be punished. But at the same time, the police—and I—have the right to say to you, “You stupid idiot, what the hell were you thinking?”
Similarly, McElroy points out the illogic of arguments like Koss’s that women should not be given practical advice that “infringes more on women’s liberties than men’s”:
The fact that women are vulnerable to attack means we cannot have it all. We cannot walk at night across an unlit campus or down a back alley, without incurring real danger. These are things every woman should be able to do, but “should” belongs in a utopian world. It belongs in a world where you drop your wallet in a crowd and have it returned, complete with credit cards and cash. A world in which unlocked Porsches are parked in the inner city. And children can be left unattended in the park. This is not the reality that confronts and confines us.
The three laws of behavioral genetics may be the most important discoveries in the history of psychology. Yet most psychologists have not come to grips with them, and most intellectuals do not understand them, even when they have been explained in the cover stories of newsmagazines. It is not because the laws are abstruse: each can be stated in a sentence, without mathematical paraphernalia. Rather, it is because the laws run roughshod over the Blank Slate, and the Blank Slate is so entrenched that many intellectuals cannot comprehend an alternative to it, let alone argue about whether it is right or wrong.
Here are the three laws:
The First Law: All human behavioral traits are heritable.
The Second Law: The effect of being raised in the same family is smaller than the effect of the genes.
The Third Law: A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families.
It’s not that parents “don’t matter.” In many ways parents matter a great deal. For most of human existence, the most important thing parents did for their children was keep them alive. Parents can certainly harm their children by abusing or neglecting them. Children appear to need some kind of nurturing figure in their early years, though it needn’t be a parent, and possibly not even an adult: young orphans and refugees often turn out relatively well if they had the comfort of other children, even if they had no parents or other adults around them. (This does not mean that the children were happy, but contrary to popular belief, unhappy children do not necessarily turn into dysfunctional adults.) Parents select an environment for their children and thereby select a peer group. They provide their children with skills and knowledge, such as reading and playing a musical instrument. And they certainly may affect their children’s behavior in the home, just as any powerful people can affect behavior within their fiefdom. But parents’ behavior does not seem to shape their children’s intelligence or personality over the long term. Upon hearing this, many people ask, “So you’re saying it doesn’t matter how I treat my child?” It is a revealing question…
NOT EVERYONE IS so accepting of fate, or of the other forces beyond a parent’s control, like genes and peers. “I hope to God this isn’t true,” one mother said to the Chicago Tribune. “The thought that all this love that I’m pouring into him counts for nothing is too terrible to contemplate.” As with other discoveries about human nature, people hope to God it isn’t true. But the truth doesn’t care about our hopes, and sometimes it can force us to revisit those hopes in a liberating way.
Yes, it is disappointing that there is no algorithm for growing a happy and successful child. But would we really want to specify the traits of our children in advance, and never be delighted by the unpredictable gifts and quirks that every child brings into the world? People are appalled by human cloning and its dubious promise that parents can design their children by genetic engineering. But how different is that from the fantasy that parents can design their children by how they bring them up? Realistic parents would be less anxious parents. They could enjoy their time with their children rather than constantly trying to stimulate them, socialize them, and improve their characters. They could read stories to their children for the pleasure of it, not because it’s good for their neurons.
Many critics accuse [Judith Rich] Harris of trying to absolve parents of responsibility for their children’s lives: if the kids turn out badly, parents can say it’s not their fault. But by the same token she is assigning adults responsibility for their own lives: if your life is not going well, stop moaning that it’s all your parents’ fault. She is rescuing mothers from fatuous theories that blame them for every misfortune that befalls their children, and from the censorious know-it-alls who make them feel like ogres if they slip out of the house to work or skip a reading of Goodnight Moon. And the theory assigns us all a collective responsibility for the health of the neighborhoods and culture in which peer groups are embedded.
Finally: “So you’re saying it doesn’t matter how I treat my children?” What a question! Yes, of course it matters. Harris reminds her readers of the reasons.
First, parents wield enormous power over their children, and their actions can make a big difference to their happiness. Childrearing is above all an ethical responsibility. It is not OK for parents to beat, humiliate, deprive, or neglect their children, because those are awful things for a big strong person to do to a small helpless one. As Harris writes, “We may not hold their tomorrows in our hands but we surely hold their todays, and we have the power to make their todays very miserable.”
Second, a parent and a child have a human relationship. No one ever asks, “So you’re saying it doesn’t matter how I treat my husband or wife?” even though no one but a newlywed believes that one can change the personality of one’s spouse. Husbands and wives are nice to each other (or should be) not to pound the other’s personality into a desired shape but to build a deep and satisfying relationship. Imagine being told that one cannot revamp the personality of a husband or wife and replying, “The thought that all this love I’m pouring into him (or her) counts for nothing is too terrible to contemplate.” So it is with parents and children: one person’s behavior toward another has consequences for the quality of the relationship between them. Over the course of a lifetime the balance of power shifts, and children, complete with memories of how they were treated, have a growing say in their dealings with their parents. As Harris puts it, “If you don’t think the moral imperative is a good enough reason to be nice to your kid, try this one: Be nice to your kid when he’s young so that he will be nice to you when you’re old.” There are well-functioning adults who still shake with rage when recounting the cruelties their parents inflicted on them as children. There are others who moisten up in private moments when recalling a kindness or sacrifice made for their happiness, perhaps one that the mother or father has long forgotten. If for no other reason, parents should treat their children well to allow them to grow up with such memories.
I have found that when people hear these explanations they lower their eyes and say, somewhat embarrassedly, “Yes. I knew that.” The fact that people can forget these simple truths when intellectualizing about children shows how far modern doctrines have taken us. They make it easy to think of children as lumps of putty to be shaped instead of partners in a human relationship. Even the theory that children adapt to their peer group becomes less surprising when we think of them as human beings like ourselves. “Peer group” is a patronizing term we use in connection with children for what we call “friends and colleagues and associates” when we talk about ourselves. We groan when children obsess over wearing the right kind of cargo pants, but we would be just as mortified if a very large person forced us to wear pink overalls to a corporate board meeting or a polyester disco suit to an academic conference. “Being socialized by a peer group” is another way of saying “living successfully within a society,” which for a social organism means “living.” It is children, above all, who are alleged to be blank slates, and that can make us forget they are people.
Once again, postmodernism took this extreme to an even greater extreme in which the theory upstaged the subject matter and became a genre of performance art in itself. Postmodernist scholars, taking off from the critical theorists Theodor Adorno and Michel Foucault, distrust the demand for “linguistic transparency” because it hobbles the ability “to think the world more radically” and puts a text in danger of being turned into a mass-market commodity. This attitude has made them regular winners of the annual Bad Writing Contest, which “celebrates the most stylistically lamentable passages found in scholarly books and articles.” In 1998, first prize went to the lauded professor of rhetoric at Berkeley, Judith Butler, for the following sentence:
The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.
Dutton, whose journal Philosophy and Literature sponsors the contest, assures us that this is not a satire. The rules of the contest forbid it: “Deliberate parody cannot be allowed in a field where unintended self-parody is so widespread.”
A final blind spot to human nature is the failure of contemporary artists and theorists to deconstruct their own moral pretensions. Artists and critics have long believed that an appreciation of elite art is ennobling and have spoken of cultural philistines in tones ordinarily reserved for child molesters (as we see in the two meanings of the word barbarian). The affectation of social reform that surrounds modernism and postmodernism is part of this tradition.
Though moral sophistication requires an appreciation of history and cultural diversity, there is no reason to think that the elite arts are a particularly good way to instill it compared with middlebrow realistic fiction or traditional education. The plain fact is that there are no obvious moral consequences to how people entertain themselves in their leisure time. The conviction that artists and connoisseurs are morally advanced is a cognitive illusion, arising from the fact that our circuitry for morality is cross-wired with our circuitry for status (see Chapter 15). As the critic George Steiner has pointed out, “We know that a man can read Goethe or Rilke in the evening, that he can play Bach and Schubert, and go to his day’s work at Auschwitz in the morning.” Conversely there must be many unlettered people who give blood, risk their lives as volunteer firefighters, or adopt handicapped children, but whose opinion of modern art is “My four-year-old daughter could have done that.”
The moral and political track record of modernist artists is nothing to be proud of. Some were despicable in the conduct of their personal lives, and many embraced fascism or Stalinism. The modernist composer Karlheinz Stockhausen described the September 11, 2001, terrorist attacks as “the greatest work of art imaginable for the whole cosmos” and added, enviously, that “artists, too, sometimes go beyond the limits of what is feasible and conceivable, so that we wake up, so that we open ourselves to another world.” Nor is the theory of postmodernism especially progressive. A denial of objective reality is no friend to moral progress, because it prevents one from saying, for example, that slavery or the Holocaust really took place. And as Adam Gopnik has pointed out, the political messages of most postmodernist pieces are utterly banal, like “racism is bad.” But they are stated so obliquely that viewers are made to feel morally superior for being able to figure them out.
As for sneering at the bourgeoisie, it is a sophomoric grab at status with no claim to moral or political virtue. The fact is that the values of the middle class—personal responsibility, devotion to family and neighborhood, avoidance of macho violence, respect for liberal democracy—are good things, not bad things. Most of the world wants to join the bourgeoisie, and most artists are members in good standing who adopted a few bohemian affectations. Given the history of the twentieth century, the reluctance of the bourgeoisie to join mass utopian uprisings can hardly be held against them. And if they want to hang a painting of a red barn or a weeping clown above their couch, it’s none of our damn business.
The dominant theories of elite art and criticism in the twentieth century grew out of a militant denial of human nature. One legacy is ugly, baffling, and insulting art. The other is pretentious and unintelligible scholarship. And they’re surprised that people are staying away in droves?
KURT VONNEGUT’S STORY “Harrison Bergeron” is as transparent as Dickinson’s poem is cryptic. Here is how it begins:
The year was 2081, and everybody was finally equal. They weren’t only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than anybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th, and 213th Amendments to the Constitution, and to the unceasing vigilance of agents of the United States Handicapper General.
The Handicapper General enforces equality by neutralizing any inherited (hence undeserved) asset. Intelligent people have to wear radios in their ears tuned to a government transmitter that sends out a sharp noise every twenty seconds (such as the sound of a milk bottle struck with a ball-peen hammer) to prevent them from taking unfair advantage of their brains. Ballerinas are laden with bags of birdshot and their faces are hidden by masks so that no one can feel bad at seeing someone prettier or more graceful than they. Newscasters are selected for their speech impediments. The hero of the story is a multiply gifted teenager forced to wear headphones, thick wavy glasses, three hundred pounds of scrap iron, and black caps on half his teeth. The story is about his ill-fated rebellion.
Subtle it is not, but “Harrison Bergeron” is a witty reductio of an all too common fallacy. The ideal of political equality is not a guarantee that people are innately indistinguishable. It is a policy to treat people in certain spheres (justice, education, politics) on the basis of their individual merits rather than the statistics of any group they belong to. And it is a policy to recognize inalienable rights in all people by virtue of the fact that they are sentient human beings. Policies that insist that people be identical in their outcomes must impose costs on humans who, like all living things, vary in their biological endowment. Since talents by definition are rare, and can be fully realized only in rare circumstances, it is easier to achieve forced equality by lowering the top (and thereby depriving everyone of the fruits of people's talents) than by raising the bottom. In Vonnegut’s America of 2081 the desire for equality of outcome is played out as a farce, but in the twentieth century it frequently led to real crimes against humanity, and in our own society the entire issue is often a taboo.
Vonnegut is a beloved author who has never been called a racist, sexist, elitist, or Social Darwinist. Imagine the reaction if he had stated his message in declarative sentences rather than in a satirical story. Every generation has its designated jokers, from Shakespearean fools to Lenny Bruce, who give voice to truths that are unmentionable in polite society. Today part-time humorists like Vonnegut, and full-time ones like Richard Pryor, Dave Barry, and the writers of The Onion, are continuing that tradition.
Nineteen Eighty-four was unforgettable literature, not just a political screed, because of the way Orwell thought through the details of how his society would work. Every component of the nightmare interlocked with the others to form a rich and credible whole: the omnipresent government, the eternal war with shifting enemies, the totalitarian control of the media and private life, the Newspeak language, the constant threat of personal betrayal.
Less widely known is that the regime had a well-articulated philosophy. It is explained to Winston Smith in the harrowing sequence in which he is strapped to a table and alternately tortured and lectured by the government agent O’Brien. The philosophy of the regime is thoroughly postmodernist, O’Brien explains (without, of course, using the word). When Winston objects that the Party cannot realize its slogan, “Who controls the past controls the future; who controls the present controls the past,” O’Brien replies:
You believe that reality is something objective, external, existing in its own right. You also believe that the nature of reality is self-evident. When you delude yourself into thinking that you see something, you assume that everyone else sees the same thing as you. But I tell you, Winston, that reality is not external. Reality exists in the human mind, and nowhere else. Not in the individual mind, which can make mistakes, and in any case soon perishes; only in the mind of the Party, which is collective and immortal.
O’Brien admits that for certain purposes, such as navigating the ocean, it is useful to assume that the Earth goes around the sun and that there are stars in distant galaxies. But, he continues, the Party could also use alternative astronomies in which the sun goes around the Earth and the stars are bits of fire a few kilometers away. And though O’Brien does not explain it in this scene, Newspeak is the ultimate “prisonhouse of language,” a “language that thinks man and his ‘world.’”
O’Brien’s lecture should give pause to the advocates of postmodernism. It is ironic that a philosophy that prides itself on deconstructing the accoutrements of power should embrace a relativism that makes challenges to power impossible, because it denies that there are objective benchmarks against which the deceptions of the powerful can be evaluated. For the same reason, the passages should give pause to radical scientists who insist that other scientists’ aspirations to theories with objective reality (including theories about human nature) are really weapons to preserve the interests of the dominant class, gender, and race. Without a notion of objective truth, intellectual life degenerates into a struggle of who can best exercise the raw force to “control the past.”
A second precept of the Party’s philosophy is the doctrine of the superorganism:
Can you not understand, Winston, that the individual is only a cell? The weariness of the cell is the vigor of the organism. Do you die when you cut your fingernails?
The doctrine that a collectivity (a culture, a society, a class, a gender) is a living thing with its own interests and belief system lies behind Marxist political philosophies and the social science tradition begun by Durkheim. Orwell is showing its dark side: the dismissal of the individual—the only entity that literally feels pleasure and pain—as a mere component that exists to further the interests of the whole. The sedition of Winston and his lover Julia began in the pursuit of simple human pleasures—sugar and coffee, white writing paper, private conversation, affectionate lovemaking. O’Brien makes it clear that such individualism will not be tolerated: “There will be no loyalty, except loyalty to the Party. There will be no love, except the love of Big Brother.”
The Party also believes that emotional ties to family and friends are “habits” that get in the way of a smoothly functioning society:
Already we are breaking down the habits of thought that have survived from before the Revolution. We have cut the links between child and parent, and between man and man, and between man and woman. No one dares trust a wife or a child or a friend any longer. But in the future there will be no wives and no friends. Children will be taken from their mothers at birth, as one takes eggs from a hen. The sex instinct will be eradicated. . . . There will be no distinction between beauty and ugliness.
It is hard to read the passage and not think of the current enthusiasm for proposals in which enlightened mandarins would reengineer childrearing, the arts, and the relationship between the sexes in an effort to build a better society.
Last words
It is a scene that has the voice of the species in it: that infuriating, endearing, mysterious, predictable, and eternally fascinating thing we call human nature.
In "The Blank Slate", Steven Pinker, one of the world's leading experts on language and the mind, explores the idea of human nature and its moral, emotional, and political colorings. With characteristic wit, lucidity, and insight, Pinker argues that the dogma that the mind has no innate traits, a doctrine held by many intellectuals during the past century, denies our common humanity and our individual preferences, replaces objective analyses of social problems with feel-good slogans, and distorts our understanding of politics, violence, parenting, and the arts.
Each society has a theory of human nature which, though rarely stated outright, shapes its beliefs and policies. Theories of human behavior range from mainly genetic to mainly social-constructionist, but human behavior is shaped by both nature and nurture. Evolution, coded in genes, has limited resources and limited ability to anticipate complex situations, and therefore cannot do everything. Nurture, coded in social constructs, has physical limitations, and therefore cannot do everything either. Some contexts can be explained mainly by nurture or mainly by nature: social constructs like language are nurture, while genetic disorders are nature. In most contexts, however, nature and nurture work together, with complex interactions between genes and their environment.
The blank slate refers to an extreme nurture view of the human mind: that the mind has no inherent structure, and that society and the individual can inscribe values on it at will. On this view, differences in behavior come about through differences in experience, and by changing the experiences, the individual can be changed. This implies that problematic behavior can be ameliorated. But the perspective has limitations. A truly blank slate could not do anything, because it would lack the innate circuitry for learning or understanding. While culture shapes thought, thought could not arise without a biological entity capable of learning. Humans are biologically distinguishable from one another, and are constrained in their choices.
More on Evolution and the Blank Slate:
Genes cannot provide a complete blueprint. They have limited resources, which means they can only be so large, and they cannot anticipate the complexity of the environment or the behavior of other genes. To compensate, genes build learning mechanisms, such as feedback, which generate information with which to adjust behavior.
The blank slate perspective has many limitations. The mind creates a model of the world, but that model is based on the physical world. It takes a perceiver equipped with information to decipher patterns, combine them with previously learned patterns, and use them to form new thoughts that guide behavior.
Humans have the capacity to learn and to interpret information in a myriad of ways. With finite information processing, an infinite range of behavior can be generated. Culture is a cumulative pool of information that enables people to coordinate expectations about each other's behavior. Genes do not create cultures, but cultures cannot impress themselves on formless minds.
Recognition of biological differences has been used to justify unfavorable conclusions such as prejudice, Social Darwinism, and eugenics. Biological constraints prevent the complete reshaping of human behavior, and can be perceived as deterministic.
Caveats?
Claims about nature and nurture are sensitive topics that garner controversy. The author attempts to provide a more appropriate and neutral explanation for how they shape human society. The problem is that the way the book is written is not favorable to neutrality.