Natural-Born Pianists?

The editor of The Guardian, Alan Rusbridger, has written a book about his decision to practice the piano for 20 minutes a day. Eighteen months later, he played Chopin’s fearsomely difficult Ballade No. 1 in G Minor to an admiring audience of friends. Could anyone have done this? Or did it require special talent?

The nature-versus-nurture debate has been around a long time. It is unresolved because the scientific question has always been entangled with politics. Broadly speaking, those stressing inborn capacity have been political conservatives; those emphasizing nurture have been political radicals.

The nineteenth-century philosopher John Stuart Mill was of the “anyone can do it” school. He was convinced that his achievements were in no way due to superior heredity: anyone of “normal intelligence and health,” subjected to his father’s educational system – which included learning Greek at the age of three – could have become John Stuart Mill.

Mill was part of his century’s liberal attack on aristocratic privilege: achievement was the result of opportunity, not birth. The practice of the faculties (education) unleashes potential that would otherwise remain dormant.

Charles Darwin seemingly overturned this optimistic view of the potential beneficial effects of nurture. Species evolve, Darwin said, through “natural selection” – the selection, through competition, of randomly arising biological traits favorable to survival in a world of scarce resources. Herbert Spencer coined the phrase “survival of the fittest” to explain how societies evolve.

Social Darwinists interpreted natural selection to mean that any humanitarian effort to improve the condition of the poor would impede the progress of the human race by burdening it with an excess of drones. Society would be spending scarce resources on losers rather than winners. This fit the ideology of a brand of capitalism that was “red in tooth and claw.”

Indeed, social Darwinism provided a pseudo-scientific justification for the American belief in laissez-faire (with the successful businessman epitomizing the survival of the fittest); for eugenics (the deliberate attempt to breed superior individuals, on the model of horse-breeding, and prevent the “over-breeding” of the unfit); and for the eugenic-cum-racial theories of Nazism.

In reaction to social Darwinism’s murderous tendencies, Mill’s view became dominant after World War II in the form of social democracy. State action to improve diet, education, health, and housing would enable the poor to realize their potential. Competition as a social principle was downgraded in favor of cooperation.

Differences in innate ability were not denied (at least by the sensible). But it was rightly felt that there was a huge amount of work to be done to raise average levels of achievement before one needed to start worrying that one’s policies were promoting the survival of the unfit.

Then the mood began to shift again. Social democracy was attacked for penalizing the successful and rewarding the unsuccessful. In 1976, the biologist Richard Dawkins identified the unit of Darwinian selection as the “selfish gene.” The evolutionary story was now recast as a struggle among genes to secure their survival over time through random mutations that build individuals (phenotypes) better adapted to pass those genes on. In the course of evolution, the inferior phenotypes disappear.

Although this view of evolution would not have been possible before the discovery of DNA, it is no coincidence that it rose to prominence in the age of Ronald Reagan and Margaret Thatcher. To be sure, the selfish gene needs to be “altruistic” insofar as its survival depends on the survival of the kinship group. But it need not be that altruistic. And, although Dawkins later regretted calling his gene “selfish” (he says that “immortal” would have been better), his choice of adjective was certainly best adapted to maximizing sales of his book at that particular time.

Since then, we have turned away from advocacy of selfishness, but we have not recovered an independent moral language. The new orthodoxy, suitable for a world in which the unrestrained pursuit of greed has proved economically disastrous, is that the human species is genetically programmed to be moral, because only by acting morally (caring for the survival of others) can it ensure its long-term survival.

The “hard-wiring” metaphor dominates contemporary moral language. According to the United Kingdom’s chief rabbi, Jonathan Sacks, religious beliefs are useful for our survival, by inducing us to act in socially cooperative ways: “We have mirror neurons that lead us to feel pain when we see others suffering,” he recently wrote. Consideration for others is “located in the prefrontal cortex.” And religion “reconfigures our neural pathways.” In short: “Far from refuting religion, the Neo-Darwinists have helped us understand why it matters.” So we need not fear that it will decline.

Atheists might not agree. But it is an extraordinary statement for a religious leader to make, because it sets to one side the question of the truth or falsehood, or the ethical value, of religious beliefs. Or rather: all that wiring in the prefrontal cortex must be ethical, because it is good for survival. But, in that case, what ethical value is there in survival? Does the continued survival of the human race have any value in itself, independent of what we achieve or create?

We need to rescue morality from the claims of science. We need to assert what philosophers and religious teachers have at all times asserted: that there is something called the good life, apart from survival, and our understanding of it has to be taught, just as Mill’s father taught him the elements of Aristotle’s Posterior Analytics. Our nature may predispose us to learn; but what we learn depends on how we are nurtured.