Sunshine Recorder

Link: Genetics and Homosexuality

Sexual preference is one of the most strongly genetically determined behavioural traits we know of. A single genetic element is responsible for most of the variation in this trait across the population. Nearly all (>95%) of the people who inherit this element are sexually attracted to females, while about the same proportion of people who do not inherit it are attracted to males. This attraction is innate, refractory to change and affects behaviour in stereotyped ways, shaped and constrained by cultural context. It is the commonest and strongest genetic effect on behaviour that we know of in humans (in all mammals, actually). The genetic element is of course the Y chromosome.

The idea that sexual behaviour can be affected by – even largely determined by – our genes is therefore not only not outlandish, it is trivially obvious. Yet claims that differences in sexual orientation may have at least a partly genetic basis seem to provoke howls of scepticism and outrage from many, based mostly not on scientific arguments but on political ones.

The term sexual orientation refers to whether your sexual preference matches the typical preference based on whether or not you have a Y chromosome. It is important to realise that it therefore refers to four different states, not two: (i) has Y chromosome, is attracted to females; (ii) has Y chromosome, is attracted to males; (iii) does not have Y chromosome, is attracted to males; (iv) does not have Y chromosome, is attracted to females. We call two of these states heterosexual and two of them homosexual. (This ignores the many individuals whose sexual preferences are not so exclusive or rigid).

A recent twin study confirms that sexual orientation is moderately heritable – that is, that variation in genes contributes to variation in this trait. These effects are detected by looking at pairs of twins and determining how often, when one of them is homosexual, the other one is too. This rate is much higher (30-50%) in monozygotic, or identical, twins (who share all of their DNA sequence), than in dizygotic, or fraternal, twins (who share only half of their DNA), where the rate is 10-20%. If we assume that the environments of pairs of mono- or dizygotic twins are equally similar, then we can infer that the increased similarity in sexual orientation in pairs of monozygotic twins is due to their increased genetic similarity.
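
To make that inference concrete, here is a minimal sketch of the classical ACE decomposition (Falconer's formula) that underlies this kind of reasoning. It illustrates the logic only, not the method of the study itself: a real analysis would first convert concordance rates into proper (tetrachoric) correlations, whereas this sketch simply plugs in the midpoints of the ranges quoted above, and the function name is mine.

```python
# A minimal sketch of the classical ACE twin decomposition (Falconer's
# formula). Inputs are illustrative: the concordance ranges quoted above
# are treated loosely as twin correlations, which a real study would
# first derive properly (e.g. as tetrachoric correlations).

def ace_from_twin_correlations(r_mz: float, r_dz: float):
    """Estimate additive-genetic (A), shared-environment (C), and
    non-shared-environment (E) variance components from MZ and DZ
    twin correlations."""
    a = 2 * (r_mz - r_dz)  # MZ pairs share twice the DZ genetic overlap
    c = r_mz - a           # MZ similarity not explained by genes
    e = 1 - r_mz           # everything else: noise, measurement error, etc.
    return a, c, e

# Midpoints of the ranges quoted above: ~40% MZ, ~15% DZ.
a, c, e = ace_from_twin_correlations(r_mz=0.40, r_dz=0.15)
print(f"A (heritability) = {a:.2f}, C (shared env) = {c:.2f}, E (non-shared) = {e:.2f}")
# -> A = 0.50, C = -0.10, E = 0.60. A negative C is read as roughly zero,
#    anticipating conclusions 4 and 5 below; the large E term is taken up
#    in the discussion of "non-shared environmental variance" further on.
```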

These data are not yet published (or peer reviewed) but were presented by Dr. Michael Bailey at the recent American Association for the Advancement of Science meeting (Feb 12th 2014) and widely reported on. They confirm and extend findings from multiple previous twin studies across several different countries, which have all found fairly similar results (see here for more details). Overall, the conclusion that sexual orientation is partly heritable was already firmly established.

The reaction to news of this recent study reveals a deep disquiet with the idea that homosexuality may arise due to genetic differences. First, there are those who scoff at the idea that such a complex behaviour could be determined by what may be only a small number of genetic differences – perhaps only one. As I recently discussed, this view is based on a fundamental misunderstanding of what genetic findings really mean. Finding that a trait (a difference in some system) can be affected by a single genetic difference does not mean a single gene is responsible for crafting the entire system – it simply means that the system does not work normally in the absence of that gene. (Just as a car does not work well without its steering wheel).

Others have expressed a variety of personal and political reactions to these findings, ranging from welcoming further evidence of a biological basis for sexual orientation to worry that it will be used to label homosexuality a genetic disorder and even to enable selective abortion based on genetic prediction. The latter possibility may be made more technically feasible by the other aspect of the recently reported study, which was the claim that they have mapped genetic variants affecting sexual orientation to two specific regions of the genome. (This doesn’t mean they have identified specific genetic variants but may be a step towards doing so).

Let’s explore what the data in this case really show and really mean. A variety of conclusions can be drawn from this and previous studies:

1. Differences in sexual orientation are partly attributable to genetic differences.

2. Sexual orientation in males and females is controlled by distinct sets of genes. (Dizygotic twins of opposite sex show no increased similarity in sexual orientation compared to unrelated people – if a female twin is gay, there is no increased likelihood that her twin brother will be too, and vice versa).

3. Male sexual orientation is rather more strongly heritable than female.

4. The shared family environment has no effect on male sexual orientation but may have a small effect on female sexual orientation.

5. There must also be non-genetic factors influencing this trait, as monozygotic twins are still often discordant (more often than concordant, in fact).

The fact that sexual orientation in males and females is influenced by distinct sets of genetic variants is interesting and leads to a fundamental insight: heterosexuality is not a single default state. It emerges from distinct biological processes that actively match the brain circuitry of (i) males or (ii) females to their chromosomal and gonadal sex so that most individuals who carry a Y chromosome are attracted to females and most people who do not are attracted to males.

What is being regulated, biologically, is not sexual orientation (whether you are attracted to people of the same or opposite sex), but sexual preference (whether you are attracted to males or females). Given how complex the processes of sexual differentiation of the brain are (involving the actions of many different genes), it is not surprising that they can sometimes be impaired due to variation in those genes, leading to a failure to match sexual preference to chromosomal sex. Indeed, we know of many specific mutations that can lead to exactly such effects in other mammals – it would be surprising if similar events did not occur in humans.

These studies are consistent with the idea that sexual preference is a biological trait – an innate characteristic of an individual, not strongly affected by experience or family upbringing. Not a choice, in other words. We didn’t need genetics to tell us that – personal experience does just fine for most people. But this kind of evidence becomes important when some places in the world (like Uganda, recently) appeal to science to claim (wrongly) that there is evidence that homosexuality is an active choice and use that claim directly to justify criminalisation of homosexual behaviour.

Importantly, the fact that sexual orientation is only partly heritable does not at all undermine the conclusion that it is a completely biological trait. Just because monozygotic twins are not always concordant for sexual orientation does not mean the trait is not completely innate. Typically, geneticists use the term “non-shared environmental variance” to refer to factors that influence a trait outside of shared genes or shared family environment. The non-shared environment term encompasses those effects that explain why monozygotic twins are actually less than identical for many traits (reflecting additional factors that contribute to variance in the trait across the population generally).

The terminology is rather unfortunate because “environmental” does not have its normal colloquial meaning in this context. It does not necessarily mean that some experience that an individual has influences their phenotype. Firstly, it encompasses measurement error (just the difficulty in accurately measuring the trait, which is particularly important for behavioural traits). Secondly, it includes environmental effects prior to birth (in utero), which may be especially important for brain development. And finally, it also includes chance or noise – in this case, intrinsic developmental variation that can have dramatic effects on the end-state or outcome of brain development. This process is incredibly complex and noisy, in engineering terms, and the outcome is, like baking a cake, never the same twice. By the time they are born (when the buns come out of the oven), the brains of monozygotic twins are already far from identical.

Genetic differences may thus change the probability of an outcome over many instances, without determining the specific outcome in any individual.

A useful analogy is to handedness. Handedness is only moderately heritable but is effectively completely innate or intrinsic to the individual. This is true even though the preference for using one hand over the other emerges only over time. The harsh experiences of many in the past who were forced (sometimes with deeply cruel and painful methods) to write with their right hands because left-handedness was seen as aberrant – even sinful – attest to the fact that the innate preference cannot readily be overridden. All the evidence suggests this is also the case for sexual preference.

What about concerns that these findings could be used as justification for labelling homosexuality a disorder? These are probably somewhat justified – no doubt some people will use it like that. And that places a responsibility on geneticists to explain that just because something is caused by genetic variants – i.e., mutations – does not mean it necessarily should be considered a disorder. We don’t consider red hair a disorder, or blue eyes, or pale skin, or – any longer – left-handedness. All of those are caused by mutations.

The word mutation is rather loaded, but in truth we are all mutants. Each of us carries hundreds of thousands of genetic variants, and hundreds of those are rare, serious mutations that affect the function of some protein. Many of those cause some kind of difference to our phenotype (the outward expression of our genotype). But a difference is only considered a disorder if it negatively impacts on someone’s life. And homosexuality is only a disorder if society makes it one.

Link: The Mental Life of Plants and Worms, Among Others

Charles Darwin’s last book, published in 1881, was a study of the humble earthworm. His main theme—expressed in the title, The Formation of Vegetable Mould through the Action of Worms—was the immense power of worms, in vast numbers and over millions of years, to till the soil and change the face of the earth. But his opening chapters are devoted more simply to the “habits” of worms.

Worms can distinguish between light and dark, and they generally stay underground, safe from predators, during daylight hours. They have no ears, but if they are deaf to aerial vibration, they are exceedingly sensitive to vibrations conducted through the earth, as might be generated by the footsteps of approaching animals. All of these sensations, Darwin noted, are transmitted to collections of nerve cells (he called them “the cerebral ganglia”) in the worm’s head.

“When a worm is suddenly illuminated,” Darwin wrote, it “dashes like a rabbit into its burrow.” He noted that he was “at first led to look at the action as a reflex one,” but then observed that this behavior could be modified—for instance, when a worm was otherwise engaged, it showed no withdrawal with sudden exposure to light.

For Darwin, the ability to modulate responses indicated “the presence of a mind of some kind.” He also wrote of the “mental qualities” of worms in relation to their plugging up their burrows, noting that “if worms are able to judge…having drawn an object close to the mouths of their burrows, how best to drag it in, they must acquire some notion of its general shape.” This moved him to argue that worms “deserve to be called intelligent, for they then act in nearly the same manner as a man under similar circumstances.”

As a boy, I played with the earthworms in our garden (and later used them in research projects), but my true love was for the seashore, and especially tidal pools, for we nearly always took our summer holidays at the seaside. This early, lyrical feeling for the beauty of simple sea creatures became more scientific under the influence of a biology teacher at school and our annual visits with him to the Marine Station at Millport in southwest Scotland, where we could investigate the immense range of invertebrate animals on the seashores of Cumbrae. I was so excited by these Millport visits that I thought I would like to become a marine biologist myself.

If Darwin’s book on earthworms was a favorite of mine, so too was George John Romanes’s 1885 book Jelly-Fish, Star-Fish, and Sea-Urchins: Being a Research on Primitive Nervous Systems, with its simple, fascinating experiments and beautiful illustrations. For Romanes, Darwin’s young friend and student, the seashore and its fauna were to be passionate and lifelong interests, and his aim above all was to investigate what he regarded as the behavioral manifestations of “mind” in these creatures.

I was charmed by Romanes’s personal style. (His studies of invertebrate minds and nervous systems were most happily pursued, he wrote, in “a laboratory set up upon the sea-beach…a neat little wooden workshop thrown open to the sea-breezes.”) But it was clear that correlating the neural and the behavioral was at the heart of Romanes’s enterprise. He spoke of his work as “comparative psychology,” and saw it as analogous to comparative anatomy.

Louis Agassiz had shown, as early as 1850, that the jellyfish Bougainvillea had a substantial nervous system, and by 1883 Romanes demonstrated its individual nerve cells (there are about a thousand). By simple experiments—cutting certain nerves, making incisions in the bell, or looking at isolated slices of tissue—he showed that jellyfish employed both autonomous, local mechanisms (dependent on nerve “nets”) and centrally coordinated activities through the circular “brain” that ran along the margins of the bell.

By 1883, Romanes was able to include drawings of individual nerve cells and clusters of nerve cells, or ganglia, in his book Mental Evolution in Animals. “Throughout the animal kingdom,” Romanes wrote,

nerve tissue is invariably present in all species whose zoological position is not below that of the Hydrozoa. The lowest animals in which it has hitherto been detected are the Medusae, or jelly-fishes, and from them upwards its occurrence is, as I have said, invariable. Wherever it does occur its fundamental structure is very much the same, so that whether we meet with nerve-tissue in a jelly-fish, an oyster, an insect, a bird, or a man, we have no difficulty in recognizing its structural units as everywhere more or less similar.

At the same time that Romanes was vivisecting jellyfish and starfish in his seaside laboratory, the young Sigmund Freud, already a passionate Darwinian, was working in the lab of Ernst Brücke, a physiologist in Vienna. His special concern was to compare the nerve cells of vertebrates and invertebrates, in particular those of a very primitive vertebrate (Petromyzon, a lamprey) with those of an invertebrate (a crayfish). While it was widely held at the time that the nerve elements in invertebrate nervous systems were radically different from those of vertebrate ones, Freud was able to show and illustrate, in meticulous, beautiful drawings, that the nerve cells in crayfish were basically similar to those of lampreys—or human beings.

And he grasped, as no one had before, that the nerve cell body and its processes—dendrites and axons—constituted the basic building blocks and the signaling units of the nervous system. Eric Kandel, in his book In Search of Memory: The Emergence of a New Science of Mind (2006), speculates that if Freud had stayed in basic research instead of going into medicine, perhaps he would be known today as “a co-founder of the neuron doctrine, instead of as the father of psychoanalysis.”

Although neurons may differ in shape and size, they are essentially the same from the most primitive animal life to the most advanced. It is their number and organization that differ: we have a hundred billion nerve cells, while a jellyfish has a thousand. But their status as cells capable of rapid and repetitive firing is essentially the same.

The crucial role of synapses—the junctions between neurons where nerve impulses can be modulated, giving organisms flexibility and a whole range of behaviors—was clarified only at the close of the nineteenth century by the great Spanish anatomist Santiago Ramón y Cajal, who looked at the nervous systems of many vertebrates and invertebrates, and by C.S. Sherrington in England (it was Sherrington who coined the word “synapse” and showed that synapses could be excitatory or inhibitory in function).

In the 1880s, however, despite Agassiz’s and Romanes’s work, there was still a general feeling that jellyfish were little more than passively floating masses of tentacles ready to sting and ingest whatever came their way, little more than a sort of floating marine sundew.

But jellyfish are hardly passive. They pulsate rhythmically, contracting every part of their bell simultaneously, and this requires a central pacemaker system that sets off each pulse. Jellyfish can change direction and depth, and many have a “fishing” behavior that involves turning upside down for a minute, spreading their tentacles like a net, and then righting themselves, which they do by virtue of eight gravity-sensing balance organs. (If these are removed, the jellyfish is disoriented and can no longer control its position in the water.) If bitten by a fish, or otherwise threatened, jellyfish have an escape strategy—a series of rapid, powerful pulsations of the bell—that shoots them out of harm’s way; special, oversized (and therefore rapidly responding) neurons are activated at such times.

Of special interest and infamous reputation among divers is the box jellyfish (Cubomedusae)—one of the most primitive animals to have fully developed image-forming eyes, not so different from our own. The biologist Tim Flannery, in an article in these pages, writes of box jellyfish:

They are active hunters of medium-sized fish and crustaceans, and can move at up to twenty-one feet per minute. They are also the only jellyfish with eyes that are quite sophisticated, containing retinas, corneas, and lenses. And they have brains, which are capable of learning, memory, and guiding complex behaviors.

We and all higher animals are bilaterally symmetrical, have a front end (a head) containing a brain, and a preferred direction of movement (forward). The jellyfish nervous system, like the animal itself, is radially symmetrical and may seem less sophisticated than a mammalian brain, but it has every right to be considered a brain, generating, as it does, complex adaptive behaviors and coordinating all the animal’s sensory and motor mechanisms. Whether we can speak of a “mind” here (as Darwin does in regard to earthworms) depends on how one defines “mind.”

We all distinguish between plants and animals. We understand that plants, in general, are immobile, rooted in the ground; they spread their green leaves to the heavens and feed on sunlight and soil. We understand that animals, in contrast, are mobile, moving from place to place, foraging or hunting for food; they have easily recognized behaviors of various sorts. Plants and animals have evolved along two profoundly different paths (fungi have yet another), and they are wholly different in their forms and modes of life.

And yet, Darwin insisted, they were closer than one might think. He wrote a series of botanical books, culminating in The Power of Movement in Plants (1880), just before his book on earthworms. He thought the powers of movement, and especially of detecting and catching prey, in the insectivorous plants so remarkable that, in a letter to the botanist Asa Gray, he referred to Drosera, the sundew, only half-jokingly as not only a wonderful plant but “a most sagacious animal.”

Darwin was reinforced in this notion by the demonstration that insect-eating plants made use of electrical currents to move, just as animals did—that there was “plant electricity” as well as “animal electricity.” But “plant electricity” moves slowly, roughly an inch a second, as one can see by watching the leaflets of the sensitive plant (Mimosa pudica) closing one by one along a leaf that is touched. “Animal electricity,” conducted by nerves, moves roughly a thousand times faster.

Signaling between cells depends on electrochemical changes, the flow of electrically charged atoms (ions), in and out of cells via special, highly selective molecular pores or “channels.” These ion flows cause electrical currents, impulses—action potentials—that are transmitted (directly or indirectly) from one cell to another, in both plants and animals.

Plants depend largely on calcium ion channels, which suit their relatively slow lives perfectly. As Daniel Chamovitz argues in his book What a Plant Knows (2012), plants are capable of registering what we would call sights, sounds, tactile signals, and much more. Plants know what to do, and they “remember.” But without neurons, plants do not learn in the same way that animals do; instead they rely on a vast arsenal of different chemicals and what Darwin termed “devices.” The blueprints for these must all be encoded in the plant’s genome, and indeed plant genomes are often larger than our own.

The calcium ion channels that plants rely on do not support rapid or repetitive signaling between cells; once a plant action potential is generated, it cannot be repeated at a fast enough rate to allow, for example, the speed with which a worm “dashes…into its burrow.” Speed requires ions and ion channels that can open and close in a matter of milliseconds, allowing hundreds of action potentials to be generated in a second. The magic ions, here, are sodium and potassium ions, which enabled the development of rapidly reacting muscle cells, nerve cells, and neuromodulation at synapses. These made possible organisms that could learn, profit by experience, judge, act, and finally think.
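
The speed gap is easy to sanity-check. A minimal back-of-the-envelope sketch follows, using the essay's own figures (about an inch per second for plant electricity, a thousandfold faster for nerve conduction) plus an assumed millisecond-scale refractory period; the variable names are mine.

```python
# Back-of-the-envelope check of the speed gap described above. The inch-
# per-second and thousandfold figures are the essay's own; the 5 ms
# refractory period is an assumed, typical order of magnitude.

INCH_IN_METRES = 0.0254

plant_speed = 1 * INCH_IN_METRES   # plant action potential: ~0.025 m/s
nerve_speed = plant_speed * 1000   # nerve conduction: ~25 m/s

refractory_s = 0.005               # ~5 ms between successive action potentials
max_rate_hz = 1 / refractory_s     # ~200 spikes per second: "hundreds ... in a second"

print(f"plant: ~{plant_speed:.3f} m/s, nerve: ~{nerve_speed:.0f} m/s, "
      f"max firing rate: ~{max_rate_hz:.0f} Hz")
```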

This new form of life—animal life—emerging perhaps 600 million years ago conferred great advantages, and transformed populations rapidly. In the so-called Cambrian explosion (datable with remarkable precision to 542 million years ago), a dozen or more new phyla, each with very different body plans, arose within the space of a million years or less—a geological eye-blink. The once peaceful pre-Cambrian seas were transformed into a jungle of hunters and hunted, newly mobile. And while some animals (like sponges) lost their nerve cells and regressed to a vegetative life, others, especially predators, evolved increasingly sophisticated sense organs, memories, and minds.

Link: Hell on Earth

At the University of Oxford, a team of scholars led by the philosopher Rebecca Roache has begun thinking about the ways futuristic technologies might transform punishment. In January, I spoke with Roache and her colleagues Anders Sandberg and Hannah Maslen about emotional enhancement, ‘supercrimes’, and the ethics of eternal damnation. What follows is a condensed and edited transcript of our conversation.

Suppose we develop the ability to radically expand the human lifespan, so that people are regularly living for more than 500 years. Would that allow judges to fit punishments to crimes more precisely?

Roache: When I began researching this topic, I was thinking a lot about Daniel Pelka, a four-year-old boy who was starved and beaten to death [in 2012] by his mother and stepfather here in the UK. I had wondered whether the best way to achieve justice in cases like that was to prolong death as long as possible. Some crimes are so bad they require a really long period of punishment, and a lot of people seem to get out of that punishment by dying. And so I thought, why not make prison sentences for particularly odious criminals worse by extending their lives?

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

The life-extension scenario may sound futuristic, but if you look closely you can already see it in action, as people begin to live longer lives than before. If you look at the enormous prison population in the US, you find an astronomical number of elderly prisoners, including quite a few with pacemakers. When I went digging around in medical journals, I found all these interesting papers about the treatment of pacemaker patients in prison.

Suppose prisons become more humane in the future, so that they resemble Norwegian prisons instead of those you see in America or North Korea. Is it possible that correctional facilities could become truly correctional in the age of long lifespans, by taking a more sustained approach to rehabilitation?

Roache: If people could live for centuries or millennia, you would obviously have more time to reform them, but you would also run into a tricky philosophical issue having to do with personal identity. A lot of philosophers who have written about personal identity wonder whether identity can be sustained over an extremely long lifespan. Even if your body makes it to 1,000 years, the thinking goes, that body is actually inhabited by a succession of persons over time rather than a single continuous person. And so, if you put someone in prison for a crime they committed at 40, they might, strictly speaking, be an entirely different person at 940. And that means you are effectively punishing one person for a crime committed by someone else. Most of us would think that unjust.

Let’s say that life expansion therapies become a normal part of the human condition, so that it’s not just elites who have access to them, it’s everyone. At what point would it become unethical to withhold these therapies from prisoners?

Roache: In that situation it would probably be inappropriate to view them as an enhancement, or something extra. If these therapies were truly universal, it’s more likely that people would come to think of them as life-saving technologies. And if you withheld them from prisoners in that scenario, you would effectively be denying them medical treatment, and today we consider that inhumane. My personal suspicion is that once life extension becomes more or less universal, people will begin to see it as a positive right, like health care in most industrialised nations today. Indeed, it’s interesting to note that in the US, prisoners sometimes receive better health care than uninsured people. You have to wonder about the incentives a system like that creates.

Where is that threshold of universality, where access to something becomes a positive right? Do we have an empirical example of it?

Roache: One interesting case might be internet access. In Finland, for instance, access to communication technology is considered a human right and handwritten letters are not sufficient to satisfy it. Finnish prisons are required to give inmates access to computers, although their internet activity is closely monitored. This is an interesting development because, for years, limiting access to computers was a common condition of probation in hacking cases – and that meant all kinds of computers, including ATMs [cash points]. In the 1980s, that lifestyle might have been possible, and you could also see pulling it off in the ’90s, though it would have been very difficult. But today computers are ubiquitous, and a normal life seems impossible without them; you can’t even access the subway without interacting with a computer of some sort.

In the late 1990s, an American hacker named Kevin Mitnick was denied all access to communication technology after law enforcement officials [in California] claimed he could ‘start a nuclear war by whistling into a pay phone’. But in the end, he got the ruling overturned by arguing that it prevented him from living a normal life.

What about life expansion that meddles with a person’s perception of time? Take someone convicted of a heinous crime, like the torture and murder of a child. Would it be unethical to tinker with the brain so that this person experiences a 1,000-year jail sentence in his or her mind?

Roache: There are a number of psychoactive drugs that distort people’s sense of time, so you could imagine developing a pill or a liquid that made someone feel like they were serving a 1,000-year sentence. Of course, there is a widely held view that any amount of tinkering with a person’s brain is unacceptably invasive. But you might not need to interfere with the brain directly. There is a long history of using the prison environment itself to affect prisoners’ subjective experience. During the Spanish Civil War [in the 1930s] there was actually a prison where modern art was used to make the environment aesthetically unpleasant. Also, prison cells themselves have been designed to make them more claustrophobic, and some prison beds are specifically made to be uncomfortable.

I haven’t found any specific cases of time dilation being used in prisons, but time distortion is a technique that is sometimes used in interrogation, where people are exposed to constant light, or unusual light fluctuations, so that they can’t tell what time of day it is. But in that case it’s not being used as a punishment, per se, it’s being used to break people’s sense of reality so that they become more dependent on the interrogator, and more pliable as a result. In that sense, a time-slowing pill would be a pretty radical innovation in the history of penal technology.

I want to ask you a question that has some crossover with theological debates about hell. Suppose we eventually learn to put off death indefinitely, and that we extend this treatment to prisoners. Is there any crime that would justify eternal imprisonment? Take Hitler as a test case. Say the Soviets had gotten to the bunker before he killed himself, and say capital punishment was out of the question – would we have put him behind bars forever?

Roache: It’s tough to say. If you start out with the premise that a punishment should be proportional to the crime, it’s difficult to think of a crime that could justify eternal imprisonment. You could imagine giving Hitler one term of life imprisonment for every person killed in the Second World War. That would make for quite a long sentence, but it would still be finite. The endangerment of mankind as a whole might qualify as a sufficiently serious crime to warrant it. As you know, a great deal of the research we do here at the Oxford Martin School concerns existential risk. Suppose there was some physics experiment that stood a decent chance of generating a black hole that could destroy the planet and all future generations. If someone deliberately set up an experiment like that, I could see that being the kind of supercrime that would justify an eternal sentence.

In your forthcoming paper on this subject, you mention the possibility that convicts with a neurologically stunted capacity for empathy might one day be ‘emotionally enhanced’, and that the remorse felt by these newly empathetic criminals could be the toughest form of punishment around. Do you think a full moral reckoning with an awful crime is the most potent form of suffering an individual can endure?

Roache: I’m not sure. Obviously, it’s an empirical question as to which feels worse, genuine remorse or time in prison. There is certainly reason to take the claim seriously. For instance, in literature and folk wisdom, you often hear people saying things like, ‘The worst thing is I’ll have to live with myself.’ My own intuition is that for very serious crimes, genuine remorse could be subjectively worse than a prison sentence. But I doubt that’s the case for less serious crimes, where remorse isn’t even necessarily appropriate – like if you are wailing and beating yourself up for stealing a candy bar or something like that.

I remember watching a movie in school, about a teen that killed another teen in a drunk-driving accident. As one of the conditions of his probation, the judge in the case required him to mail a daily cheque for 25 cents to the parents of the teen he’d killed for a period of 10 years. Two years in, the teen was begging the judge to throw him in jail, just to avoid the daily reminder.

Roache: That’s an interesting case where prison is actually an escape from remorse, which is strange because one of the justifications for prison is that it’s supposed to focus your mind on what you have done wrong. Presumably, every day you wake up in prison, you ask yourself why you are there, right?

What if these emotional enhancements proved too effective? Suppose they are so powerful, they turn psychopaths into Zen masters who live in a constant state of deep, reflective contentment. Should that trouble us? Is mental suffering a necessary component of imprisonment?

Roache: There is a long-standing philosophical question as to how bad the prison experience should be. Retributivists, those who think the point of prisons is to punish, tend to think that it should be quite unpleasant, whereas consequentialists tend to be more concerned with a prison’s reformative effects, and its larger social costs. There are a number of prisons that offer prisoners constructive activities to participate in, including sports leagues, art classes, and even yoga. That practice seems to reflect the view that confinement, or the deprivation of liberty, is itself enough of a punishment. Of course, even for consequentialists, there has to be some level of suffering involved in punishment, because consequentialists are very concerned about deterrence.

I wanted to close by moving beyond imprisonment, to ask you about the future of punishment more broadly. Are there any alternative punishments that technology might enable, and that you can see on the horizon now? What surprising things might we see down the line?

Roache: We have been thinking a lot about surveillance and punishment lately. Already, we see governments using ankle bracelets to track people in various ways, and many of them are fairly elaborate. For instance, some of these devices allow you to commute to work, but they also give you a curfew and keep a close eye on your location. You can imagine this being refined further, so that your ankle bracelet bans you from entering establishments that sell alcohol. This could be used to punish people who happen to like going to pubs, or it could be used to reform severe alcoholics. Either way, technologies of this sort seem to be edging up to a level of behaviour control that makes some people uneasy, due to questions about personal autonomy.

It’s one thing to lose your personal liberty as a result of being confined in a prison, but you are still allowed to believe whatever you want while you are in there. In the UK, for instance, you cannot withhold religious manuscripts from a prisoner unless you have a very good reason. These concerns about autonomy become particularly potent when you start talking about brain implants that could potentially control behaviour directly. The classic example is Robert G Heath [a psychiatrist at Tulane University in New Orleans], who did this famously creepy experiment [in the 1950s] using electrodes in the brain in an attempt to modify behaviour in people who were prone to violent psychosis. The electrodes were ostensibly being used to treat the patients, but he was also, rather gleefully, trying to move them in a socially approved direction. You can really see that in his infamous [1972] paper on ‘curing’ homosexuals. I think most Western societies would say ‘no thanks’ to that kind of punishment.

To me, these questions about technology are interesting because they force us to rethink the truisms we currently hold about punishment. When we ask ourselves whether it’s inhumane to inflict a certain technology on someone, we have to make sure it’s not just the unfamiliarity that spooks us. And more importantly, we have to ask ourselves whether punishments like imprisonment are only considered humane because they are familiar, because we’ve all grown up in a world where imprisonment is what happens to people who commit crimes. Is it really OK to lock someone up for the best part of the only life they will ever have, or might it be more humane to tinker with their brains and set them free? When we ask that question, the goal isn’t simply to imagine a bunch of futuristic punishments – the goal is to look at today’s punishments through the lens of the future.

Link: David Graeber: What’s the Point If We Can’t Have Fun?

My friend June Thunderstorm and I once spent half an hour sitting in a meadow by a mountain lake, watching an inchworm dangle from the top of a stalk of grass, twist about in every possible direction, and then leap to the next stalk and do the same thing. And so it proceeded, in a vast circle, with what must have been a vast expenditure of energy, for what seemed like absolutely no reason at all.

“All animals play,” June had once said to me. “Even ants.” She’d spent many years working as a professional gardener and had plenty of incidents like this to observe and ponder. “Look,” she said, with an air of modest triumph. “See what I mean?”

Most of us, hearing this story, would insist on proof. How do we know the worm was playing? Perhaps the invisible circles it traced in the air were really just a search for some unknown sort of prey. Or a mating ritual. Can we prove they weren’t? Even if the worm was playing, how do we know this form of play did not serve some ultimately practical purpose: exercise, or self-training for some possible future inchworm emergency?

This would be the reaction of most professional ethologists as well. Generally speaking, an analysis of animal behavior is not considered scientific unless the animal is assumed, at least tacitly, to be operating according to the same means/end calculations that one would apply to economic transactions. Under this assumption, an expenditure of energy must be directed toward some goal, whether it be obtaining food, securing territory, achieving dominance, or maximizing reproductive success—unless one can absolutely prove that it isn’t, and absolute proof in such matters is, as one might imagine, very hard to come by.

I must emphasize here that it doesn’t really matter what sort of theory of animal motivation a scientist might entertain: what she believes an animal to be thinking, whether she thinks an animal can be said to be “thinking” anything at all. I’m not saying that ethologists actually believe that animals are simply rational calculating machines. I’m simply saying that ethologists have boxed themselves into a world where to be scientific means to offer an explanation of behavior in rational terms—which in turn means describing an animal as if it were a calculating economic actor trying to maximize some sort of self-interest—whatever their theory of animal psychology, or motivation, might be.

That’s why the existence of animal play is considered something of an intellectual scandal. It’s understudied, and those who do study it are seen as mildly eccentric. As with many vaguely threatening, speculative notions, difficult-to-satisfy criteria are introduced for proving animal play exists, and even when it is acknowledged, the research more often than not cannibalizes its own insights by trying to demonstrate that play must have some long-term survival or reproductive function.

Despite all this, those who do look into the matter are invariably forced to the conclusion that play does exist across the animal universe. And exists not just among such notoriously frivolous creatures as monkeys, dolphins, or puppies, but among such unlikely species as frogs, minnows, salamanders, fiddler crabs, and yes, even ants—which not only engage in frivolous activities as individuals, but also have been observed since the nineteenth century to arrange mock-wars, apparently just for the fun of it.

Why do animals play? Well, why shouldn’t they? The real question is: Why does the existence of action carried out for the sheer pleasure of acting, the exertion of powers for the sheer pleasure of exerting them, strike us as mysterious? What does it tell us about ourselves that we instinctively assume that it is?

Survival of the Misfits

The tendency in popular thought to view the biological world in economic terms was present at the nineteenth-century beginnings of Darwinian science. Charles Darwin, after all, borrowed the term “survival of the fittest” from the sociologist Herbert Spencer, that darling of robber barons. Spencer, in turn, was struck by how much the forces driving natural selection in On the Origin of Species jibed with his own laissez-faire economic theories. Competition over resources, rational calculation of advantage, and the gradual extinction of the weak were taken to be the prime directives of the universe.

The stakes of this new view of nature as the theater for a brutal struggle for existence were high, and objections registered very early on. An alternative school of Darwinism emerged in Russia emphasizing cooperation, not competition, as the driver of evolutionary change. In 1902 this approach found a voice in a popular book, Mutual Aid: A Factor of Evolution, by naturalist and revolutionary anarchist pamphleteer Peter Kropotkin. In an explicit riposte to social Darwinists, Kropotkin argued that the entire theoretical basis for Social Darwinism was wrong: those species that cooperate most effectively tend to be the most competitive in the long run. Kropotkin, born a prince (he renounced his title as a young man), spent many years in Siberia as a naturalist and explorer before being imprisoned for revolutionary agitation, escaping, and fleeing to London. Mutual Aid grew from a series of essays written in response to Thomas Henry Huxley, a well-known Social Darwinist, and summarized the Russian understanding of the day, which was that while competition was undoubtedly one factor driving both natural and social evolution, the role of cooperation was ultimately decisive.

The Russian challenge was taken quite seriously in twentieth-century biology—particularly among the emerging subdiscipline of evolutionary psychology—even if it was rarely mentioned by name. It came, instead, to be subsumed under the broader “problem of altruism”—another phrase borrowed from the economists, and one that spills over into arguments among “rational choice” theorists in the social sciences. This was the question that already troubled Darwin: Why should animals ever sacrifice their individual advantage for others? Because no one can deny that they sometimes do. Why should a herd animal draw potentially lethal attention to himself by alerting his fellows a predator is coming? Why should worker bees kill themselves to protect their hive? If to advance a scientific explanation of any behavior means to attribute rational, maximizing motives, then what, precisely, was a kamikaze bee trying to maximize?

We all know the eventual answer, which the discovery of genes made possible. Animals were simply trying to maximize the propagation of their own genetic codes. Curiously, this view—which eventually came to be referred to as neo-Darwinian—was developed largely by figures who considered themselves radicals of one sort or another. Jack Haldane, a Marxist biologist, was already trying to annoy moralists in the 1930s by quipping that, like any biological entity, he’d be happy to sacrifice his life for “two brothers or eight cousins.” The epitome of this line of thought came with militant atheist Richard Dawkins’s book The Selfish Gene—a work that insisted all biological entities were best conceived of as “lumbering robots,” programmed by genetic codes that, for some reason no one could quite explain, acted like “successful Chicago gangsters,” ruthlessly expanding their territory in an endless desire to propagate themselves. Such descriptions were typically qualified by remarks like, “Of course, this is just a metaphor, genes don’t really want or do anything.” But in reality, the neo-Darwinists were practically driven to their conclusions by their initial assumption: that science demands a rational explanation, that this means attributing rational motives to all behavior, and that a truly rational motivation can only be one that, if observed in humans, would normally be described as selfishness or greed. As a result, the neo-Darwinists went even further than the Victorian variety. If old-school Social Darwinists like Herbert Spencer viewed nature as a marketplace, albeit an unusually cutthroat one, the new version was outright capitalist. The neo-Darwinists assumed not just a struggle for survival, but a universe of rational calculation driven by an apparently irrational imperative to unlimited growth.
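
The arithmetic behind Haldane's quip is Hamilton's rule, the textbook kin-selection inequality on which this gene's-eye accounting rests (the rule is standard theory; the essay does not spell it out). An altruistic act is favoured when relatedness times benefit exceeds cost; a full brother shares half of one's genes by recent descent and a first cousin one eighth, so two brothers or eight cousins just balance one self. A sketch, with a helper name of my own invention:

```python
# Hamilton's rule (r * B > C), the textbook kin-selection inequality
# behind Haldane's quip; the helper name is mine, for illustration only.

relatedness = {"brother": 1 / 2, "cousin": 1 / 8}  # shared genes by recent descent

def sacrifice_breaks_even(kin: str, n_saved: int, cost: float = 1.0) -> bool:
    """Does saving n_saved relatives of one type repay a cost of one life?"""
    return relatedness[kin] * n_saved >= cost

print(sacrifice_breaks_even("brother", 2))  # True:  2 * 1/2 = 1
print(sacrifice_breaks_even("cousin", 8))   # True:  8 * 1/8 = 1
print(sacrifice_breaks_even("cousin", 7))   # False: 7/8 < 1
```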

This, anyway, is how the Russian challenge was understood. Kropotkin’s actual argument is far more interesting. Much of it, for instance, is concerned with how animal cooperation often has nothing to do with survival or reproduction, but is a form of pleasure in itself. “To take flight in flocks merely for pleasure is quite common among all sorts of birds,” he writes. Kropotkin multiplies examples of social play: pairs of vultures wheeling about for their own entertainment, hares so keen to box with other species that they occasionally (and unwisely) approach foxes, flocks of birds performing military-style maneuvers, bands of squirrels coming together for wrestling and similar games:

We know at the present time that all animals, beginning with the ants, going on to the birds, and ending with the highest mammals, are fond of plays, wrestling, running after each other, trying to capture each other, teasing each other, and so on. And while many plays are, so to speak, a school for the proper behavior of the young in mature life, there are others which, apart from their utilitarian purposes, are, together with dancing and singing, mere manifestations of an excess of forces—“the joy of life,” and a desire to communicate in some way or another with other individuals of the same or of other species—in short, a manifestation of sociability proper, which is a distinctive feature of all the animal world.

To exercise one’s capacities to their fullest extent is to take pleasure in one’s own existence, and with sociable creatures, such pleasures are proportionally magnified when performed in company. From the Russian perspective, this does not need to be explained. It is simply what life is. We don’t have to explain why creatures desire to be alive. Life is an end in itself. And if what being alive actually consists of is having powers—to run, jump, fight, fly through the air—then surely the exercise of such powers as an end in itself does not have to be explained either. It’s just an extension of the same principle.

Friedrich Schiller had already argued in 1795 that it was precisely in play that we find the origins of self-consciousness, and hence freedom, and hence morality. “Man plays only when he is in the full sense of the word a man,” Schiller wrote in his On the Aesthetic Education of Man, “and he is only wholly a Man when he is playing.” If so, and if Kropotkin was right, then glimmers of freedom, or even of moral life, begin to appear everywhere around us.

It’s hardly surprising, then, that this aspect of Kropotkin’s argument was ignored by the neo-Darwinists. Unlike “the problem of altruism,” cooperation for pleasure, as an end in itself, simply could not be recuperated for ideological purposes. In fact, the version of the struggle for existence that emerged over the twentieth century had even less room for play than the older Victorian one. Herbert Spencer himself had no problem with the idea of animal play as purposeless, a mere enjoyment of surplus energy. Just as a successful industrialist or salesman could go home and play a nice game of cribbage or polo, why should those animals that succeeded in the struggle for existence not also have a bit of fun? But in the new full-blown capitalist version of evolution, where the drive for accumulation had no limits, life was no longer an end in itself, but a mere instrument for the propagation of DNA sequences—and so the very existence of play was something of a scandal.

Why Me?

It’s not just that scientists are reluctant to set out on a path that might lead them to see play—and therefore the seeds of self-consciousness, freedom, and moral life—among animals. Many are finding it increasingly difficult to come up with justifications for ascribing any of these things even to human beings. Once you reduce all living beings to the equivalent of market actors, rational calculating machines trying to propagate their genetic code, you accept that not only the cells that make up our bodies, but whatever beings are our immediate ancestors, lacked anything even remotely like self-consciousness, freedom, or moral life—which makes it hard to understand how or why consciousness (a mind, a soul) could ever have evolved in the first place.

American philosopher Daniel Dennett frames the problem quite lucidly. Take lobsters, he argues—they’re just robots. Lobsters can get by with no sense of self at all. You can’t ask what it’s like to be a lobster. It’s not like anything. They have nothing that even resembles consciousness; they’re machines. But if this is so, Dennett argues, then the same must be assumed all the way up the evolutionary scale of complexity, from the living cells that make up our bodies to such elaborate creatures as monkeys and elephants, who, for all their apparently human-like qualities, cannot be proved to think about what they do. That is, until suddenly, Dennett gets to humans, which—while they are certainly gliding around on autopilot at least 95 percent of the time—nonetheless do appear to have this “me,” this conscious self grafted on top of them, that occasionally shows up to take supervisory notice, intervening to tell the system to look for a new job, quit smoking, or write an academic paper about the origins of consciousness. In Dennett’s formulation,

Yes, we have a soul. But it’s made of lots of tiny robots. Somehow, the trillions of robotic (and unconscious) cells that compose our bodies organize themselves into interacting systems that sustain the activities traditionally allocated to the soul, the ego or self. But since we have already granted that simple robots are unconscious (if toasters and thermostats and telephones are unconscious), why couldn’t teams of such robots do their fancier projects without having to compose me? If the immune system has a mind of its own, and the hand–eye coordination circuit that picks berries has a mind of its own, why bother making a super-mind to supervise all this?

Dennett’s own answer is not particularly convincing: he suggests we develop consciousness so we can lie, which gives us an evolutionary advantage. (If so, wouldn’t foxes also be conscious?) But the question grows more difficult by an order of magnitude when you ask how it happens—the “hard problem of consciousness,” as David Chalmers calls it. How do apparently robotic cells and systems combine in such a way as to have qualitative experiences: to feel dampness, savor wine, adore cumbia but be indifferent to salsa? Some scientists are honest enough to admit they don’t have the slightest idea how to account for experiences like these, and suspect they never will.

Link: Life as a Nonviolent Psychopath

In 2005, James Fallon’s life started to resemble the plot of a well-honed joke or big-screen thriller: A neuroscientist is working in his laboratory one day when he thinks he has stumbled upon a big mistake. He is researching Alzheimer’s and using his healthy family members’ brain scans as a control, while simultaneously reviewing the fMRIs of murderous psychopaths for a side project. It appears, though, that one of the killers’ scans has been shuffled into the wrong batch.

The scans are anonymously labeled, so the researcher has a technician break the code to identify the individual in his family, and place his or her scan in its proper place. When he sees the results, however, Fallon immediately orders the technician to double check the code. But no mistake has been made: The brain scan that mirrors those of the psychopaths is his own.

After discovering that he had the brain of a psychopath, Fallon delved into his family tree and spoke with experts, colleagues, relatives, and friends to see if his behavior matched up with the imaging in front of him. He not only learned that few people were surprised at the outcome, but that the boundary separating him from dangerous criminals was less determinate than he presumed. Fallon wrote about his research and findings in the book The Psychopath Inside: A Neuroscientist’s Personal Journey Into the Dark Side of the Brain, and we spoke about the idea of nature versus nurture, and what—if anything—can be done for people whose biology might betray their behavior.


One of the first things you talk about in your book is the often unrealistic or ridiculous ways that psychopaths are portrayed in film and television. Why did you decide to share your story and risk being lumped in with all of that?

I’m a basic neuroscientist—stem cells, growth factors, imaging genetics—that sort of thing. When I found out about my scan, I kind of let it go after I saw that the rest of my family’s were quite normal. I was worried about Alzheimer’s, especially along my wife’s side, and we were concerned about our kids and grandkids. Then my lab was busy doing gene discovery for schizophrenia and Alzheimer’s and launching a biotech start-up from our research on adult stem cells. We won an award and I was so involved with other things that I didn’t actually look at my results for a couple of years.

This personal experience really had me look into a field that I was only tangentially related to, and burnished into my mind the importance of genes and the environment on a molecular level. For specific genes, those interactions can really explain behavior. And what is hidden under my personal story is a discussion about the effect of bullying, abuse, and street violence on kids.

You used to believe that people were roughly 80 percent the result of genetics, and 20 percent the result of their environment. How did this discovery cause a shift in your thinking?

I went into this with the bias of a scientist who believed, for many years, that genetics were very, very dominant in who people are—that your genes would tell you who you were going to be. It’s not that I no longer think that biology, which includes genetics, is a major determinant; I just never knew how profoundly an early environment could affect somebody.

While I was writing this book, my mother started to tell me more things about myself. She said she had never told me or my father how weird I was at certain points in my youth, even though I was a happy-go-lucky kind of kid. And as I was growing up, people all throughout my life said I could be some kind of gang leader or Mafioso don because of certain behavior. Some parents forbade their children from hanging out with me. They’d wonder how I turned out so well—a family guy, successful, professional, never been to jail and all that.

I asked everybody that I knew, including psychiatrists and geneticists that have known me for a long time, and knew my bad behavior, what they thought. They went through very specific things that I had done over the years and said, “That’s psychopathic.” I asked them why they didn’t tell me and they said, “We did tell you. We’ve all been telling you.” I argued that they had called me “crazy,” and they all said, “No. We said you’re psychopathic.”

I found out that I happened to have a series of genetic alleles, “warrior genes,” that had to do with serotonin and were thought to put a person at risk for aggression, violence, and low emotional and interpersonal empathy—if you’re raised in an abusive environment. But if you’re raised in a very positive environment, that can have the effect of offsetting the negative effects of some of the other genes.

I had some geneticists and psychiatrists who didn’t know me examine me independently, and look at the whole series of disorders I’ve had throughout my life. None of them have been severe; I’ve had the mild form of things like anxiety disorder and OCD, but it lined up with my genetics.

The scientists said, “For one, you might never have been born.” My mother had miscarried several times and there probably were some genetic errors. They also said that if I hadn’t been treated so well, I probably wouldn’t have made it out of being a teenager. I would have committed suicide or have gotten killed, because I would have been a violent guy.

How did you react to hearing all of this?

I said, “Well, I don’t care.” And they said, “That proves that you have a fair dose of psychopathy.” Scientists don’t like to be wrong, and I’m narcissistic so I hate to be wrong, but when the answer is there before you, you have to suck it up, admit it, and move on. I couldn’t.

I started reacting with narcissism, saying, “Okay, I bet I can beat this. Watch me and I’ll be better.” Then I realized my own narcissism was driving that response. If you knew me, you’d probably say, “Oh, he’s a fun guy”–or maybe, “He’s a big-mouth and a blowhard narcissist”—but I also think you’d say, “All in all, he’s interesting, and smart, and okay.” But here’s the thing—the closer to me you are, the worse it gets. Even though I have a number of very good friends, they have all ultimately told me over the past two years when I asked them—and they were consistent even though they hadn’t talked to each other—that I do things that are quite irresponsible. It’s not like I say, Go get into trouble. I say, Jump in the water with me.

What’s an example of that, and how do you come back from hurting someone in that way?

For me, because I need these buzzes, I get into dangerous situations. Years ago, when I worked at the University of Nairobi Hospital, a few doctors had told me about AIDS in the region as well as the Marburg virus. They said a guy had come in who was bleeding out of his nose and ears, and that he had been up on Mount Elgon, in the Kitum Caves. I thought, “Oh, that’s where the elephants go,” and I knew I had to visit. I would have gone alone, but my brother was there. I told him it was an epic trek to where the old matriarch elephants went to retrieve minerals in the caves, but I didn’t mention anything else.

When we got there, there was a lot of rebel activity on the mountain, so there was nobody in the park except for one guard. So we just went in. There were all these rare animals and it was tremendous, but also, this guy had died from Marburg after being here, and nobody knew exactly how he’d gotten it. I knew his path and followed it to see where he camped.

That night, we wrapped ourselves around a fire because there were lions and all these other animals. We were jumping around and waving sticks on fire at the animals in the absolute dark. My brother was going crazy and I joked, “I have to put my head inside of yours because I have a family and you don’t, so if a lion comes and bites one of our necks, it’s gotta be you.”

Again, I was joking around, but it was a real danger. The next day, we walked into the Kitum Caves and you could see where rocks had been knocked over by the elephants.  There was also the smell of all of this animal dung—and that’s where the guy got the Marburg; scientists didn’t know whether it was the dung or the bats.

A bit later, my brother read an article in The New Yorker about Marburg, which inspired the movie Outbreak. He asked me if I knew about it. I said, “Yeah. Wasn’t it exciting? Nobody gets to do this trip.” And he called me names and said, “Not exciting enough. We could’ve gotten Marburg; we could have gotten killed every two seconds.” All of my brothers have a lot of machismo and brio; you’ve got to be a tough guy in our family. But deep inside, I don’t think that my brother fundamentally trusts me after that. And why should he, right? To me, it was nothing.

After all of this research, I started to think of this experience as an opportunity to do something good out of being kind of a jerk my entire life. Instead of trying to fundamentally change—because it’s very difficult to change anything—I wanted to use what could be considered faults, like narcissism, to an advantage; to do something good.

What has that involved?

I started with simple things of how I interact with my wife, my sister, and my mother. Even though they’ve always been close to me, I don’t treat them all that well. I treat strangers pretty well—really well, and people tend to like me when they meet me—but I treat my family the same way, like they’re just somebody at a bar. I treat them well, but I don’t treat them in a special way. That’s the big problem.

I asked them this—it’s not something a person will tell you spontaneously—but they said, “I give you everything. I give you all this love and you really don’t give it back.” They all said it, and that sure bothered me. So I wanted to see if I could change. I don’t believe it, but I’m going to try.

In order to do that, every time I started to do something, I had to think about it, look at it, and go: No. Don’t do the selfish thing or the self-serving thing. Step-by-step, that’s what I’ve been doing for about a year and a half and they all like it. Their basic response is: We know you don’t really mean it, but we still like it.

I told them, “You’ve got to be kidding me. You accept this? It’s phony!” And they said, “No, it’s okay. If you treat people better it means you care enough to try.” It blew me away then and still blows me away now. 

But treating everyone the same isn’t necessarily a bad thing, is it? Is it just that the people close to you want more from you?

Yes. They absolutely expect and demand more. It’s a kind of cruelty, a kind of abuse, because you’re not giving them that love. My wife to this day says it’s hard to be with me at parties because I’ve got all these people around me, and I’ll leave her or other people in the cold. She is not a selfish person, but I can see how it can really work on somebody.

I gave a talk two years ago in India at the Mumbai LitFest on personality disorders and psychopathy, and we also had a historian from Oxford talk about violence against women in terms of the brain and social development. After it was over, a woman came up to me and asked if we could talk. She was a psychiatrist but also a science writer and said, “You said that you live in a flat emotional world—that is, that you treat everybody the same. That’s Buddhist.” I don’t know anything about Buddhism but she continued on and said, “It’s too bad that the people close to you are so disappointed in being close to you. Any learned Buddhist would think this was great.” I don’t know what to do with that.

Sometimes the truth is not just that it hurts, but that it’s just so disappointing. You want to believe in romance and have romance in your life—even the most hardcore, cold intellectual wants the romantic notion. It kind of makes life worth living. But with these kinds of things, you really start thinking about what a machine it means we are—what it means that some of us don’t need those feelings, while some of us need them so much. It destroys the romantic fabric of society in a way.

So what I do, in this situation, is think: How do I treat the people in my life as if I’m their son, or their brother, or their husband? It’s about going the extra mile for them so that they know I know this is the right thing to do. I know when the situation comes up, but my gut instinct is to do something selfish. Instead, I slow down and try to think about it. It’s like dumb behavioral modification; there’s no finesse to this, but I said, well, why does there have to be finesse? I’m trying to treat it as a straightaway thing, when the situation comes up, to realize there’s a chance that I might be wrong, or reacting in a poor way, or without any sort of love—like a human.

A few years ago there was an article in The New York Times called, “Can You Call a 9-Year-Old a Psychopath?" The subject was a boy named Michael whose family was concerned about him—he’d been diagnosed with several disorders and eventually deemed a possible psychopath by Dan Waschbusch, a researcher at Florida International University who studies "callous unemotional children." Dr. Waschbusch examines these children in hopes of finding possible treatment or rehabilitation. You mentioned earlier that you don’t believe people can fundamentally change; what is your take on this research?

In the ’70s, when I was still a postdoc and a young professor, I started working with some psychiatrists and neurologists who would tell me that they could identify a probable psychopath when he or she was only 2 or 3 years old. I asked them why they didn’t tell the parents and they said, “There’s no way I’m going to tell anybody. First of all, you can’t be sure; second of all, it could destroy the kid’s life; and third of all, the media and the whole family will be at your door with sticks and knives.” So, when Dr. Waschbusch went public two years ago, it was like, “My god. He actually said it.” This was something that all psychiatrists and neurologists in the field knew—especially if they were pediatric psychologists and had the full trajectory of a kid’s life. It can be recognized very, very early—certainly before age 9—but by that time the question of how to un-ring the bell is a tough one.

My bias is that even though I work in growth factors, plasticity, memory, and learning, I think the whole idea of plasticity in adults—or really after puberty—is so overblown. No one knows if the changes that have been shown are permanent and it doesn’t count if it’s only temporary. It’s like the Mozart Effect—sure, there are studies saying there is plasticity in the brain using a sound stimulation or electrical stimulation, but talk to this person in a year or two. Has anything really changed? An entire cottage industry was made from playing Mozart to pregnant women’s abdomens. That’s how the idea of plasticity gets out of hand. I think people can change if they devote their whole life to the one thing and stop all the other parts of their life, but that’s what people can’t do. You can have behavioral plasticity and maybe change behavior with parallel brain circuitry, but the number of times this happens is really rare.

So I really still doubt plasticity. I’m trying to do it by devoting myself to this one thing—to being a nice guy to the people that are close to me—but it’s a sort of game that I’m playing with myself because I don’t really believe it can be done, and it’s a challenge.

In some ways, though, the stakes are different for you because you’re not violent—and isn’t that the concern? Relative to your own life, your attempts to change may positively impact your relationships with your friends, family, and colleagues. But in the case of possibly violent people, they may harm others.

The jump from being a “prosocial” psychopath or somebody on the edge who doesn’t act out violently, to someone who really is a real, criminal predator is not clear. For me, I think I was protected because I was brought up in an upper-middle-class, educated environment with very supportive men and women in my family. So there may be a mass convergence of genetics and environment over a long period of time. But what would happen if I lost my family or lost my job; what would I then become? That’s the test.

For people who have the fundamental biology—the genetics, the brain patterns, and that early existence of trauma—first of all, if they’re abused they’re going to be pissed off and have a sense of revenge: I don’t care what happens to the world because I’m getting even. But a real, primary psychopath doesn’t need that. They’re just predators who don’t need to be angry at all; they do these things because of some fundamental lack of connection with the human race, and with individuals, and so on.

Someone who has money, and sex, and rock and roll, and everything they want may still be psychopathic—but they may just manipulate people, or use people, and not kill them. They may hurt others, but not in a violent way. Most people care about violence—that’s the thing. People may say, “Oh, this very bad investment counselor was a psychopath”—but the essential difference in criminality between that and murder is something we all hate and we all fear. It just isn’t known if there is some ultimate trigger. 

Link: The New Revolutionaries: Climate Scientists Demand Radical Change

To prevent catastrophic climate change, Britain’s top experts call for emissions cuts that require “revolutionary change to the political and economic hegemony.”

“Today, after two decades of bluff and lies, the remaining 2°C budget demands revolutionary change to the political and economic hegemony.”[1] That was in a blog posting last year by Kevin Anderson, Professor of Energy and Climate Change at Manchester University. One of Britain’s most eminent climate scientists, Anderson is also Deputy Director of the Tyndall Centre for Climate Change Research.

Or, we might take this blunt message, from an interview in November: “We need bottom-up and top-down action. We need change at all levels.”[2] Uttering those words was Tyndall Centre senior research fellow and Manchester University reader Alice Bows-Larkin. Anderson and Bows-Larkin are world-leading specialists on the challenges of climate change mitigation.

During December, the two were key players in a Radical Emission Reduction Conference, sponsored by the Tyndall Centre and held in the London premises of Britain’s most prestigious scientific institution, the Royal Society. The “radicalism” of the conference title referred to a call by the organisers for annual emissions cuts in Britain of at least 8 per cent – twice the rate commonly cited as possible within today’s economic and political structures.

The conference drew keen attention and wide coverage. In Sydney, the Murdoch-owned Daily Telegraph described the participants as “unhinged” and “eco-idiots,” going on to quote a “senior climate change adviser” for Shell Oil as stating:

“This was a room of catastrophists (as in ‘catastrophic global warming’), with the prevailing view…that the issue could only be addressed by the complete transformation of the global energy and political systems…a political ideology conference.”[3]

Indeed. The traditional “reticence” of scientists, which in the past has seen them mostly stick to their specialities and avoid comment on the social and political implications of their work, is no longer what it was.

Angered

Climate scientists have been particularly angered by the refusal of governments to act on repeated warnings about the dangers of climate change. Adding to the researchers’ bitterness, in more than a few cases, have been demands placed on them to soft-pedal their conclusions so as to avoid showing up ministers and policy-makers. Pressures to avoid raising “fundamental and uncomfortable questions” can be strong, Anderson explained to an interviewer last June.

“Scientists are being cajoled into developing increasingly bizarre sets of scenarios…that are able to deliver politically palatable messages. Such scenarios underplay the current emissions growth rate, assume ludicrously early peaks in emissions and translate commitments ‘to stay below [warming of] 2°C’ into a 60 to 70 per cent chance of exceeding 2°C.”[4]

Anderson and Bows-Larkin have been able to defy such pressures to the extent of co-authoring two remarkable, related papers, published by the Royal Society in 2008 and 2011.

In the second of these, the authors draw a distinction between rich and poor countries (technically, the UN’s “Annex 1” and “non-Annex 1” categories), while calculating the rates of emissions reduction in each that would be needed to keep average global temperatures within 2 degrees of pre-industrial levels.

The embarrassing news for governments is that the rich countries of Annex 1 would need to start immediately to cut their emissions at rates of about 11 per cent per year. That would allow the non-Annex 1 countries to delay their “peak emissions” to 2020, while developing their economies and raising living standards.

But the poor countries too would then have to start cutting their emissions at unprecedented rates – and the chance of exceeding 2 degrees of warming would still be around 36 per cent.[5] Even for a 50 per cent chance of exceeding 2 degrees, the rich countries would need to cut their emissions each year by 8-10 per cent.[6]

As Anderson points out, it is virtually impossible to find a mainstream economist who would see annual emissions reductions of more than 3-4 per cent as compatible with anything except severe recession, given an economy constituted along present lines.[7]
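The arithmetic of compounding cuts makes the gulf concrete. Here is a minimal sketch in Python (the annual cut rates are those cited above; the 20-year horizon is chosen purely for illustration):

    # Remaining fraction of today's emissions after n years of steady
    # annual percentage cuts: (1 - rate) ** n.
    for rate in (0.03, 0.04, 0.08, 0.10, 0.11):   # annual cut rates cited above
        remaining = (1 - rate) ** 20              # 20-year horizon (illustrative)
        print(f"{rate:.0%}/yr for 20 years leaves {remaining:.0%} of today's emissions")

Cuts of 3-4 per cent a year still leave 44-54 per cent of today’s emissions after two decades; at the 10-11 per cent rates Anderson and Bows-Larkin calculate for the Annex 1 countries, only about a tenth remains.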

Four degrees?

What if the world kept its market-based economies, and after a peak in 2020, started reducing its emissions by this “allowable” 3-4 per cent? In their 2008 paper, Anderson and Bows-Larkin present figures that suggest a resulting eventual level of atmospheric carbon dioxide equivalent of 600-650 parts per million.[8] Climate scientist Malte Meinshausen estimates that 650 ppm would yield a 40 per cent chance of exceeding not just two degrees, but four.[9]

Anderson in the past has spoken out on what we might expect a “four-degree” world to be like. In a public lecture in October 2011 he described it as “incompatible with organised global community”, “likely to be beyond ‘adaptation’” and “devastating to the majority of ecosystems”. Moreover, a four-degree world would have “a high probability of not being stable”. That is, four degrees would be an interim temperature on the way to a much higher equilibrium level.[10]

In remarks reported in the Scotsman newspaper in 2009, he focused on the human element:

“I think it’s extremely unlikely that we wouldn’t have mass death at 4C. If you have got a population of nine billion by 2050 and you hit 4C, 5C or 6C, you might have half a billion people surviving.”[11]

No wonder intelligent people are in revolt.

Market methods?

Anderson has also emerged as a powerful critic of the orthodoxy that emissions reduction must be based on market methods if it is to have a chance of working. His views on this point were brought into focus last October in a sharp rejoinder to United Nations climate-change chief – and market enthusiast – Rajendra Pachauri:

“I disagree strongly with Dr Pachauri’s optimism about markets and prices delivering on the international community’s 2°C commitments,” the British Independent quoted Anderson as saying. “I hold that such a market-based approach is doomed to failure and is a dangerous distraction from a comprehensive regulatory and standard-based framework.”[12]

Anderson’s critique of market-led abatement schemes centres on his conclusion that the two-degree threshold “is no longer deliverable through gradual mitigation, but only through deep cuts in emissions, i.e., non-marginal reductions at almost step-change levels.

“By contrast, a fundamental premise of contemporary neo-classical economics is that markets (including carbon markets) are only efficient at allocating scarce resources when the changes being considered are very small – i.e. marginal.

“For a good chance of staying below two degrees Celsius,” Anderson notes, “future emissions from the EU’s energy system … need to reduce at rates of around 10 per cent per annum – mitigation far below what marginal markets can reasonably be expected to deliver.”[13]

If an attempt were made to secure these reductions through cap-and-trade methods, he argues, “the price would almost certainly be beyond anything described as marginal (probably many hundreds of euros per tonne) – hence the great ‘efficiency’ and ‘least-cost’ benefits claimed for markets would no longer apply.”[14]

At the same time, the equity and social justice implications would be devastating. “A carbon price can always be paid by the wealthy,” Anderson points out.

“We may buy a slightly more efficient 4WD/SUV, cut back a little on our frequent flying, consider having a smaller second home…but overall we’d carry on with our business as usual. Meanwhile, the poorer sections of our society…would have to cut back still further in heating their inadequately insulated and badly designed rented properties.”[15]

Energy agenda

In the short-term, Anderson argues, a two-degree energy agenda requires “rapid and deep reductions in energy demand, beginning immediately and continuing for at least two decades.” This could buy time while a low-carbon energy supply system is constructed. A “radical plan” for emissions reduction, he indicates, is among the projects under way within the Tyndall Centre.[16]

The cost of emissions cuts, he insists, needs to fall on “those people primarily responsible for emitting.”[17] As quoted by writer Naomi Klein, Anderson estimates that 1-5 per cent of the population is responsible for 40-60 per cent of carbon pollution.[18]

While not rejecting price mechanisms in a supporting role, Anderson argues that the required volume of emissions cuts can only be achieved through stringent and increasingly demanding regulations. His “provisional and partial list” includes the following:

  •  Strict energy/emission standards for appliances with a clear long-term market signal of the amount by which the standards would annually tighten; e.g. 100 gCO2/km for all new cars commencing 2015 and reducing at 10 per cent each year through to 2030 (see the sketch after this list).
  •  Strict energy supply standards; e.g. for electricity 350 gCO2/kWh as the mean emissions level of a supplier’s portfolio of power stations, tightened at ~10 per cent per annum.
  •  A programme of rolling out stringent energy/emission standards for industry equipment.
  •  Stringent minimum efficiency standards for all properties for sale or rent.
  •  World-leading low-energy standards for all new-build houses, offices etc.
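To illustrate the first standard in the list, the year-by-year limit implied by 10 per cent annual tightening can be tabulated directly (a sketch: the 100 gCO2/km starting value in 2015 and the 2030 horizon come from the list above; the rest is illustrative):

    # Vehicle standard: 100 gCO2/km for new cars in 2015, tightening
    # by 10 per cent each year through to 2030.
    limit = 100.0
    for year in range(2015, 2031):
        print(year, f"{limit:.1f} gCO2/km")
        limit *= 0.90   # 10 per cent annual tightening

Compounded, the limit falls to roughly 21 gCO2/km by 2030, about a fifth of its starting value, which conveys what a “clear long-term market signal” of this kind amounts to.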

Enforcing these radical standards, he argues, “could be achieved, at least initially, with existing technologies and at little to no additional cost.”[19]

Economic growth

For a reasonable chance of keeping warming below 2 degrees, Anderson maintains, wealthier countries would need to forgo economic growth for at least ten to twenty years. Here, he bases himself on the conventional wisdom of “integrated assessment modellers”[20] – and arguably gets things quite wrong. Leading American climate blogger Joseph Romm last year came to sharply different conclusions:

“The IPCC’s last review of the mainstream economic literature found that even for stabilization at CO2 levels as low as 350 ppm, ‘global average macro-economic costs’ in 2050 correspond to ‘slowing average annual global GDP growth by less than 0.12 percentage points’.  It should be obvious the net cost is low. Energy use is responsible for the overwhelming majority of emissions, and energy costs are typically about 10 percent of GDP.”[21]

At a time when jobless workers abound, and large amounts of industrial capacity lie unused, mobilising resources and labour to replace polluting equipment could sharply increase Gross Domestic Product. Moreover, account needs to be taken of the absurdities of GDP itself – as a measurement tool that counts as useful activity building prisons and developing weapons systems. Anderson senses some of these contradictions when he states:

“Mitigation rates well above the economists’ 3 to 4 per cent per annum range may yet prove compatible with some form of economic prosperity.”[22]

Indeed, reconstructing our inefficient, polluting industrial system could allow the great majority of us to lead richer, more rewarding lives.

Reprisals

Where Anderson is not wrong is in anticipating, at various points in his blogging and interviews, that any serious move to cut emissions at the required rates will encounter fierce resistance. Huge industrial assets, primarily fossil-fuelled generating plant, would be “stranded”. Already-proven reserves of coal, oil and gas would need to be left in the ground.

Like the scientists accused in 2009 in the spurious “Climategate” affair, the people who spoke out at the Radical Emission Reduction Conference can now expect to feel the blow-torch of conservative reprisals.

Along with Anderson and Bows-Larkin, a particular target is likely to be Tyndall Centre Director Professor Corinne Le Quéré, who presented the scientific case for rapid emissions reduction. Four Australian academics who contributed via weblink, including noted climate scientist Mark Diesendorf, have already come under venomous personal attack in the Daily Telegraph.[23]

The “offence” committed by the Tyndall researchers is much greater than the loosely phrased e-mails that were seized on as the pretext for “Climategate.” With others in the climate-science community, these courageous people have shredded the pretence that polluter corporations and their supporting-act governments care a damn about preserving nature, civilisation, and human life.

Link: Antibiotics, Capitalism and the Failure of the Market

In March 2013, England’s Chief Medical Officer, Dame Sally Davies, gave the stark warning that antimicrobial resistance poses “a catastrophic threat.” Unless we act now, she argued, “any one of us could go into hospital in 20 years for minor surgery and die because of an ordinary infection that can’t be treated by antibiotics. And routine operations like hip replacements or organ transplants could be deadly because of the risk of infection.”[1]

Over billions of years, bacteria have encountered a multitude of naturally occurring antibiotics and have consequently developed resistance mechanisms to survive. The primary emergence of resistance is random, coming about by DNA mutation or gene exchange with other bacteria. However, the further use of antibiotics then favours the spread of those bacteria that have become resistant.

More than 70% of pathogenic bacteria that cause healthcare-acquired infections are resistant to at least one of the drugs most commonly used to treat them.[2][3] Increasing resistance in bacteria like Escherichia coli (E. coli) is a growing public health concern due to the very limited therapy options for infections caused by E. coli. This is particularly so in E. coli that is resistant to carbapenem antibiotics, the drugs of last resort.

The emergence of resistance is a complex issue involving the inappropriate use and overuse of antimicrobials in humans and animals. Antibiotics may be administered by health professionals or farmers when they are not required, or patients may take only part of a full course of treatment. This gives bacteria the opportunity to encounter these otherwise life-saving drugs at ineffective levels, survive, and mutate to produce resistant strains. Once created, these resistant strains have been allowed to spread by poor infection control and weak regional surveillance procedures.
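The selection dynamic described here is simple enough to caricature numerically. The following is a toy sketch, not an epidemiological model: the starting frequencies, growth rates and timing are all invented for illustration.

    # Toy model of selection for resistance. The antibiotic suppresses
    # the susceptible strain but not the (initially rare) resistant one,
    # so once treatment starts the resistant strain sweeps upward.
    susceptible, resistant = 0.999, 0.001     # starting frequencies (invented)
    for day in range(30):
        treated = day >= 10                   # a course of antibiotics begins on day 10
        s_growth = 1.1 if treated else 1.5    # the drug slows the susceptible strain
        r_growth = 1.5                        # the resistant strain is unaffected
        susceptible *= s_growth
        resistant *= r_growth
        total = susceptible + resistant       # renormalise to frequencies
        susceptible, resistant = susceptible / total, resistant / total
    print(f"resistant fraction after 30 days: {resistant:.0%}")

Nothing in the sketch creates resistance: the resistant strain is present from the start, and the drug merely changes which strain the environment favours, which is the point made above.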

These two problems are easily solved by educating healthcare professionals, patients and animal keepers about the importance of antibiotic treatment regimens and keeping to them. Advocating good infection control procedures in hospitals, and investing in surveillance programs monitoring patterns of resistance locally and across the country, would reduce the spread of infection. However, the biggest problem is capitalism and the fact that there is no supply of new antimicrobials.

Between 1929 and the 1970s pharmaceutical companies developed more than twenty new classes of antimicrobials.[4][5] Since the 1970s only two new categories of antimicrobials have arrived.[6][7] Today the pipeline for new antibiotic classes active against highly resistant Gram-negative bacteria is dry;[8][9][10] the only novel category in early clinical development has recently been withdrawn.[9][11]

For the last seventy years the human race has kept itself ahead of resistant bacteria by going back into the laboratory and developing the next generation of antimicrobials. However, due to a failure of the market, pharmaceutical companies are no longer interested in developing antibiotics.

Despite the warnings from Dame Sally Davies, drug companies have pulled back from antimicrobial research because there is no profit to be made from it. When used appropriately, a single £100 course of antibiotics will save someone’s life. However, that clinical effectiveness and short-term use have the unfortunate consequence of making antimicrobials significantly less profitable than the pharmaceuticals used in cancer therapy, which can cost £20,000 per year.

In our current system, a drug company’s return on its financial investment in antimicrobials is dependent on its volume of sales. A further problem arises when we factor in the educational programs aimed at teaching healthcare professionals and animal keepers to limit their use of antimicrobials. This, combined with the relative unprofitability, has produced a failure in the market and a paradox for capitalism.

A response commonly proposed by my fellow scientists is that our government must provide incentives for pharmaceutical companies to develop new antimicrobial drugs. Suggestions focus primarily on reducing the financial risk for drug companies and include grants, prizes, tax breaks, creating public-private partnerships and increasing intellectual property protections. Further suggestions often relate to removing “red tape” and streamlining the drug approval and clinical trial requirements.

In September 2013 the Department of Health published its UK Five Year Antimicrobial Resistance Strategy.[12] The document called for “work to reform and harmonise regulatory regimes relating to the licencing and approval of antibiotics”, better collaboration “encouraging greater public-private investment in the discovery and development of a sustainable supply of effective new antimicrobials” and states that “Industry has a corporate and social responsibility to contribute to work to tackle antimicrobial resistance.”

I think we should have three major objections to these statements. First, the managers in the pharmaceutical industry do not have any responsibility to contribute to work to tackle antimicrobial resistance. They have a responsibility to practise within the law or be fined, and to make profit for shareholders or be replaced. It is the state that has the responsibility for the protection and wellbeing of its citizens.

Secondly, following this year’s horsemeat scandal we should object to companies cutting corners in an attempt to increase profits. This leads on to the final objection: that by promoting public-private collaboration, all the state is doing is subsidising shareholder profits by reducing the shareholders’ financial risk.

The market has failed and novel antimicrobials will require investment not based on a financial return from the volume of antibiotics sold but on the benefit for society of being free from disease.

John Maynard Keynes, in his 1924 Sidney Ball Lecture at Oxford, said “the important thing for government is not to do things which individuals are doing already, and to do them a little better or a little worse; but to do those things which at present are not done at all”.[13] Mariana Mazzucato, in her 2013 book The Entrepreneurial State, discusses how the state can lead innovation and criticises the risk and reward relationships in current public-private partnerships.[14] Mazzucato argues that the state can be entrepreneurial and inventive and that we need to reinvent the state and government.

This praise of the potential of the state seems to be supported by the public: following announcements of energy price rises in October 2013, a YouGov poll found that people opposed the NHS being run by the private sector by 12 to 1; 67% were in favour of Royal Mail being run in the public sector; 66% wanted the railway companies to be nationalised; and 68% were in favour of nationalised energy companies.[15]

We should support state-funded professors, post-doctoral researchers and PhD students as scientists working within the public sector. They could study the mechanisms of drug entry into bacterial cells or screen natural antibiotic compounds. This could not be done on a shoestring budget, and it would no doubt take years to build the infrastructure, but we could, for example, make the case for where the research should take place.

Andrew Witty’s recent review of higher education and regional growth asked universities to become more involved in their local economies.[16] The state could choose to build laboratories in geographical areas neglected by private sector investment and help promote regional recovery. Even more radically, if novel antibiotics are produced for their social good rather than financial gain, they can be reserved indefinitely until a time of crisis.

With regard to democracy, patients and the general public could have a greater say in what is researched, helping to shift us away from our reliance on the market to provide what society needs. The market responds, not to what society needs, but to what will create the most profit. This is a recurring theme throughout science. I cannot begin to tell you how frequently I listen to case studies regarding parasites which only affect people in the developing world. Because the people of the developing world have very little money, drug companies neglect to develop drugs for them, as there is no source of profit. We should make the case for innovation not to be driven by greed but for the service of society and even our species.

Before Friedrich Hayek, John Desmond Bernal, in his 1939 book The Social Function of Science, argued for more spending on innovation, as science was not merely an abstract intellectual enquiry but of real practical value.[17] Bernal placed science and technology among the driving forces of history. Why should we not follow that path?

Link: Homo Scientificus According to Beckett

DAVIDSON: The original title suggested to our speaker by our valiant organizer was Basic Research Responsibilities. The title submitted by the speaker to the calendar is “Homo Scientificus According to Beckett”. As far as I know there are two Becketts in history. One of them got killed in a cathedral and the other got a Nobel Prize for writing plays. That’s all I know about the seminar and I’m looking forward to hearing it.

DELBRÜCK: In December 1970 Bill Beranek wrote me a letter saying that he wanted one of these sessions devoted to the subject: “The Responsibility of the Scientist to Society with Respect to Pure Basic Research”. He added a number of questions, which I will quickly answer, as best I can.

Q. 1: Is pure science to be regarded as overall beneficial to society?

A: It depends much on what you consider benefits. If you look at health, long life, transportation, communication, education, you might be tempted to say ”yes”. If you look at the enormous social-economic dislocations, and at strains on our psyches due to the imbalance between technical developments and our limited ability to adjust to the pace of change, you might be tempted to say “no”. Clearly, the present state of the world — to which science has contributed much — leaves a great deal to be desired, and much to be feared, so I write down:

(1) Q: SCIENCE BENEFICIAL? A: DOUBTFUL.

Q. 2: Is pure science to be considered as something potentially harmful?

A: Most certainly! Every child knows that it is potentially exceedingly harmful. Our lecture series here on environmental problems concerns just a small aspect. The menace of blowing ourselves up by atom bombs, doing ourselves in by chemical or biological warfare, or by population explosion is certainly with us. I consider the environment thing a trivial question by comparison, like housekeeping. In any home, the dishes have to be washed, the floors swept, the beds made, and there must be rules as to who is allowed to produce how much stink and noise, and where in the house. When the garbage piles up, these questions become pressing. But they are momentary problems. Once the house is in order, you still want to live in it, not just sit around enjoying its orderliness. I would be sorry to see Caltech move heavily into this type of applied research.

(2) Q: SCIENCE POTENTIALLY HARMFUL? A: DEFINITELY.

Q. 3: Should a scientist consider possible ramifications of his research and their effects on society, or is this something not only difficult to do but perhaps better done by others?

A: I think it is impossible for anybody, scientist or not, to foresee the ramifications. We might say that that is a definition of basic science. Vide Einstein’s discovery in 1905 of the equivalence of mass and energy and the development of atomic weaponry.

(3) Q: CONSIDER RAMIFICATIONS? A: IMPOSSIBLE.

So much for Bill’s original questions in December.

I agreed to come to the lectures and then decide whether I thought I had something to contribute. After having listened to a series of lectures on environmental problems, such as lead poisoning, mercury poisoning, on smog, on waste disposal, on fuel additives, and to Dan Kevles’ and George Hammond’s more general talks, I told Bill that I had found the series interesting and worthwhile but that I felt most uneasy about where I might fit in. So he wrote me another letter. Tenacious guy. With more questions. These again I can answer in short order.

Q. 4: Why did you choose science as your life’s work?

A: I think the most relevant answer that I can give to this question is this: I found out at an early age that science is a haven for the timid, the freaks, the misfits. That is more true perhaps for the past than now. If you were a student in Göttingen in the ’20s and went to the seminar “Structure of Matter,” which was under the joint auspices of David Hilbert and Max Born, then as you walked in there you could well imagine that you were in a madhouse. Every one of the persons there was obviously some kind of a severe case. The least you could do was put on some kind of a stutter. Robert Oppenheimer as a graduate student found it expedient to develop a very elegant kind of stutter, the “njum-njum-njum” technique. Thus, if you were an oddball you felt at home.

(4) Q: WHY SCIENTIFIC CAREER? A: A HAVEN FOR FREAKS.

Q. 5: What is the history of your research?

A: Perhaps the most relevant aspect is that it throve under adversity. The two periods that I have in mind were (1) in Germany in the middle ’30s under the Nazis, when things became quite unpleasant and official seminars became dull. Many people emigrated; others did not leave but were not permitted to come to official seminars. We had a little private club which I had organized and which met about once a week, mostly at my mother’s house. First just theoretical physicists (I was at that time a theoretical physicist), and then theoretical physicists and biologists. The discussions we had at that time have had a remarkable long-range effect, an effect which astonished us all. This was one adverse situation. Like the great Plague in Florence in 1348, which is the background setting for Boccaccio’s Decameron. The other one was in this country in the ’40s during the war. I came over in ’37 and was in this country during the war as an enemy alien. And as an enemy alien I secured a job as an instructor of physics at Vanderbilt University in Nashville, Tennessee. You might think that this was a very unpropitious place to be, but it worked out fine. I spent 7 1/2 years there. This situation gave me, in association with Luria (another enemy alien) and in close contact with Hershey (another misfit in society), sufficient leisure to do the first phase of phage research which has become a cornerstone of molecular genetics.

I would not want to generalize to the extent that adversity is the only road to effective innovative science or art, but the progress of science is often spectacularly disorderly. James Joyce once commented that he survived by “cunning and exile” (and, you might add, by a genius for borrowing money from a number of ladies). I got along all right with the head of the Physics Department at Vanderbilt. He wanted me to do as much physics teaching as possible and as little biology research as possible. I had the opposite desires. We understood each other’s attitudes and accommodated each other to a reasonable extent. So, things worked out quite well. At the end of the war I was the oldest instructor on the campus.

(5) Q: HISTORY OF YOUR RESEARCH? A: THROVE UNDER ADVERSITY.

Q. 6: Why do you think society should pay for basic research?

A: Did I say that society should pay for basic research? I didn’t. Society does so to a varying extent, and it always astonishes me that it does. It has been part of the current dogma that basic research is good for society but I would be the last to be dogmatic about the number of dollars society should put up for this goodness. Since I answered the first question with “Doubtful”, I cannot very well be emphatic in answer to this one.

(6) Q: SOCIETY PAY FOR RESEARCH? A: HOW MUCH?

Q. 7: How much control do you feel society should have in deciding which questions you should ask in your research?

A: Society can, and does, and must control research enormously, negatively and positively, by selectively cutting off or supplying funds. At present it cuts — not so selectively. That is all right with me, as far as my own research is concerned. I certainly do not think society owes me a living, or support for my research. If it does not support my research, I can always do something else and not be worse off, perhaps better. However, the question, from society’s point of view, is exceedingly complicated. I have no strong views on the matter.

(7) Q: CONTROL OF RESEARCH BY SOCIETY? A: A COMPLICATED MATTER, LARGELY OF PROCEDURE.

Q. 8: Is there an unwritten scientific oath analogous to the Hippocratic oath which would ask all scientists to use their special expertise and way of thinking to guard against the bad effects of science on society, especially today when science is acknowledged to play such a large part in the lives of individuals?

A: The original Hippocratic oath, of course, says that you should keep the patient alive under all circumstances. Also that you shouldn’t be bribed, shouldn’t give poisons, should honor your teachers, and things like that, but essentially to keep the patient alive. And that’s a reasonably well defined goal, since keeping the patient alive is biologically unambiguous. But to use science for the good of society is not so well defined, therefore I think such an oath could never be written. The only unwritten oath is of course that you should be reasonably honest, and that is in fact carried out to the extent that, although many things that you read in the journals are wrong, it is assumed that the author at least believed that he was right. So much so that if somebody deliberately sets out to cheat he can get away with it for years. There are a number of celebrated cases of cheating or hoaxes that would make a long story. But our whole scientific discourse is based on the premise that everybody is trying at least to tell the truth, within the limits of his personality; that can be some limit.

(8) Q: HIPPOCRATIC OATH? A: IMPOSSIBLE TO BE UNAMBIGUOUS.

Q. 9: Is science something we do mainly for its own sake, like art or music, or is it something we use as a tool for bettering our physical existence?

A: This is a question that turns me on. I think that it bristles with popular misconceptions about the nature of Homo scientificus, and therefore maybe I have something to say. Let me start by reading a few passages from a paper on this species, hitherto unpublished, written in 1942 by a rather perceptive friend … a non-scientist:

The species Homo scientificus constitutes a branch of the family Homo modernibus, a species easy and interesting to observe but difficult and perplexing to understand. There are a number of varieties and sub-varieties ranging from the lowliest to the highest. We begin with the humble professorius scientificus, whose inclusion in this species is questionable, pass on up through the geologia and the large groups of the chemisto and biologia, with their many hybrids, to the higher orders of the physicistus and mathematicus, and finally to the lordly theoretica physicistus, rarely seen in captivity.

Habitat: These animals range the North American and European continents, and are seldom seen in South America, Africa, or Asia, although a few isolated cases are known in Australia and Russia. [This was written in 1942.] Individuals of the lower orders thrive in most sections of Europe and America but those of the higher orders are to be found only in a few localities, where they live together in colonies. These colonies provide a valuable research field; here one can wander about noting the size, structure, and actions of these peculiar creatures. There is little to fear, for although they may approach one with great curiosity, and attempt to lead one to their lairs, they are not known to be dangerous.

Description: Recent studies of this as yet little-understood species have ascertained a number of characteristics by which they may be distinguished. The brain is large and often somewhat soft in spots. In some cases the head is covered with masses of thick, unkempt wool, in others it is utterly devoid of hair and shines like a doorknob. Sometimes there is hair on the face but it never covers the nose. The body covering, when there is any, is without particular color or form; the general appearance is definitely shaggy. The male scientificus does not, like the cock or the lion or the bull, delight in flaunting elegantly before the female to catch her eye. Evidently the female is attracted by some other method. We are at a loss as to what this could be, although we have often observed the male scurrying after the female with a wuffley expression on his face. Sometimes he brings her a little gift, such as a bundle of bristles or a bright piece of cellophane, which she accepts tenderly and the trick is done. Occasionally an old king appears from the colony, surrounded by workers. He has soft grey hair on his face, and a pot belly. Scientificus is a voracious eater; this is not strange, for he consumes a great deal of energy each day in playing. In fact, he is one of the best playing animals known.

The scientificus undoubtedly have a language of their own. They take pleasure in jabbering to each other and often one will stand several hours before a group, holding forth in a monologue; the listeners are for the most part quiet, and some may even be asleep. However meaningful this language may be to them, it is utterly incomprehensible to us. Perhaps the thing which endears this mysterious creature to us most is his disposition; although there exists a kind of slavery (the laboratorio assistantia being captured to do the dirty work), the scientificus does not prey on other animals of his species and he is neither cruel, sly, nor domineering. [The author had only studied the species for one year at that time.] He is an easygoing animal; he will not, for example, work hard to construct a good dwelling, but is content to live in a damp basement so long as he can spend most of the day sitting in the sun and rummaging among his strange possessions.

The paper then goes on into more detail about the biologia. We will let this suffice by way of a general description of Homo scientificus. The description is nice as far as it goes, but too superficial.

Now I want to switch gears and read another piece which I think goes to the heart of the matter. This is taken from the novel Molloy by Samuel Beckett. Beckett not only wrote plays (Happy Days, Krapp’s Last Tape, Endgame, and Waiting for Godot) but also a number of novels that are less well known. This one, Molloy, published in the ’50s, concerns an exceedingly lonely and decrepit old man, and the whole book is a kind of a soliloquy that he writes down about his life. I have picked one episode that I hope will illustrate the point I want to make (without having to rub it in too much). There will be slides to go with this reading so as to make the argument perfectly clear. At the time of this episode Molloy is a beachcomber at some lonely place.

I took advantage of being at the seaside to lay in a store of sucking-stones. They were pebbles but I call them stones. Yes, on this occasion I laid in a considerable store. I distributed them equally between my four pockets, and sucked them turn and turn about. This raised a problem which I first solved in the following way. I had say sixteen stones, four in each of my four pockets these being the two pockets of my trousers and the two pockets of my greatcoat.

Taking a stone from the right pocket of my greatcoat, and putting it in my mouth, I replaced it in the right pocket of my greatcoat by a stone from the right pocket of my trousers, which I replaced by a stone from the left pocket of my trousers, which I replaced by a stone from the left pocket of my greatcoat, which I replaced by the stone which was in my mouth, as soon as I had finished sucking it. Thus there were still four stones in each of my four pockets, but not quite the same stones. And when the desire to suck took hold of me again, I drew again on the right pocket of my greatcoat, certain of not taking the same stone as the last time.  And while I sucked it I rearranged the other stones in the way I have just described. And so on.

But this solution did not satisfy me fully. For it did not escape me that, by an extraordinary hazard, the four stones circulating thus might always be the same four. In which case, far from sucking the sixteen stones turn and turn about, I was really only sucking four, always the same, turn and turn about. But I shuffled them well in my pockets, before I began to suck, and again, while I sucked, before transferring them, in the hope of obtaining a more general circulation of the stones from pocket to pocket. But this was only a makeshift that could not long content a man like me. So I began to look for something else.

And the first thing I hit upon was that I might do better to transfer the stones four by four, instead of one by one, that is to say, during the sucking, to take the three stones remaining in the right pocket of my greatcoat and replace them by the four in the right pocket of my trousers, and these by the four in the left pocket of my trousers, and these by the four in the left pocket of my greatcoat, and finally these by the three from the right pocket of my greatcoat, plus the one, as soon as I had finished sucking it, which was in my mouth. Yes, it seemed to me at first that by so doing I would arrive at a better result.

But on further reflection I had to change my mind and confess that the circulation of the stones four by four came to exactly the same thing as their circulation one by one. For if I was certain of finding each time, in the right pocket of my greatcoat, four stones totally different from their immediate predecessors, the possibility nevertheless remained of my always chancing on the same stone, within each group of four, and consequently of my sucking, not the sixteen turn and turn about as I wished, but in fact four only, always the same, turn and turn about. So I had to seek elsewhere than in the mode of circulation. For no matter how I caused the stones to circulate, I always ran the same risk.

It was obvious that by increasing the number of my pockets I was bound to increase my chances of enjoying my stones in the way I planned, that is to say one after the other until their number was exhausted. Had I had eight pockets, for example, instead of the four I did have, then even the most diabolical hazard could not have prevented me from sucking at least eight of my sixteen stones, turn and turn about. The truth is I should have needed sixteen pockets in order to be quite easy in my mind. And for a long time I could see no other conclusion than this, that short of having sixteen pockets, each with its stone, I could never reach the goal I had set myself, short of an extraordinary hazard. And if at a pinch I could double the number of my pockets, were it only by dividing each pocket in two, with the help of a few safety-pins let us say, to quadruple them seemed to be more than I could manage. And I did not feel inclined to take all that trouble for a half-measure.

For I was beginning to lose all sense of measure, after all this wrestling and wrangling, and to say, All or nothing. And if I was tempted for an instant to establish a more equitable proportion between my stones and my pockets, by reducing the former to the number of the latter, it was only for an instant. For it would have been an admission of defeat. And sitting on the shore, before the sea, the sixteen stones spread out before my eyes, I gazed at them in anger and perplexity. For just as I had difficulty in sitting in a chair, or in an arm-chair, because of my stiff leg, you understand, so I had none in sitting on the ground, because of my stiff leg and my stiffening leg, for it was about this time that my good leg, good in the sense that it was not stiff, began to stiffen. I needed a prop under the ham you understand, and even under the whole length of the leg, the prop of the earth. And while I gazed thus at my stones, revolving interminable martingales all equally defective, and crushing handfuls of sand, so that the sand ran through my fingers and fell back on the strand, yes, while thus I lulled my mind and part of my body, one day suddenly it dawned on me, dimly, that I might perhaps achieve my purpose without increasing the number of my pockets, or reducing the number of my stones, but simply by sacrificing the principle of trim.

The meaning of this illumination, which suddenly began to sing within me, like a verse of Isaiah, or of Jeremiah, I did not penetrate at once, and notably the word trim, which I had never met with, in this sense, long remained obscure. Finally I seemed to grasp that this word trim could not here mean anything else, anything better, than the distribution of the sixteen stones in four groups of four, one group in each pocket, and that it was my refusal to consider any distribution other than this that had vitiated my calculations until then and rendered the problem literally insoluble. And it was on the basis of this interpretation, whether right or wrong, that I finally reached a solution, inelegant assuredly, but sound, sound.

Now I am willing to believe, indeed I firmly believe, that other solutions to this problem might have been found and indeed may still be found, no less sound, but much more elegant than the one I shall now describe, if I can.  And I believe too that had I been a little more insistent, a little more resistant, I could have found them myself.  But I was tired, but I was tired, and I contented myself ingloriously with the first solution that was a solution, to this problem.  But not to go over the heartbreaking stages through which I passed before I came to it here it is, in all its hideousness.

All (all!) that was necessary was to put, for example, six stones in the right pocket of my greatcoat, or supply pocket, five in the right pocket of my trousers, and five in the left pocket of my trousers, that makes the lot, twice five ten plus six sixteen, and none, for none remained, in the left pocket of my greatcoat, which for the time being remained empty, empty of stones that is, for its usual contents remained, as well as occasional objects. For where do you think I hid my vegetable knife, my silver, my horn and the other things that I have not yet named, perhaps shall never name. Good. Now I can begin to suck. Watch me closely. I take a stone from the right pocket of my greatcoat, suck it, stop sucking it, put it in the left pocket of my greatcoat, the one empty (of stones). I take a second stone from the right pocket of my greatcoat, suck it, put it in the left pocket of my greatcoat. And so on until the right pocket of my greatcoat is empty (apart from its usual and casual contents) and the six stones I have just sucked, one after the other, are all in the left pocket of my greatcoat.

Pausing then, and concentrating, so as not to make a balls of it, I transfer to the right pocket of my greatcoat, in which there are no stones left, the five stones in the right pocket of my trousers, which I replace by the five stones in the left pocket of my trousers, which I replace by the six stones in the left pocket of my greatcoat. At this stage then the left pocket of my greatcoat is again empty of stones, while the right pocket of my greatcoat is again supplied, and in the right way, that is to say with other stones than those I have just sucked. These other stones I then begin to suck, one after the other, and to transfer as I go along to the left pocket of my greatcoat, being absolutely certain, as far as one can be in an affair of this kind, that I am not sucking the same stones as a moment before, but others.

And when the right pocket of my greatcoat is again empty (of stones), and the five I have just sucked are all without exception in the left pocket of my greatcoat, then I proceed to the same redistribution as a moment before, or a similar redistribution, that is to say I transfer to the right pocket of my greatcoat, now again available, the five stones in the right pocket of my trousers, which I replace by the six stones in the left pocket of my trousers, which I replace by the five stones in the left pocket of my greatcoat. And there I am ready to begin again. Do I have to go on? No, for it is clear that after the next series of sucks and transfers, I shall be back where I started, that is with the first six stones back in the supply pocket, the next five in the right pocket of my stinking old trousers and finally the last five in the left pocket of same, and my sixteen stones will have been sucked once at least in impeccable succession, not one sucked twice, not one left unsucked.

It is true that next time I could scarcely hope to suck my stones in the same order as the first time and that the first, seventh and twelfth for example of the first cycle might very well be the sixth, eleventh, and sixteenth respectively of the second, if the worst came to the worst.  But this was a drawback I could not avoid.  And if in the cycles taken together utter confusion was bound to reign, at least within each cycle taken separately I could be easy in my mind, at least as easy as one can be, in a proceeding of this kind.  For in order for each cycle to be identical, as to the succession of stones in my mouth, and God knows I had set my heart on it, the only means were numbered stones or sixteen pockets.  And rather than make twelve more pockets or number my stones, I preferred to make the best of the comparative peace of mind I enjoyed within each cycle taken separately.

For it was not enough to number the stones, but I would have had to remember, every time I put a stone in my mouth, the number I needed and look for it in my pocket.  Which would have put me off stone for ever, in a very short time.  For I would never have been sure of not making a mistake, unless of course I had kept a kind of register, in which to tick off the stones one by one, as I sucked them.  And of this I believed myself incapable.  No, the only perfect solution would have been the sixteen pockets, symmetrically disposed, each one with its stone.  Then I would have needed neither to number nor to think, but merely, as I sucked a given stone, to move on the fifteen others, a delicate business admittedly, but within my power, and to call always on the same pocket when I felt like a suck.  This would have freed me from all anxiety, not only within each cycle taken separately, but also for the sum of all cycles, though they went on forever.

But however imperfect my own solution was, I was pleased at having found it all alone, yes, quite pleased.  And if it was perhaps less sound than I had thought in the first flush of discovery, its inelegance never diminished.  And it was above all inelegant in this, to my mind, that the uneven distribution was painful to me, bodily.  It is true that a kind of equilibrium was reached, at a given moment, in the early stages of each cycle, namely after the third suck and before the fourth, but it did not last long, and the rest of the time I felt the weight of the stones dragging me now to one side, now to the other.  There was something more than a principle I abandoned, when I abandoned the equal distribution, it was a bodily need. But to suck the stones in the way I have described, not haphazard, but with method, was also I think a bodily need. Here then were two incompatible bodily needs, at loggerheads. Such things happen.

But deep down I didn’t give a tinker’s curse about being off my balance, dragged to the right hand and the left, backwards and forwards. And deep down it was all the same to me whether I sucked a different stone each time or always the same stone, until the end of time. For they all tasted exactly the same. And if I had collected sixteen, it was not in order to ballast myself in such and such a way, or to suck them turn about, but simply to have a little store, so as never to be without. But deep down I didn’t give a fiddler’s curse about being without, when they were all gone they would be all gone, I wouldn’t be any the worse off, or hardly any. And the solution to which I rallied in the end was to throw away all the stones but one, which I kept now in one pocket, now in another, and which of course I soon lost, or threw away, or gave away, or swallowed.
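(An aside on the mechanics: Molloy’s final scheme is, in effect, a rotation algorithm on four queues. Here is a minimal sketch, in Python, with pocket names and stone numbering of my own invention, that simulates one full cycle of sucks and redistributions and checks his two claims: every one of the sixteen stones is sucked exactly once, and the pockets return to their starting arrangement.)

    from collections import deque

    # Molloy's loading: six stones in the right pocket of the greatcoat
    # (the supply pocket), five in each trouser pocket, and none in the
    # left pocket of the greatcoat. The names are mine, not Beckett's.
    pockets = {
        "coat_right": deque(range(6)),           # stones 0-5
        "trousers_right": deque(range(6, 11)),   # stones 6-10
        "trousers_left": deque(range(11, 16)),   # stones 11-15
        "coat_left": deque(),                    # empty (of stones)
    }

    sucked = []
    for _ in range(3):  # three batches of sucks: six, then five, then five
        # Suck every stone in the supply pocket, moving each one to the
        # left pocket of the greatcoat as it is finished.
        while pockets["coat_right"]:
            stone = pockets["coat_right"].popleft()
            sucked.append(stone)
            pockets["coat_left"].append(stone)
        # The redistribution: trousers-right refills the supply pocket,
        # trousers-left refills trousers-right, and the just-sucked stones
        # in the coat's left pocket refill trousers-left.
        pockets["coat_right"] = pockets["trousers_right"]
        pockets["trousers_right"] = pockets["trousers_left"]
        pockets["trousers_left"] = pockets["coat_left"]
        pockets["coat_left"] = deque()

    # Not one sucked twice, not one left unsucked:
    assert sorted(sucked) == list(range(16))
    # And back where he started: the first six in the supply pocket,
    # the next five and the last five in the trouser pockets.
    assert list(pockets["coat_right"]) == list(range(6))
    assert list(pockets["trousers_right"]) == list(range(6, 11))
    assert list(pockets["trousers_left"]) == list(range(11, 16))

Note that the check is one of membership only: nothing fixes the order in which the stones within a batch are sucked, which is precisely the drawback Molloy concedes in the passage above.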

This is the parable of the Homo scientificus that I wanted to present. I want to stress two particular things in it. One is the uncanny description of scientific intuition. This is exactly the way Einstein must have struggled to explain the failure of all experiments attempting to demonstrate a motion of the earth relative to the “light-medium,” until he very dimly realized that he had to abandon some “principle of trim,” the principle of absolute time, and this must have come in some such way as here described. People have described intuition in cases where they were able to reconstruct a little of it. Jacques Hadamard, the French mathematician, has written a little book, An Essay on the Psychology of Invention in the Mathematical Field, which is a collection of data on this phenomenon and describes how intuition wells up from completely unfathomable depths, first appears in a peculiar guise, and then suddenly breaks out with lightning clarity. Second, let us look at Molloy’s motivation. He certainly is not motivated by the goal of bettering our physical existence or desire for fame or acclaim. Does he do his work for its own sake, like art and music? He describes in detail how his little game “for its own sake” becomes an obsession beyond all measure of reason. This is not the way you and I do art or music, but it does resemble closely the way the creative artists and composers do it. You don’t have to look at Beethoven to become convinced of that. Look at any child of five who is obsessed with a creative problem and breaks out in anger and frustration at his failures.

This obsessive fixation picks on anything, quite oblivious of its meaningful content of “revealing the truth about nature” or “bettering our physical existence”. It is this quirk of our make-up, this sublimation of other psychic forces, that was delivered by evolution to cave man.

More was here delivered by evolution than had been ordered. It carried us from cave man to space man, and may well carry us to our destruction. And why not? The little earthquake we had the other day should have served all of us as a timely reminder, if any reminding is needed, that we are not here to stay, not as individuals, nor as families, nor as nations, nor as the human race, nor as a planet with life on it. There is uncertainty merely as to the time scale.

The point I wanted to make is this. Man is not only Homo faber, the tool maker. The grand edifice of Science, built through the centuries by the efforts of many people in many nations, gives you the illusion of an immense cathedral, erected in an orderly fashion according to some master plan. However, there never was a master plan. The edifice is a result of channeling our intellectual obsessive forces into the joint program. In spite of this channeling, the progress of Science at all times has been and still is immensely disorderly for the very reason that there can be no master plan.

So, what could we do if we decided that innovative Science is too dangerous? I don’t know, but one thing is certain: it would take a lot of manipulation of man — political, economic, nutritional, genetic — if you tried to control Homo scientificus.

Discussion

Q: How can man with these characteristics resist considering implications? This doesn’t mean solving them — just considering them.

A: I understood the question to mean: if I make a discovery, should I consider the implications and maybe not publish it even if it’s a basic discovery. I think that it is impossible to foretell the implications. I couldn’t agree more that you should consider the implications, say, of the genetic manipulation of mankind. You can’t help it. It’s of the utmost importance. Same with “population zero”. I just don’t consider this as the same thing as doing science, this business of considering the implications. It’s something entirely different, as I explained in answer to Q 2.

Q: It seems to me that many human beings are subject to neurotic obsession. But it’s not clear how we choose problems. It seems to me conceivable that one might choose a problem because somebody tells you that it’s an important problem for science and you can get upset about why the hell can’t I solve it even if you don’t care about the problem.

A: I agree. Science gives the impression of being a magnificent cathedral, an enormous structure — a well constructed thing, a cathedral built by the continuous effort of many generations through many centuries. Of course it isn’t a cathedral because it wasn’t planned. Nobody planned the scientific cathedral. To the student it looks as though it were planned. The student gets three volumes of Feynman lectures, 1,300 pages of a splendid textbook of “Organic Chemistry”, and other textbooks, and says “Aha, 150 years ago they got this far. In the meantime all this was constructed, and now I continue here.” My point is that science is not that at all. Science is primarily playing willfully, and getting obsessed with it, and it is not being told: “Here, add your brick on page 1065 and do it properly or we won’t give you a PhD.” Such a student, if you ask him what he is doing, may possibly answer, “I am building a cathedral.” More likely, he will say, “I am laying bricks,” or even “I am making $4.50 an hour.”

Q: You didn’t say how much society should support science.

A: I didn’t answer it. No. I’m not interested.

Q: Should we not think about the support of science?

A: Oh, I don’t want to think about it. No, it’s a very complicated thing. Obviously the high-energy physicists want ever bigger machines that cost a hundred million, billion, etc., and they say the military spend more and the military say if we stop making war the economy will break down. These are all questions that are not very interesting. To me, anyway.

Q: Can you tell us how your illustrations came into being?

A: We had a party last week and at this party Dick Russell performed these acts while I was reading the story. He didn’t know the story, he just learned of it as it developed. Everybody had a drawing block in front of them and sketched as Dick posed. The old trousers were Dick’s, the coat Vivian Hill’s. The prize winning artists were Felicia Hargreaves from our Art Center, and Vivian Hill. The first paper from which I quoted, on Homo scientificus, some of you may be interested to know, was written by a graduate at Scripps College. She had married a scientist the year before she wrote the paper.

Q: Would you be willing to relax a little bit on your attitude with respect to question 1, namely the question whether science is beneficial? Would you say this depends on how you define beneficial?

A: Sure. If we measure it in terms of energy production or infant mortality then it’s beneficial.

Q: Well, I think it’s very difficult to say what is beneficial.

A: Yes. That’s why I put a “Doubtful” there. I didn’t answer “No”.

Q: Most of the problem with science is that we don’t even know what’s beneficial to society.

A: However, we can hardly evade the fact that the present state of the world leaves much to be desired, and that this is largely a result of the efforts of people like Molloy.

Q: Then one might talk about whether the earlier stage of the world was an awful lot better.

A: Sure. Of course you can. You can. Please do. I don’t feel like arguing.

Q: Do you think it is common that scientists proceed in a way that is neurotic? Don’t you think that occasionally they do something just because it’s interesting?

A: I didn’t mean to use the term neurotic in a derogatory way. Our culture is a product of our neuroses — I mean a product of the diversion of psychic forces from their original function into other directions.

Q: How could you do your research with such a pessimistic attitude? Did you have the same attitude when you started out?

A: I can’t answer that — how I was 40 years ago. If you call it pessimistic, I’m a very cheerful pessimist. I think there’s something to be said for the pessimist. It merely means not glossing over some basic facts.

Q: Your picture of a scientist is very personal, so your answer to the first question, “Is science beneficial?”, would be “Yes, it’s beneficial to the doer.” Molloy’s pebbles were the same to him as special relativity was to Einstein and the hydrogen bomb to Edward Teller. The difference is that Molloy wasn’t going to hurt anybody. Now, if you say that science is beneficial to the scientist because he gets satisfaction from it, and the scientist isn’t thinking about the implications, does this imply that somebody else should think about the implications and say, “Molloy, you’re OK; Einstein, you’re doubtful; Teller, you’re out”? Who should make these decisions?

A: My point was that that’s quite impossible. Molloy and Einstein are identical. Einstein’s worrying about the Michelson-Morley experiment was just as esoteric as shuffling around the sucking stones. I mean that nothing could be more impersonal, impractical, more remote from any social implications than what Einstein did in 1905.

To him, anyhow. Later on, when the atomic arms race escalated one more round, and Einstein considered that he had been involved in starting the atom bomb, he regretted that he had ever entered science, etc., but I don’t think he really had thought about how deeply science is part of human nature. I think discoveries are all potentially equally harmful — like the circulating of the sucking stones. Maybe Molloy is discovering a principle of permutation or number theory — God only knows the implications of this. Didn’t the pictures look like some of the metal organic covalent bond shifting there? Didn’t Harry Gray get an inspiration from it for something that’s going to be utilized in some horrible contraption in a few years?

Q: Can you draw a distinction in terms of creativity between Einstein thinking up ideas and Edward Teller making bombs — one being playful and the other being purposeful?

A: I don’t have to make this distinction because, if I want to control the bad effects of science, I have to stop Einstein. Why should I try to make a distinction between him and Teller? Teller is an excellent scientist. Although I don’t know what he specifically did with the H-bomb, he certainly contributed a great deal to quantum mechanics and chemical physics. So then the question is, should the scientist stop publishing his science so that the bad appliers won’t misuse it? Have a private club. I had a slide of that which got lost. I found it at MIT. A poster with a quotation from Einstein saying how sorry he was that he had ever, etc., and that if he could start life again he would just become a lighthouse keeper or something like that. Underneath on this poster there was an invitation from somebody saying: “Will you join us in a commune of scientists who will talk among ourselves and not publish anything — just do it by ourselves?” And somebody had scrawled on the side: “Commie”. The idea of doing science in a commune and not publishing it seems absurd to me. Why should we get together to follow these pursuits which are not really pleasurable? Molloy had a certain relief and was satisfied that he had found a solution, but the main thing for him was that he was easy in his mind. As easy as one can be in a matter of this kind; suck them turn and turn about. I mean, he had to relieve the uneasiness of his mind. That’s where the neurosis comes in — the obsession.

Q: I’ve been uneasy without being able to articulate it very well, because it seems to me that you say something about the personal obsessions of scientists and the irrelevance of the goal or consideration of a moral principle in their work, and I think it’s probably only a half-truth. Einstein was a deeply moral man, very concerned. I have a feeling that scientists in their work are buoyed and reinforced by the belief that the answer to question 1 is “Yes”.

A: Yes, of course you can be buoyed by the feeling you’ve done society good; you can be buoyed by the feeling that you’re acquiring fame and prizes. My point is this: prior to these reinforcements, and more fundamental, even the lonely, decrepit beachcomber cannot avoid being a scientist, in an obsessive way (exactly the way Einstein was), although both the accessory components are missing. As for Einstein as a young patent clerk in Berne, in 1905, I doubt that he then made a connection between his physics and his responsibilities to society. That’s the point I wanted to make. Thank you for making me point it out again. I mean these other components are there, of course, and if you read Jim Watson’s book The Double Helix, you might think that getting a Nobel Prize is everything. However, this would be a grievous misconception.

Q: How many scientists on a desert island would do science for their own benefit?

A: Even Molloy would. But not for their benefit. He doesn’t do it for his benefit. He does it compulsively. I think we all do. No, I take it back. Maybe not. It’s a difficult question to answer because most of us are so dulled in our sensitivities that we may be quite incapable of any such complicated argument or reasoning, or have the amount of relaxation that this man had. Of course, he had to be able to sit there for hours on the beach and dream up interminable martingales. If you put people on a desert island probably quite a few of them would dream up interminable martingales and be satisfied with finding something that works.

Q: I wonder if the one place where this parallel between Molloy and other scientists doesn’t hold is that Molloy doesn’t seem to have any intentions of communicating his results to anyone else, so I would ask you, do you think Einstein would have done his work if he had had no intention of publishing the results? And a personal question: Would you have done science if you had thought no one would be interested in the results?

A: No, certainly not. In this first essay, from which I quoted, by the Scripps girl, it said that they are playing animals. Scientists are playing animals. They not only play alone but they also play together, and if they are not too morose, they actually prefer to play together. And most scientists do prefer to play together. And in the case of Einstein of course, he would never have heard of Michelson and Morley if he had not been in communication. No, a great joy of the business is communication. All I wanted to point out is the obsessive component of the immediate act of doing science. The channeling of this component toward the erection of a large structure, the institutionalization of it, that is a creation by society, and that is something different. That is not a primary characteristic of Homo scientificus.

Link: On Testicles

Soccer fans call it brave goalkeeping, the act of springing into a star shape in front of an attacker who is about to kick the ball as hard as possible toward the goal. As I shuffled from the field, bent forward, eyes watering, waiting for the excruciating whack of pain in my crotch to metamorphose into a gut-wrenching ache, I thought only stupid goalkeeping. But after the fourth customary slap on the back from a teammate chortling, “Hope you never wanted kids, pal,” I thought only stupid, stupid testicles.

Natural selection has sculpted the mammalian forelimb into horses’ front legs, dolphins’ fins, bats’ wings, and my soccer ball-catching hands. Why, on the path from the primordial soup to us curious hairless apes, did evolution house the essential male reproductive organs in an exposed sac? It’s like a bank deciding against a vault and keeping its money in a tent on the sidewalk.

Some of you may be thinking that there is a simple answer: temperature. This arrangement evolved to keep them cool. I thought so, too, and assumed that a quick glimpse at the scientific literature would reveal the biological reasons and I’d move on. But what I found was that the small band of scientists who have dedicated their professional time to pondering the scrotum’s existence are starkly divided over this so-called cooling hypothesis.

Reams of data show that scrotal sperm factories, including our own, work best a few degrees below core body temperature. The problem is, this doesn’t prove cooling was the reason that testicles originally descended. It’s a straight-up chicken-and-egg situation—did testicles leave the kitchen because they couldn’t stand the heat, or do they work best in the cold because they had to leave the body?

Vital organs that work optimally at 98.5 degrees Fahrenheit get bony protection: My brain and liver are shielded by skull and ribs, and my girlfriend’s ovaries are defended by her pelvis. Forgoing skeletal protection is dangerous. Each year, thousands of men go to the hospital with ruptured testes or torsions caused by having this essential organ suspended chandelierlike on a flexible twine of tubes and cords. But having exposed testicles as an adult is not even the most dangerous aspect of our reproductive organs’ arrangement.

The developmental journey to the scrotum is treacherous. At eight weeks of development, a human fetus has two unisex structures that will become either testicles or ovaries. In girls, they don’t stray far from this starting point up by the kidneys. But in boys, the nascent gonads make a seven-week voyage across the abdomen on a pulley system of muscles and ligaments. They then sit for a few weeks before coordinated waves of muscular contractions force them out through the inguinal canal.

The complexity of this journey means that it frequently goes wrong. About 3 percent of male infants are born with undescended testicles, and although often this eventually self-corrects, it persists in 1 percent of 1-year-old boys and typically leads to infertility.

Excavating the inguinal canal also introduces a significant weakness in the abdominal wall, a passage through which internal organs can slip. In the United States, more than 600,000 surgeries are performed annually to repair inguinal hernias—the vast majority of them in men.

This increased risk of hernias and sterilizing mishaps seems hardly in keeping with the idea of evolution as survival of the fittest. Natural selection’s tagline reflects the importance of attributes that help keep creatures alive—not dying being an essential part of evolutionary success. How can a trait such as scrotality (to use the scientific term for possessing a scrotum), with all the obvious handicaps it confers, fit into this framework? Its story is certainly going to be less straightforward than the evolution of a cheetah’s leg muscles. Most investigators have tended to think that the advantages of this curious anatomical arrangement must come in the shape of improved fertility. But this is far from proven.

When considering any evolved characteristic, good first questions are who has it and who had it first. In birds, reptiles, fish, and amphibians, male gonads are internal. The scrotum is a curiosity unique to mammals. A recent testicle’s-eye view of the mammalian family tree revealed that the monumental descent occurred pretty early in mammalian evolution. And what’s more, the scrotum was so important that it evolved twice.

The first mammals lived about 220 million years ago. The most primitive living mammals are the duck-billed platypus and its ilk—creatures with key mammalian features such as warm blood, fur, and lactation (the platypus kind of sweats milk rather than having tidy nipples), although they still lay eggs like the ancestors they share with reptiles. Platypus testicles, and almost certainly those of all early mammals, sit right where they start life, safely tucked by the kidneys.

About 70 million years later, marsupials evolved, and it is on this branch of the family tree that we find the first owner of a scrotum. Nearly all marsupials today have scrotums, and so logically the common ancestor of kangaroos, koalas, and Tasmanian devils had the first. Marsupials evolved their scrotum independently from us placental mammals, which is known thanks to a host of technical reasons, the most convincing of which is that it’s back-to-front. Marsupials’ testicles hang in front of their penises.

Fifty million years after the marsupial split is the major fork in the mammalian tree, scrotally speaking. Take a left and you will encounter elephants, mammoths, aardvarks, manatees, and groups of African shrew- and mole-like creatures. But you will never see a scrotum—all of these placental animals, like platypuses, retain their gonads close to their kidneys.

However, take a right, to the human side of the tree, at this 100 million-year-old juncture and you’ll find descended testicles everywhere. Whatever they’re for, scrotums bounce along between the hind limbs of cats, dogs, horses, bears, camels, sheep, and pigs. And, of course, we and all our primate brethren have them. This means that at the base of this branch is the second mammal to independently concoct scrotality—the one to whom we owe thanks for our dangling parts being, surely correctly, behind the penis.

Between these branches, however, is where it gets interesting, for there are numerous groups, our descended but ascrotal cousins, whose testes drop down away from the kidneys but don’t exit the abdomen. Almost certainly, these animals evolved from ancestors whose testes were external, which means at some point they backtracked on scrotality, evolving anew gonads inside the abdomen. They are a ragtag bunch including hedgehogs, moles, rhinos and tapirs, hippopotamuses, dolphins and whales, some seals and walruses, and scaly anteaters.

For mammals that returned to the water, tucking everything back up inside seems only sensible; a dangling scrotum isn’t hydrodynamic and would be an easy snack for fish attacking from below. I say snack, but the world record-holders, right whales, have testicles that tip the scales at more than 1,000 pounds apiece. The trickier question, which may well be essential for understanding its function, is why did the scrotal sac lose its magic for terrestrial hedgehogs, rhinos, and scaly anteaters?

The scientific search to explain the scrotum’s raison d’être began in England in the 1890s at Cambridge University. Joseph Griffiths, using terriers as his unfortunate subjects, pushed their testicles back into their abdomens and sutured them there. As little as a week later, he found that the testes had degenerated, the tubules where sperm production occurs had constricted, and sperm were virtually absent. He put this down to the higher temperature of the abdomen, and the cooling hypothesis was born.

In the 1920s, a time when Darwin’s ideas were rapidly spreading, Carl Moore at the University of Chicago argued that after mammals had transitioned from cold- to warm-blooded, keeping the body in the mid-to-high 90 degrees must have severely hampered sperm production, and the first males to cool things off with a scrotum became the more successful breeders.

Heat disrupts sperm production so effectively that biology textbooks and medical tracts alike give cooling as the reason for the scrotum. The problem is many biologists who seriously think about animal evolution are unhappy with this. Opponents say that testicles function optimally at cooler temperatures because they evolved this trait after their exile.

If mammals became warm-blooded 220 million or so years ago, it would mean mammals carried their gonads internally for more than 100 million years before the scrotum made its bow. The two events were hardly tightly coupled.

The hypothesis’ biggest problem, though, is all the sacless branches on the family tree. Regardless of their testicular arrangements, all mammals have elevated core temperatures. If numerous mammals lack a scrotum, there is nothing fundamentally incompatible with making sperm at high temperatures. Elephants have a higher core temperature than gorillas and most marsupials. And beyond mammals it gets worse: Birds, the only other warm-blooded animals, have internal testes despite having core temperatures that in some species run to 108 degrees.

Any argument for why cooling would be better for sperm has to say exactly why. The idea that a little less heat might keep sperm DNA from mutating has been proposed, and recently it’s been suggested that keeping sperm cool may allow the warmth of a vagina to act as an extra activating signal. But these ideas still fail to surmount the main objections to the cooling hypothesis.

Michael Bedford of Cornell Medical College is no fan of the cooling hypothesis applied to testicles, but he does wonder whether having a cooled epididymis, the tube where sperm sit after leaving their testicular birthplace, might be important. (Sperm are impotent on exiting the testes and need a few final modifications while in the epididymis.) Bedford has noted that some animals with abdominal testes have extended their epididymis to just below the skin, and that some furry scrotums have a bald patch for heat loss directly above this storage tube. But if having a cool epididymis is the main goal, why throw the testicles out with it?

Link: The New Dark Ages, Part I: From Religion to Ethnic Nationalism and Back Again

European historians have long eschewed the term “Dark Ages.” Few of them still use it, and many of them shiver when they encounter it in popular culture. Scholars rightly point out that the term, popularly understood as connoting a time of death, ignorance, stasis, and low quality of life, is prejudiced and misleading.

And so my apologies to them as I drag this troublesome phrase to center stage yet again, offering a new variation on its meaning.

In this essay I am taking the liberty of modifying the term “Dark Ages” and applying it to a modern as well as a historical context. I use it to refer to a general culture of fundamentalism permeating societies, old and new. By “Dark Age” I mean to describe any large scale effort to dim human understanding by submerging it under a blanket of fundamentalist dogma. And far from Europe of 1,500 years ago, my main purpose is to talk about far more recent matters around the world.

Life is, of course, a multi-faceted affair. The complex relationships among individuals and between individuals and societies produce a host of economic, cultural, political, and social manifestations. But one of the defining characteristics of the European Dark Ages, as I am now using the term, was the degree to which those multi-faceted aspects of the world were flattened by religious theology and dogma. As the Catholic Church grew in power and spread across Europe from roughly 500-1500, it was able, at least to some degree, to sublimate political, cultural, social, and economic understanding and action under its dogmatic authority. In many realms of life far beyond religion, forms of knowledge and action were subject to theological sanction.

Those who take pride in Western civilization, or even those like myself who don’t necessarily, but who simply acknowledge its various achievements alongside its various shortcomings, recognize a series of factors that led to those achievements. Some of those factors, such as colonialism, are horrific. Some, like the growth of secular thought, are more admirable.

Not that secular thought in and of itself is intrinsically laudable; maybe it is, though I don’t think so. But rather, that the rise of secular thought enabled Europe, over the course of centuries, to throw off its own self-imposed yoke of religious absolutism. And that freeing itself in this way was one of the factors spurring Europe’s many impressive achievements over the last half-millennium.

Most denizens of what was once known as the Christian world, including various colonial offshoots such as the United States and Australia, now accept and even take for granted a multi-faceted conception of life and human interaction. For most of them, including many of the religious ones, it is a given that moving away from a world view flattened by religion, at the very least, facilitated the development of things like science and the modern explosion of wealth. Of course the move from a medieval to a modern mind set also unleashed a variety of problems; but on balance, relatively few Westerners would willingly return to any version of medieval Christian theocracy.1

This confidence in a modern vision of human life and society, which acknowledges that religion, like science, politics, economics, culture and countless other facets, each have a role to play and that none should squeeze out the rest, can lead Westerners to look down their noses at those societies which are currently flattened by religion, or struggling to avoid it. Too many Westerners, either with sneers or pity, look askance at other parts of the world where such battles are currently being waged.

Fundamentalist Muslims in a number of countries are literally fighting to assert a theocratic vision over hundreds of millions of people. And though much smaller numerically and not plagued by civil war, Israel likewise suffers from a deep divide between ultra-Orthodox Jews who want religion to dominate most if not all aspects of Israeli life, and those Jews, both religious and not, who embrace a more secular vision for their state in which those divisions will continue to be respected.

When contrasting the West to places mired in such struggle, it becomes oh, so easy for those of us in the United States, Europe, and other parts of the former Christian world to smugly assert that we moved beyond such theocratic perils some time ago and we simply shan’t be returning. It is tempting for some to see history as an irregular but fairly steady linear advancement, progressing forward. This allows people to frame the secular West as winning some kind of race and as superior to, say, the Middle East, which many suppose is “still” struggling to achieve secularism.

But to think that the West has permanently moved past such Dark Ages, never again to return, is just as big a mistake as failing to realize that some of the societies now struggling to avoid a religious Dark Age have in fact been very secular in the recent past.

Such assumptions are not only mistaken but dangerous. The reality is that there are no guarantees about history except that it is dynamic. Things always change. And change does not occur in some neat, linear pattern, which is precisely why you cannot predict historical change.

Link: Crimes Against Humanities

Leon Wieseltier responds to Steven Pinker’s essay on scientism.

The question of the place of science in knowledge, and in society, and in life, is not a scientific question. Science confers no special authority, it confers no authority at all, for the attempt to answer a nonscientific question. It is not for science to say whether science belongs in morality and politics and art. Those are philosophical matters, and science is not philosophy, even if philosophy has since its beginnings been receptive to science. Nor does science confer any license to extend its categories and its methods beyond its own realms, whose contours are of course a matter of debate. The credibility of physicists and biologists and economists on the subject of the meaning of life—what used to be called the ultimate verities, secularly or religiously constructed—cannot be owed to their work in physics and biology and economics, however distinguished it is. The extrapolation of larger ideas about life from the procedures and the conclusions of various sciences is quite common, but it is not in itself justified; and its justification cannot be made on internally scientific grounds, at least if the intellectual situation is not to be rigged. Science does come with a worldview, but there remains the question of whether it can suffice for the entirety of a human worldview. To have a worldview, Musil once remarked, you must have a view of the world. That is, of the whole of the world. But the reach of the scientific standpoint may not be as considerable or as comprehensive as some of its defenders maintain.

None of these strictures about the limitations of science, about its position in nonscientific or extra-scientific contexts, in any way impugns the integrity or the legitimacy or the necessity or the beauty of science. Science is a regular source of awe and betterment. No humanist in his right mind would believe otherwise. Science is plainly owed this much support, this much reverence. This much—but no more. In recent years, however, this much has been too little for certain scientists and certain scientizers, or propagandists for science as a sufficient approach to the natural universe and the human universe. In a world increasingly organized around the dazzling new breakthroughs in science and technology, they feel oddly besieged.

They claim that science is under attack, and from two sides. The first is the fundamentalist strain of Christianity, which does indeed deny the truth of certain proven scientific findings and more generally prefers the subjective gains of personal rapture to the objective gains of scientific method. Against this line of attack, even those who are skeptical about the scientizing enterprise must stand with the scientists, though it is important to point out that the errors of religious fundamentalism must not be mistaken for the errors of religion. Too many of the defenders of science, and the noisy “new atheists,” shabbily believe that they can refute religion by pointing to its more outlandish manifestations. Only a small minority of believers in any of the scriptural religions, for example, have ever taken scripture literally. When they read, most believers, like most nonbelievers, interpret. When the Bible declares that the world was created in seven days, it broaches the question of what a day might mean. When the Bible declares that God has an arm and a nose, it broaches the question of what an arm and a nose might mean. Since the universe is 13.8 billion years old, a day cannot mean 24 hours, at least not for the intellectually serious believer; and if God exists, which is for philosophy to determine, this arm and this nose cannot refer to God, because that would be stupid.

Interpretation is what ensues when a literal meaning conflicts with what is known to be true from other sources of knowledge. As the ancient rabbis taught, accept the truth from whoever utters it. Religious people, or many of them, are not idiots. They have always availed themselves of many sources of knowledge. They know about philosophical argument and figurative language. Medieval and modern religious thinking often relied upon the science of its day. Rationalist currents flourished alongside anti-rationalist currents, and sometimes became the theological norm. What was Jewish and Christian and Muslim theology without Aristotle? When a dissonance was experienced, the dissonance was honestly explored. So science must be defended against nonsense, but not every disagreement with science, or with the scientific worldview, is nonsense. The alternative to obscurantism is not that science be all there is.

The second line of attack to which the scientizers claim to have fallen victim comes from the humanities. This is a little startling, since it is the humanities that are declining in America, not least as a result of the exaggerated glamour of science. But some scientists and some scientizers feel prickly and self-pitying about the humanistic insistence that there is more to the world than science can disclose. It is not enough for them that the humanities recognize and respect the sciences; they need the humanities to submit to the sciences, and be subsumed by them. The idea of the autonomy of the humanities, the notion that thought, action, experience, and art exceed the confines of scientific understanding, fills them with a profound anxiety. It throws their totalizing mentality into crisis. And so they respond with a strange mixture of defensiveness and aggression. As people used to say about the Soviet Union, they expand because they feel encircled.

A few weeks ago this magazine published a small masterpiece of scientizing apologetics by Steven Pinker, called “Science Is Not Your Enemy.” Pinker utters all kinds of sentimental declarations about the humanities, which “are indispensable to a civilized democracy.” Nobody wants to set himself against sensibility, which is anyway a feature of scientific work, too. Pinker ranges over a wide variety of thinkers and disciplines, scientific and humanistic, and he gives the impression of being a tolerant and cultivated man, which no doubt he is. But the diversity of his analysis stays at the surface. His interest in many things is finally an interest in one thing. He is a foxy hedgehog. His essay, a defense of “scientism,” is a long exercise in assimilating humanistic inquiries into scientific ones. By the time Pinker is finished, the humanities are the handmaiden of the sciences, and dependent upon the sciences for their advance and even their survival.

Pinker tiresomely rehearses the familiar triumphalism of science over religion: “the findings of science entail that the belief systems of all the world’s traditional religions and cultures … are factually mistaken.” So they are, there on the page; but most of the belief systems of all the world’s traditional religions and cultures have evolved in their factual understandings by means of intellectually responsible exegesis that takes the progress of science into account; and most of the belief systems of all the world’s traditional religions and cultures are not primarily traditions of fact but traditions of value; and the relationship of fact to value in those traditions is complicated enough to enable the values often to survive the facts, as they do also in Aeschylus and Plato and Ovid and Dante and Montaigne and Shakespeare. Is the beauty of ancient art nullified by the falsity of the cosmological ideas that inspired it? I would sooner bless the falsity for the beauty. Factual obsolescence is not philosophical or moral or cultural or spiritual obsolescence. Like many sophisticated people, Pinker is quite content with a collapse of sophistication in the discussion of religion.

Yet the purpose of Pinker’s essay is not chiefly to denounce religion. It is to praise scientism. Rejecting the various definitions of scientism—“it is not an imperialistic drive to occupy the humanities,” it is not “reductionism,” it is not “naïve”—Pinker proposes his own characterization of scientism, which he defends as an attempt “to export to the rest of intellectual life” the two ideals that in his view are the hallmarks of science. The first of those ideals is that “the world is intelligible.” The second of those ideals is that “the acquisition of knowledge is hard.” Intelligibility and difficulty, the exclusive teachings of science? This is either ignorant or tendentious. Plato believed in the intelligibility of the world, and so did Dante, and so did Maimonides and Aquinas and Al-Farabi, and so did Poussin and Bach and Goethe and Austen and Tolstoy and Proust. They all share Pinker’s denial of the opacity of the world, of its impermeability to the mind. They all join in his desire to “explain a complex happening in terms of deeper principles.” They all concur with him that “in making sense of our world, there should be few occasions in which we are forced to concede ‘It just is’ or ‘It’s magic’ or ‘Because I said so.’ ” But of course Pinker is not referring to their ideals of intelligibility. The ideal that he has in mind is a very particular one. It is the ideal of scientific intelligibility, which he disguises, by means of an inoffensive general formulation, as the whole of intelligibility itself.

If Pinker believes that scientific clarity is the only clarity there is, he should make the argument for such a belief. He should also acknowledge its narrowness (though within the realm of science it is very wide), and its straitening effect upon the investigation of human affairs. Instead he simply conflates scientific knowledge with knowledge as such. In his view, anybody who has studied any phenomena that are studied by science has been a scientist. It does not matter that they approached the phenomena with different methods and different vocabularies. If they were interested in the mind, then they were early versions of brain scientists. If they investigated human nature, then they were social psychologists or behavioral economists avant la lettre. Pinker’s essay opens with the absurd, but immensely revealing, contention that Spinoza, Locke, Hume, Rousseau, Kant, and Smith were scientists. It is true that once upon a time a self-respecting intellectual had to be scientifically literate, or even attempt a modest contribution to the study of the natural world. It is also true that Kant, to choose but one of Pinker’s heroes of science, made some astronomical discoveries in his early work; but Kant’s significant contributions to our understanding of mind and morality were plainly philosophical, and philosophy is not, and was certainly not for Kant, a science. Perhaps one can be a scientist without being aware that one is a scientist. What else could these thinkers have been, for Pinker? If they contributed to knowledge, then they must have been scientists, because what other type of knowledge is there? For all its geniality, Pinker’s translation of nonscientific thinking into science is no less strident a constriction than, say, Carnap’s colossally parochial dictum that “there is no question whose answer is in principle unattainable by science.” His ravenous intellectual appetite notwithstanding, Pinker is finally in the same reductionist racket. (The R-word!) He sees many locks but only one key.

The translation of nonscientific discourse into scientific discourse is the central objective of scientism. It is also the source of its intellectual perfunctoriness. Imagine a scientific explanation of a painting—a breakdown of Chardin’s cherries into the pigments that comprise them, and a chemical analysis of how their admixtures produce the subtle and plangent tonalities for which they are celebrated. Such an analysis will explain everything except what most needs explaining: the quality of beauty that is the reason for our contemplation of the painting. Nor can the new “vision science” that Pinker champions give a satisfactory account of aesthetic charisma. The inadequacy of a scientistic explanation does not mean that beauty is therefore a “mystery” or anything similarly occult. It means only that other explanations must be sought, in formal and iconographical and emotional and philosophical terms.

Link: Philosophy from the Preposterous Universe

Sean Carroll is the uber-chillin’ philosophical physicist who investigates how the preposterous universe works at a deep level, who thinks spats between physics and philosophy are silly, who thinks a wise philosopher will always be willing to learn from discoveries of science, who asks how we are to live if there is no God, who is comfortable with naturalism and physicalism, who thinks emergentism central, that freewill is a crucial part of our best higher-level vocabulary, that there aren’t multiple levels of reality, which is quantum based not relativity based, is a cheerful realist, disagrees with Tim Maudlin about wave functions and Craig Callender about multiverses, worries about pseudo-scientific ideas and that the notion of ‘domains of applicability’ is lamentably under-appreciated. Stellar!

3:AM: You’re a physicist with philosophical interests and skill. How did this begin?

Sean Carroll: My own interests in physics and philosophy certainly stem from a common origin – I’m curious about how the world works at a deep level. I got interested in physics at a fairly young age, reading books from the local public library about relativity and particle physics. I didn’t discover philosophy in any serious way until I went to college. It was a good Catholic school (Villanova), at which every arts & sciences major was required to take three semesters of philosophy (as well as three semesters of religious studies, which could be fairly philosophical if you took the right courses). I really enjoyed it and ended up getting a philosophy minor. As a grad student in astrophysics at Harvard, I sat in on courses with John Rawls and Robert Nozick. Rawls in particular was a great person to talk to, although we almost never discussed philosophy because he had so many questions about physics and cosmology.

3:AM: I thought we’d start by getting your overview of the situation as you see it regarding the relationship between physics and philosophy. There have been some high profile and rather bad tempered disagreements recently between the two camps – I’m thinking of the Krauss vs Albert spat, which led to an invitation for Albert to share a platform with Krauss at an event being pulled, and Hawking and Mlodinow, who start off their book ‘The Grand Design’ by announcing the death of philosophy – so I wondered if there were any general points that such cases helped illustrate for you about the relationship; in particular, whether there is some truth in the thought that physics has such an elevated status in the general culture (as reflected in both popular culture, e.g. The Big Bang Theory, and funding, e.g. it gets LHC machines built) that it feels itself impervious to criticism?

SC: From inside physics, it hardly seems like we are impervious to criticism! Funding is being cut, our ability to do big projects is running up against problems of finance and international cooperation, and it’s a struggle to explain the importance of increasingly abstract basic research. Much of this feeling is a matter of historical context, of course; fifty years ago physicists were at the top of the heap, a position that is increasingly occupied by biologists (or maybe economists?). But anyone paying attention can tell that there is still immense public interest in discoveries like dark energy and the Higgs boson, and a great deal of respect for physics as a profession.

The public spat between physics and philosophy is just silly, more a matter of selling books or being lazy than any principled intellectual position. Most physicists know very little about philosophy, which is hardly surprising; most experts in any one academic field don’t know very much about many other fields. This ignorance manifests itself in a couple of ways. First, a lot of scientists are quite comfortable with simplistic philosophy of science. This usually doesn’t matter, but there are cases where good philosophy has something to offer, and scientists rarely put in the work necessary to understand what that good philosophy has to say. Second, scientists tend to think of philosophy as a service discipline – what good does it do for my practice of science? The answer is almost always “no good at all,” which they then translate into thinking that philosophy has no real purpose. The truth is that almost all scientific work can proceed quite happily without philosophy – you can be very good at driving a car without knowing how an engine works. But when it’s important, philosophy is very important indeed.

Very few philosophers, by contrast, are going to accuse science of being worthless. Nevertheless, it’s no surprise that there are problems of appreciation and understanding flowing in that direction as well. The only remedy, if one is interested in finding one, is constant interaction and communication.

My own default position is that respectable people in other academic fields probably have something interesting to say, even if I don’t immediately understand it. Not always true, of course – there are pockets of nonsense within every discipline. But the less I understand about the basics of some field, the less likely I am to start declaring it to be useless and antiquated.

3:AM: I guess the key issue from the philosophers about the two cases I mentioned above was that the scientists were discussing philosophical issues but weren’t doing it well. Krauss gave the impression that he was answering the philosophical question ‘why is there something rather than nothing?’ but he wasn’t, was he? A fall-back position seems to have been taken – I noticed something like this came up in your ‘Moving Naturalism Forward’ conference – that says that the issue he was addressing was a better one than the philosophers’! So what do you think – does physics say anything helpful about the philosopher’s question – which I take to be a logical objection to the idea that something can create itself out of nothing pace Aquinas – and if it doesn’t does that mean that we should start wondering about the relationship of physics to logic?

SC: I’m not sure if one version of the “something from nothing” question is better than any other version, and I’m not even sure there is a single thing we would agree on to call “the philosophers’ question”! But I do know that disentangling these kinds of issues – taking an ill-posed question that nevertheless is based on something real and interesting and figuring out the ways in which it might be translated into something meaningful – is exactly where good philosophy can be helpful. There’s no question that a philosopher who has thought about this issue will have an enormously more nuanced and comprehensive understanding of what it means than your typical working cosmologist.

On the other hand, there’s also very little question that any serious philosophical approach to the question needs to be informed by the best physics to which we have access. The idea that there is a “logical” objection to the creation of something from nothing clearly hinges crucially on one’s conception of what “something” and “nothing” are, and that’s something (as it were) that physics can usefully inform. Not that physics can simply come in and definitively answer it – as per usual, there will be different possible definitions, with correspondingly different answers to the original question.

There’s an important point here worth emphasizing. Science has an enormous advantage over other disciplines when it comes to making progress: namely, the direct confrontation with data forces scientists to be more imaginative (and flexible) than they might otherwise bother to be. As a result, scientists often end up with theories that are extremely surprising from the point of view of everyday intuition. A philosopher might come up with a seemingly valid a priori argument for some conclusion, only to have that conclusion overthrown by later scientific advances. In retrospect, we will see that there was something wrong about the original argument. But the point is that seeing such wrongness can be really hard if all we have to lean on is our ability to reason. Science has data in addition to reason, which is the best cure for sloppy thinking. So in principle it might be possible for a very rigorous metaphysician to be so careful that everything they say is both true and useful; in practice, we human beings are not so smart, and a wise philosopher will always be willing to learn things from the discoveries of science.

3:AM: Your ‘Moving Naturalism Forward’ conference brought together a super-powered bunch of naturalist philosophers and scientists to discuss a naturalistic world view. Before discussing aspects of it, can you say what made you think this would be a good idea for a conference? Were there highlights for you?

SC: In the contemporary intellectual climate, especially in the U.S., there has been a great deal of argumentation between atheists/naturalists on one hand and religious believers on the other. Which is fine as far as it goes, but within the set of folks who are already comfortable with naturalism, it doesn’t really help answer all of the crucially important questions that the naturalist position bequeaths to us. (Free will, emergence, meaning, morality – you can easily think of them yourself.) And we disagree in serious/interesting ways about the answers to these questions!

So I thought it would be useful to bring together a group of people from a variety of disciplines who already agreed on the basic tenets of naturalism, and have a wide-ranging discussion that didn’t involve bashing religion or defending ourselves against it. In some sense, “Does God exist?” is the easy question; the hard question is, “Given that God isn’t here to give us instructions, how are we going to live our lives?”

I think most of the participants would agree that we didn’t answer our questions in any definitive way, but it was incredibly useful to hear the variety of thoughtful opinions among a group of smart people who agree on the basic ontology. We wanted to keep the discussions small and informal, with formal presentations limited to an absolute minimum, which is without question the best kind of workshop you can have. But we also made a bit more effort than usual in recording the sessions – hiring a videographer who knew what he was doing and had worked with similar groups before, making sure there were enough cameras and microphones to get good-quality recordings. So even though the group itself was small, the results are available to anyone who is interested.

Link: Free Will, Determinism, Quantum Theory, and Statistical Fluctuations: A Physicist's Take

Any attempt to link this discussion to moral, ethical or legal issues, as is often done, is pure nonsense. The fact that it is possible to say that a criminal was driven to kill because of the way Newton’s laws acted on the molecules of his body has no bearing either on the appropriateness of punishment or on moral condemnation. It is in accordance with those same Newtonian laws that putting criminals in jail reduces murders, and in accordance with those same laws that society as a whole functions, including its moral structure, which in turn determines behavior. There is no contradiction between saying that a stone flew into the sky because a force pushed it and saying that it did so because a volcano exploded. In the same manner, there is no contradiction in saying that we do not commit murder because something is encoded in the decision-making structure of our brain or because we are bound by a moral belief.

Free will has nothing to do with quantum mechanics. We are deeply unpredictable beings, like most macroscopic systems. There is no incompatibility between free will and microscopic determinism. The significance of free will is that behavior is not determined by external constraints, nor by the psychological description of our neural states to which we have access. The idea that free will might consist in the ability to make different choices given identical internal states is an absurdity, as the ideal experiment I have described above shows. The issue has no bearing on questions of a moral or legal nature. Our sense of being free is correct, but it is just a way of saying that we are ignorant of why we make the choices we make.

Ever since Democritus suggested that the world can be seen as the result of the accidental clashing of atoms, the question of free will has disturbed the sleep of naturalists: how can the deterministic dynamics of the atoms be reconciled with man’s freedom to choose? Modern physics has altered the data a bit, and the ensuing confusion requires clarification.

Democritus assumed the movement of atoms to be deterministic: a different future does not happen without a different present. But Epicurus, who in physical matters was a close follower of Democritus, had already perceived a tension between this tight determinism and human freedom, and modified the physics of Democritus by introducing an element of indeterminism at the atomic level.

The new element was called the “clinamen.” The clinamen is a minimal deviation of an atom from its natural rectilinear path, which takes place in a completely random fashion. Lucretius, who presents the Democritean-Epicurean theory in his poem De Rerum Natura (“On the Nature of Things”), notes in poetic words that the deviation from straight motion happens “incerto tempore … incertisque locis” – at an uncertain time and in uncertain places [Liber II, 218].

A very similar oscillation between determinism and indeterminism has occurred in modern physics. Newton’s atomism is deterministic in much the same way as Democritus’s. But at the beginning of the twentieth century, Newton’s equations were replaced by those of quantum theory, which bring back an element of indeterminism quite similar, in fact, to Epicurus’s correction of Democritus’s determinism. At the atomic scale, the motion of the elementary particles is not strictly deterministic.

Can there be a relationship between this atomic-scale quantum indeterminism and human freedom to choose?

The idea has been proposed, and often reappears, but it is not credible, for two reasons. The first is that the indeterminism of quantum mechanics is governed by a rigorously probabilistic dynamics. The equations of quantum mechanics do not determine what will happen, but they strictly determine the probability of what will happen. In other words, they certify that the violation of determinism is strictly random. This goes in exactly the opposite direction from human freedom to choose. If human freedom to choose were reducible to quantum indeterminism, we should have to conclude that human choices are strictly regulated by chance – which is the opposite of the idea of freedom of choice. The indeterminism of quantum mechanics is like tossing a coin to see whether it falls heads or tails, and acting accordingly. This is not what we call freedom to choose.
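In standard notation, the point is sharp (a textbook sketch, not part of the essay itself): for a system in state $|\psi\rangle$, quantum mechanics fixes the probability of each measurement outcome $a_i$ exactly,

$$P(a_i) \;=\; |\langle a_i | \psi \rangle|^2 ,$$

while the state itself evolves deterministically under the Schrödinger equation, $i\hbar \, \partial_t |\psi(t)\rangle = \hat{H} |\psi(t)\rangle$. The theory leaves open only which outcome occurs; the odds themselves are rigidly determined.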

But there is a second, and more important, consideration. If an element of randomness is sufficient to account for free will, there is no need to look for it in quantum uncertainty, because in a complex open system such as a human being there are already many sources of uncertainty entirely independent of quantum mechanics. The microscopic atomic dynamics inside a human being is influenced by countless random events: just consider the fact that it takes place at room temperature, where the thermal motion of the molecules is completely random. The water that fills the cells of our body and our brain is a source of indeterminism simply because it is warm, and this indeterminism is far greater than the quantum one. Add to this the fact that quantum indeterminism has a well-known tendency to disappear extremely fast as soon as one considers macroscopic objects (due to “decoherence”), and it seems clear that trying to tie human freedom to quantum indeterminism is a forlorn hope.
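A standard back-of-the-envelope estimate (Zurek’s, added here for illustration; it is not in the essay) makes the speed of decoherence vivid: for an object of mass $m$ at temperature $T$ held in a superposition of positions a distance $\Delta x$ apart, the decoherence time $\tau_D$ is shorter than the thermal relaxation time $\tau_R$ by

$$\tau_D \;\sim\; \tau_R \left( \frac{\lambda_{\mathrm{dB}}}{\Delta x} \right)^{\!2}, \qquad \lambda_{\mathrm{dB}} = \frac{\hbar}{\sqrt{2 m k_B T}} ,$$

and for anything macroscopic at room temperature the thermal de Broglie wavelength $\lambda_{\mathrm{dB}}$ is so small that $\tau_D$ is, for all practical purposes, zero.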

This brings us back to the starting point. The apparent tension between free will and determinism is not relieved by quantum physics. The argument, however, highlights a flaw in the intuition from which the problem originates. If the macroscopic dynamics is in any case subject to the consequences of microscopic indeterminism, such as the thermal kind, what exactly is the problem of free will?

Clearly the problem is to clarify what it means to be free to choose. Let us approach the core of the problem from the other side: not from physics, but from our freedom. I can decide whether or not to declare some income to the IRS. This is a free choice. What does this mean? First, it means that I am not forced into a choice by external constraints. There is, for example, no law stating that I receive the money only after I have declared it; if there were, I would have no choice. Second, there is no IRS inspector watching me, in which case I would have no choice either. I am free to choose to be honest or dishonest. We make countless choices of this kind, not only ethical ones, but also in the daily management of our lives.

What happens when I choose? I weigh in my thoughts the pros and cons of the choice – all the factors that can determine it. These can be external (if they catch me I’m in trouble), internal (I want to be an honest guy), accidental (right now I’m short of money and these fifty dollars more …), emotional (I just saw a TV show about people who don’t pay taxes and I am disgusted by them), and so on.

There is therefore a first sense of the expression “to be free to choose” which simply refers to the fact that the determining factors are internal rather than external. This creates no conflict with determinism. Here is an example, from Daniel Dennett, to clarify the point. The Rover (the wheeled machine) sent to Mars a few months ago is programmed to move autonomously on Mars: it has a complex navigation system that analyzes its surroundings and decides where to move according to a set of assigned priorities – say, to make longer journeys, in order to explore different regions and send the images to Earth. However, the Rover can end up in a situation where it can no longer move, for example because it has got stuck between two boulders. Or the scientists at the control center on Earth may decide not to let the Rover’s program decide for itself, and instead intervene and command the Rover to turn back – for example, because they have independent observations of a dust storm approaching. In either case, we can say that the Rover is “no longer free” to go where it wants: it is stuck between two rocks, or the engineers at NASA have sent a radio command that overrides the decision-making of the program on board. After the dust storm is over, or after it has been freed from the two boulders, the Rover regains its “freedom to decide” and runs again on its own “choices” of where to go.

This is a particular sense of the expression “to be free to decide,” and we often use the expression in this sense. For example: I am not free to decide to go for a walk if I am in prison. This sense of “being free” is the most common, and it is not in conflict with physical determinism. After all, the Rover, once freed from the rocks and from NASA’s radio commands, becomes free to decide for itself where to go, yet the program that runs it is driven by strictly deterministic physics. In this sense, to “be free” refers only to the distinction between determinations of behavior that are external (the boulders, NASA’s radio commands, the prison) and determinations of behavior that are internal (the Rover’s software, my intense desire to take a walk). From this point of view, the apparent conflict between free will and physical determinism dissolves completely, and this is the solution to the problem proposed today by many thinkers, including Daniel Dennett.
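The distinction can be caricatured in a few lines of code. Here is a minimal toy sketch (entirely hypothetical – it has nothing to do with NASA’s actual software): the rover’s “choice” is a fully deterministic function of its internal policy, and it counts as “free” in Dennett’s sense exactly when no boulder and no radio command is determining its behavior from outside.

```python
from typing import Optional

class ToyRover:
    """Toy model of the Rover example: deterministic internal 'choices'
    that count as 'free' only when no external factor overrides them."""

    def __init__(self) -> None:
        self.stuck = False                    # external constraint: boulders
        self.override: Optional[str] = None   # external constraint: radio command

    def internal_choice(self, routes: dict) -> str:
        # Deterministic internal policy: prefer the longest unexplored route.
        return max(routes, key=lambda r: routes[r])

    def decide(self, routes: dict) -> str:
        if self.stuck:
            return "none: blocked by boulders"   # externally determined
        if self.override is not None:
            return self.override                 # externally determined
        return self.internal_choice(routes)      # internally determined: "free"

rover = ToyRover()
routes = {"north": 120, "east": 340, "south": 60}
print(rover.decide(routes))          # 'east' – its own deterministic "choice"
rover.override = "return to base"    # dust storm: mission control takes over
print(rover.decide(routes))          # 'return to base' – no longer "free"
```

Nothing in the sketch is indeterministic; “freedom” here is just a label for which factors – internal policy or external interference – settle the outcome.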

Is this a complete and satisfactory solution to the problem? Maybe not, because some issues remain open. The first is that the analogy between the Rover and a human being does not fully hold. A human being seems to be, and probably is, more “free” than the Rover in the following sense. Both the Rover and the human being can be free in the sense that their behavior is determined by internal rather than external factors, but in the case of the Rover we know that there is a precise piece of software that determines this behavior. This software was built (by engineers) to be as “deterministic” as possible. It can, of course, break or malfunction, but then the Rover’s behavior is considered abnormal. As long as no problems occur and the Rover works well, its behavior is rigorously determined by factors within the Rover itself – but still factors that form a strictly deterministic structure. Can we say the same of a human being?

Link: Beyond Recognition

The incredible story of a face transplant.

… Like the patients who came before her, Tarleton’s journey has been something of an unfathomable one. In the summer of 2007, she was the victim of a brutal attack perpetrated by her ex-husband, Herbert Rodgers. He broke into her home in the dead of night, carrying a baseball bat and a bottle of industrial-strength lye. He used both, and he didn’t stop until Tarleton had sustained what one doctor later described as “the most horrific injury a human being could suffer.”

Tarleton awoke from a three-month induced coma in September of that year. Her body, marred by deep chemical burns, was wrapped in bandages and covered in grafts — some taken from cadavers, the rest harvested from her own legs. Her eyelids were gone, as was her left ear. She couldn’t blink, smile, or breathe through her nose.

During that coma, doctors performed 38 surgeries to repair what deficits they could. And over a period of five years, she would undergo another 17 operations, including a series of synthetic corneal implants that eventually restored partial vision to one eye. Despite these efforts, Tarleton’s progress eventually stalled — given the limitations of conventional procedures, it was impossible that full facial functions, from movement to sensation, would ever return. And her face, there was no question, would never look the way it had before. “I had forgotten what it was like to look more normal,” she says. “I had to accept that I would always look this way, and I had to be okay with that.”

Ironically, it wasn’t until Tarleton had cultivated this acceptance, she says, that the prospect of a face transplant emerged. In December of 2011, she received a striking proposition from Dr. Bohdan Pomahac at Brigham and Women’s Hospital in Boston. He had recently performed the first successful full face transplant in the US, and he wanted to know if Tarleton would consider the procedure.

It wasn’t an easy answer. Before being approved for a face transplant, Tarleton would need to travel two hours from her home in Vermont to Boston, several times over several months, for extensive physical and psychological exams. Doctors needed to be sure that her immune system could cope with the procedure, and assess the blood vessels, nerves, and muscles deep within her skull. A team of psychological experts would evaluate Tarleton’s mental health and the strength of her support network. The procedure itself would be grueling and dangerous, and the rehabilitation process would be extensive. But the payoff — the prospect of eyes that could blink, a mouth able to kiss — would transform her life.

Several months after that call, Tarleton had cleared every hurdle, and her name was added to a waitlist while surgeons searched for a viable donor. To meet the criteria, a donor had to be brain dead with no prospect of recovery — the harvested tissue needs to be flushed with blood and nutrients until the last possible moment — and be an adequate match for Tarleton’s skin tone and texture, as well as her age and sex. In her case, it took 14 months before that donor, Cheryl, was found.

Link: The Obesity Era

As the American people got fatter, so did marmosets, vervet monkeys and mice. The problem may be bigger than any of us. 

Years ago, after a plane trip spent reading Fyodor Dostoyevsky’s Notes from the Underground and Weight Watchers magazine, Woody Allen melded the two experiences into a single essay. ‘I am fat,’ it began. ‘I am disgustingly fat. I am the fattest human I know. I have nothing but excess poundage all over my body. My fingers are fat. My wrists are fat. My eyes are fat. (Can you imagine fat eyes?).’ It was 1968, when most of the world’s people were more or less ‘height-weight proportional’ and millions of the rest were starving. Weight Watchers was a new organisation for an exotic new problem. The notion that being fat could spur Russian-novel anguish was good for a laugh.

That, as we used to say during my Californian adolescence, was then. Now, 1968’s joke has become 2013’s truism. For the first time in human history, overweight people outnumber the underfed, and obesity is widespread in wealthy and poor nations alike. The diseases that obesity makes more likely — diabetes, heart ailments, strokes, kidney failure — are rising fast across the world, and the World Health Organisation predicts that they will be the leading causes of death in all countries, even the poorest, within a couple of years. What’s more, the long-term illnesses of the overweight are far more expensive to treat than the infections and accidents for which modern health systems were designed. Obesity threatens individuals with long twilight years of sickness, and health-care systems with bankruptcy.

And so the authorities tell us, ever more loudly, that we are fat — disgustingly, world-threateningly fat. We must take ourselves in hand and address our weakness. After all, it’s obvious who is to blame for this frightening global blanket of lipids: it’s us, choosing over and over again, billions of times a day, to eat too much and exercise too little. What else could it be? If you’re overweight, it must be because you are not saying no to sweets and fast food and fried potatoes. It’s because you take elevators and cars and golf carts where your forebears nobly strained their thighs and calves. How could you do this to yourself, and to society?

Moral panic about the depravity of the heavy has seeped into many aspects of life, confusing even the erudite. Earlier this month, for example, the American evolutionary psychologist Geoffrey Miller expressed the zeitgeist in this tweet: ‘Dear obese PhD applicants: if you don’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation. #truth.’ Businesses are moving to profit from the supposed weaknesses of their customers. Meanwhile, governments no longer presume that their citizens know what they are doing when they take up a menu or a shopping cart. Yesterday’s fringe notions are becoming today’s rules for living — such as New York City’s recent attempt to ban large-size cups for sugary soft drinks, or Denmark’s short-lived tax surcharge on foods that contain more than 2.3 per cent saturated fat, or Samoa Air’s 2013 ticket policy, in which a passenger’s fare is based on his weight because: ‘You are the master of your air ‘fair’, you decide how much (or how little) your ticket will cost.’

Several governments now sponsor jauntily named pro-exercise programmes such as Let’s Move! (US), Change4Life (UK) and actionsanté (Switzerland). Less chummy approaches are spreading, too. Since 2008, Japanese law has required companies to measure and report the waist circumference of all employees between the ages of 40 and 74 so that, among other things, anyone over the recommended girth can receive an email of admonition and advice.

Hand-in-glove with the authorities that promote self-scrutiny are the businesses that sell it, in the form of weight-loss foods, medicines, services, surgeries and new technologies. A Hong Kong company named Hapilabs offers an electronic fork that tracks how many bites you take per minute in order to prevent hasty eating: shovel food in too fast and it vibrates to alert you. A report by the consulting firm McKinsey & Co predicted in May 2012 that ‘health and wellness’ would soon become a trillion-dollar global industry. ‘Obesity is expensive in terms of health-care costs,’ it said before adding, with a consultantly chuckle, ‘dealing with it is also a big, fat market.’

And so we appear to have a public consensus that excess body weight (defined as a Body Mass Index of 25 or above) and obesity (BMI of 30 or above) are consequences of individual choice. It is undoubtedly true that societies are spending vast amounts of time and money on this idea. It is also true that the masters of the universe in business and government seem attracted to it, perhaps because stern self-discipline is how many of them attained their status. What we don’t know is whether the theory is actually correct.
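The arithmetic behind those thresholds is elementary – weight in kilograms divided by the square of height in metres. A trivial sketch in Python (cut-offs as the essay states them):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def category(b: float) -> str:
    # Cut-offs as given above: 25 and over is overweight, 30 and over is obese.
    if b >= 30.0:
        return "obese"
    if b >= 25.0:
        return "overweight"
    return "below the overweight threshold"

b = bmi(85.0, 1.75)
print(round(b, 1), category(b))  # 27.8 overweight
```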

Of course, that’s not the impression you will get from the admonishments of public-health agencies and wellness businesses. They are quick to assure us that ‘science says’ obesity is caused by individual choices about food and exercise. As the Mayor of New York, Michael Bloomberg, recently put it, defending his proposed ban on large cups for sugary drinks: ‘If you want to lose weight, don’t eat. This is not medicine, it’s thermodynamics. If you take in more than you use, you store it.’ (Got that? It’s not complicated medicine, it’s simple physics, the most sciencey science of all.)

Yet the scientists who study the biochemistry of fat and the epidemiologists who track weight trends are not nearly as unanimous as Bloomberg makes out. In fact, many researchers believe that personal gluttony and laziness cannot be the entire explanation for humanity’s global weight gain. Which means, of course, that they think at least some of the official focus on personal conduct is a waste of time and money. As Richard L Atkinson, Emeritus Professor of Medicine and Nutritional Sciences at the University of Wisconsin and editor of the International Journal of Obesity, put it in 2005: ‘The previous belief of many lay people and health professionals that obesity is simply the result of a lack of willpower and an inability to discipline eating habits is no longer defensible.’