Sunshine Recorder

Link: Being a Better Online Reader

The science of why (at least for now) we absorb and understand less when we read digitally instead of in print.

Soon after Maryanne Wolf published “Proust and the Squid,” a history of the science and the development of the reading brain from antiquity to the twenty-first century, she began to receive letters from readers. Hundreds of them. While the backgrounds of the writers varied, a theme began to emerge: the more reading moved online, the less students seemed to understand. There were the architects who wrote to her about students who relied so heavily on ready digital information that they were unprepared to address basic problems onsite. There were the neurosurgeons who worried about the “cut-and-paste chart mentality” that their students exhibited, missing crucial details because they failed to delve deeply enough into any one case. And there were, of course, the English teachers who lamented that no one wanted to read Henry James anymore. As the letters continued to pour in, Wolf experienced a growing realization: in the seven years it had taken her to research and write her account, reading had changed profoundly—and the ramifications could be felt far beyond English departments and libraries. She called the rude awakening her “Rip van Winkle moment,” and decided that it was important enough to warrant another book. What was going on with these students and professionals? Was the digital format to blame for their superficial approaches, or was something else at work?

Certainly, as we turn to online reading, the physiology of the reading process itself shifts; we don’t read the same way online as we do on paper. Anne Mangen, a professor at the National Centre for Reading Education and Research at the University of Stavanger, in Norway, points out that reading is always an interaction between a person and a technology, be it a computer or an e-reader or even a bound book. Reading “involves factors not usually acknowledged,” she told me. “The ergonomics, the haptics of the device itself. The tangibility of paper versus the intangibility of something digital.” The contrast of pixels, the layout of the words, the concept of scrolling versus turning a page, the physicality of a book versus the ephemerality of a screen, the ability to hyperlink and move from source to source within seconds online—all these variables translate into a different reading experience.

The screen, for one, seems to encourage more skimming behavior: when we scroll, we tend to read more quickly (and less deeply) than when we move sequentially from page to page. Online, the tendency is compounded as a way of coping with an overload of information. There are so many possible sources, so many pages, so many alternatives to any article or book or document that we read more quickly to compensate. When Ziming Liu, a professor at San Jose State University whose research centers on digital reading and the use of e-books, conducted a review of studies that compared print and digital reading experiences, supplementing their conclusions with his own research, he found that several things had changed. On screen, people tended to browse and scan, to look for keywords, and to read in a less linear, more selective fashion. On the page, they tended to concentrate more on following the text. Skimming, Liu concluded, had become the new reading: the more we read online, the more likely we were to move quickly, without stopping to ponder any one thought.

The online world, too, tends to exhaust our resources more quickly than the page. We become tired from the constant need to filter out hyperlinks and possible distractions. And our eyes themselves may grow fatigued from the constantly shifting screens, layouts, colors, and contrasts, an effect that holds for e-readers as well as computers. Mary Dyson, a psychologist at the University of Reading who studies how we perceive and interact with typography and design online and in print, has found that the layout of a text can have a significant effect on the reading experience. We read more quickly when lines are longer, but only to a point. When lines are too long, it becomes taxing to move your eyes from the end of one to the start of the next. We read more efficiently when text is arranged in a single column rather than multiple columns or sections. The font, color, and size of text can all act in tandem to make our reading experience easier or more difficult. And while these variables surely exist on paper just as they do on-screen, the range of formats and layouts online is far greater than it is in print. Online, you can find yourself transitioning to entirely new layouts from moment to moment, and, each time you do so, your eyes and your reading approach need to adjust. Each adjustment, in turn, takes mental and physical energy.

The shift from print to digital reading may lead to more than changes in speed and physical processing. It may come at a cost to understanding, analyzing, and evaluating a text. Much of Mangen’s research focusses on how the format of reading material may affect not just eye movement or reading strategy but broader processing abilities. One of her main hypotheses is that the physical presence of a book—its heft, its feel, the weight and order of its pages—may have more than a purely emotional or nostalgic significance. People prefer physical books, not out of old-fashioned attachment but because the nature of the object itself has deeper repercussions for reading and comprehension. “Anecdotally, I’ve heard some say it’s like they haven’t read anything properly if they’ve read it on a Kindle. The reading has left more of an ephemeral experience,” she told me. Her hunch is that the physicality of a printed page may matter for those reading experiences when you need a firmer grounding in the material. The text you read on a Kindle or computer simply doesn’t have the same tangibility.

In new research that she and her colleagues will present for the first time at the upcoming conference of the International Society for the Empirical Study of Literature and Media, in Torino, Italy, Mangen is finding that that may indeed be the case. She, along with her frequent collaborator Jean-Luc Velay, Pascal Robinet, and Gerard Olivier, had students read a short story—Elizabeth George’s “Lusting for Jenny, Inverted” (their version, a French translation, was called “Jenny, Mon Amour”)—in one of two formats: a pocket paperback or a Kindle e-book. When Mangen tested the readers’ comprehension, she found that the medium mattered a lot. When readers were asked to place a series of events from the story in chronological order—a simple plot-reconstruction task, not requiring any deep analysis or critical thinking—those who had read the story in print fared significantly better, making fewer mistakes and recreating an over-all more accurate version of the story. The words looked identical—Kindle e-ink is designed to mimic the printed page—but their physical materiality mattered for basic comprehension.

Wolf’s concerns go far beyond simple comprehension. She fears that as we turn to digital formats, we may see a negative effect on the process that she calls deep reading. Deep reading isn’t how we approach looking for news or information, or trying to get the gist of something. It’s the “sophisticated comprehension processes,” as Wolf calls it, that those young architects and doctors were missing. “Reading is a bridge to thought,” she says. “And it’s that process that I think is the real endangered aspect of reading. In the young, what happens to the formation of the complete reading circuitry? Will it be short-circuited and have less time to develop the deep-reading processes? And in already developed readers like you and me, will those processes atrophy?”

Of course, as Wolf is quick to point out, there’s still no longitudinal data about digital reading. As she put it, “We’re in a place of apprehension rather than comprehension.” And it’s quite possible that the apprehension is misplaced: perhaps digital reading isn’t worse so much as different than print reading. Julie Coiro, who studies digital reading comprehension in elementary- and middle-school students at the University of Rhode Island, has found that good reading in print doesn’t necessarily translate to good reading on-screen. Not only do the students differ in their abilities and preferences; they also need different sorts of training to excel at each medium. The online world, she argues, may require students to exercise much greater self-control than a physical book. “In reading on paper, you may have to monitor yourself once, to actually pick up the book,” she says. “On the Internet, that monitoring and self-regulation cycle happens again and again. And if you’re the kind of person who’s naturally good at self-monitoring, you don’t have a problem. But if you’re a reader who hasn’t been trained to pay attention, each time you click a link, you’re constructing your own text. And when you’re asked comprehension questions, it’s like you picked up the wrong book.”

Maybe the decline of deep reading isn’t due to reading skill atrophy but to the need to develop a very different sort of skill, that of teaching yourself to focus your attention. (Interestingly, Coiro found that gamers were often better online readers: they were more comfortable in the medium and better able to stay on task.) In a study comparing digital and print comprehension of a short nonfiction text, Rakefet Ackerman and Morris Goldsmith found that students fared equally well on a post-reading multiple-choice test when they were given a fixed amount of time to read, but that their digital performance plummeted when they had to regulate their time themselves. The digital deficit, they suggest, isn’t a result of the medium as such but rather of a failure of self-knowledge and self-control: we don’t realize that digital comprehension may take just as much time as reading a book.

Last year, Patricia Greenfield, a psychologist at the University of California, Los Angeles, and her colleagues found that multitasking while reading on a computer or a tablet slowed readers down, but their comprehension remained unaffected. What did suffer was the quality of a subsequent report that they wrote to synthesize their reading: if they read the original texts on paper or a computer with no Internet access, their end product was superior to that of their Internet-enabled counterparts. If the online readers took notes on paper, however, the negative effects of Internet access were significantly reduced. It wasn’t the screen that disrupted the fuller synthesis of deep reading; it was the allure of multitasking on the Internet and a failure to properly mitigate its impact.

Indeed, some data suggest that, in certain environments and on certain types of tasks, we can read equally well in any format. As far back as 1988, the University College of Swansea psychologists David Oborne and Doreen Holton compared text comprehension for reading on different screens and paper formats (dark characters on a light background, or light characters on a dark background), and found no differences in speed and comprehension between the four conditions. Their subjects, of course, didn’t have the Internet to distract them. In 2011, Annette Taylor, a psychologist at the University of San Diego, similarly found that students performed equally well on a twenty-question multiple-choice comprehension test whether they had read a chapter on-screen or on paper. Given a second test one week later, the two groups’ performances were still indistinguishable. And it’s not just reading. Last year, Sigal Eden and Yoram Eshet-Alkalai found no difference in accuracy between students who edited a six-hundred-word paper on the screen and those who worked on paper. Those who edited on-screen did so faster, but their performance didn’t suffer.

We need to be aware of the effects of deeper digital immersion, Wolf says, but we should be equally cautious when we draw causal arrows or place blame without adequate longitudinal research. “I’m both the Cassandra and the advocate of digital reading,” she says. Maybe her letter writers’ students weren’t victims of digitization so much as victims of insufficient training—and insufficient care—in the tools of managing a shifting landscape of reading and thinking. Deep-reading skills, Wolf points out, may not be emphasized in schools that conform to the Common Core, for instance, and need to meet certain test-taking reading targets that emphasize gist at the expense of depth. “Physical, tangible books give children a lot of time,” she says. “And the digital milieu speeds everything up. So we need to do things much more slowly and gradually than we are.” Not only should digital reading be introduced more slowly into the curriculum; it also should be integrated with the more immersive reading skills that deeper comprehension requires.

Wolf is optimistic that we can learn to navigate online reading just as deeply as we once did print—if we go about it with the necessary thoughtfulness. In a new study, the introduction of an interactive annotation component helped improve comprehension and reading strategy use in a group of fifth graders. It turns out that they could read deeply. They just had to be taught how. Wolf is now working on digital apps to train students in the tools of deep reading, to use the digital world to teach the sorts of skills we tend to associate with quiet contemplation and physical volumes. “The same plasticity that allows us to form a reading circuit to begin with, and short-circuit the development of deep reading if we allow it, also allows us to learn how to duplicate deep reading in a new environment,” she says. “We cannot go backwards. As children move more toward an immersion in digital media, we have to figure out ways to read deeply there.”

Wolf has decided that, despite all of her training in deep reading, she, too, needs some outside help. To finish her book, she has ensconced herself in a small village in France with shaky mobile reception and shakier Internet. Faced with the endless distraction of the digital world, she has chosen to tune out just a bit of it. She’s not going backward; she’s merely adapting.

Link: "Useless Knowledge" by Bertrand Russell

Francis Bacon, a man who rose to eminence by betraying his friends, asserted, no doubt as one of the ripe lessons of experience, that ‘knowledge is power’. But this is not true of all knowledge. Sir Thomas Browne wished to know what song the sirens sang, but if he had ascertained this it would not have enabled him to rise from being a magistrate to being High Sheriff of his county. The sort of knowledge that Bacon had in mind was that which we call scientific. In emphasising the importance of science, he was belatedly carrying on the tradition of the Arabs and the early Middle Ages, according to which knowledge consisted mainly of astrology, alchemy, and pharmacology, all of which were branches of science. A learned man was one who, having mastered these studies, had acquired magical powers. In the early eleventh century, Pope Silvester II, for no reason except that he read books, was universally believed to be a magician in league with the devil. Prospero, who in Shakespeare’s time was a mere phantasy, represented what had been for centuries the generally received conception of a learned man, so far at least as his powers of sorcery were concerned. Bacon believed – rightly, as we now know – that science could provide a more powerful magician’s wand than any that had been dreamed of by the necromancers of former ages.

The renaissance, which was at its height in England at the time of Bacon, involved a revolt against the utilitarian conception of knowledge. The Greeks had acquired a familiarity with Homer, as we do with music hall songs, because they enjoyed him, and without feeling that they were engaged in the pursuit of learning. But the men of the sixteenth century could not begin to understand him without first absorbing a very considerable amount of linguistic erudition. They admired the Greeks, and did not wish to be shut out from their pleasures; they therefore copied them, both in reading the classics and in other less avowable ways. Learning, in the renaissance, was part of the joie de vivre, just as much as drinking or love-making. And this was true not only of literature, but also of sterner studies. Everyone knows the story of Hobbes’s first contact with Euclid: opening the book, by chance, at the theorem of Pythagoras, he exclaimed, ‘By God, this is impossible’, and proceeded to read the proofs backwards until, reaching the axioms, he became convinced. No one can doubt that this was for him a voluptuous moment, unsullied by the thought of the utility of geometry in measuring fields.

It is true that the renaissance found a practical use for the ancient languages in connection with theology. One of the earliest results of the new feeling for classical Latin was the discrediting of the forged decretals and the donation of Constantine. The inaccuracies which were discovered in the Vulgate and the Septuagint made Greek and Hebrew a necessary part of the controversial equipment of Protestant divines. The republican maxims of Greece and Rome were invoked to justify the resistance of Puritans to the Stuarts and of Jesuits to monarchs who had thrown off allegiance to the Pope. But all this was an effect, rather than a cause, of the revival of classical learning which had been in full swing in Italy for nearly a century before Luther. The main motive for the renaissance was mental delight, the restoration of a certain richness and freedom in art and speculation which had been lost while ignorance and superstition kept the mind’s eye in blinkers.

The Greeks, it was found, had devoted a part of their attention to matters not purely literary or artistic, such as philosophy, geometry, and astronomy. These studies, therefore, were respectable, but other sciences were more open to question. Medicine, it was true, was dignified by the names of Hippocrates and Galen; but in the intervening period it had become almost confined to Arabs and Jews, and inextricably intertwined with magic. Hence the dubious reputation of such men as Paracelsus. Chemistry was in even worse odour, and hardly became respectable until the eighteenth century.

In this way it was brought about that knowledge of Greek and Latin, with a smattering of geometry and perhaps astronomy, came to be considered the intellectual equipment of a gentleman. The Greeks disdained the practical applications of geometry, and it was only in their decadence that they found a use for astronomy in the guise of astrology. The sixteenth and seventeenth centuries, in the main, studied mathematics with Hellenic disinterestedness, and tended to ignore the sciences which had been degraded by their connection with sorcery. A gradual change towards a wider and more practical conception of knowledge, which was going on throughout the eighteenth century, was suddenly accelerated at the end of that period by the French Revolution and the growth of machinery, of which the former gave a blow to gentlemanly culture while the latter offered new and astonishing scope for the exercise of ungentlemanly skill. Throughout the last hundred and fifty years, men have questioned more and more vigorously the value of ‘useless’ knowledge, and have come increasingly to believe that the only knowledge worth having is that which is applicable to some part of the economic life of the community.

In countries such as France and England, which have a traditional educational system, the utilitarian view of knowledge has only partially prevailed. There are still, for example, professors of Chinese in the universities who read the Chinese classics but are unacquainted with the works of Sun Yat-sen, which created modern China. There are still men who know ancient history insofar as it was related by authors whose style was pure, that is to say up to Alexander in Greece and Nero in Rome, but refuse to know the much more important later history because of the literary inferiority of the historians who related it. Even in France and England, however, the old tradition is dying, and in more up to date countries, such as Russia and the United States, it is utterly extinct. In America, for example, educational commissions point out that fifteen hundred words are all that most people employ in business correspondence, and therefore suggest that all others should be avoided in the school curriculum. Basic English, a British invention, goes still further, and reduces the necessary vocabulary to eight hundred words. The conception of speech as something capable of aesthetic value is dying out, and it is coming to be thought that the sole purpose of words is to convey practical information. In Russia the pursuit of practical aims is even more whole-hearted than in America: all that is taught in educational institutions is intended to serve some obvious purpose in education or government. The only escape is afforded by theology: the sacred scriptures must be studied by some in the original German, and a few professors must learn philosophy in order to defend dialectical materialism against the criticism of bourgeois metaphysicians. But as orthodoxy becomes more firmly established, even this tiny loophole will be closed.

Knowledge, everywhere, is coming to be regarded not as a good in itself, or as a means of creating a broad and humane outlook on life in general, but as merely an ingredient in technical skill. This is part of the greater integration of society which has been brought about by scientific technique and military necessity. There is more economic and political interdependence than there was in former times, and therefore there is more social pressure to compel a man to live in a way that his neighbours think useful. Educational establishments, except those for the very rich, or (in England) such as have become invulnerable through antiquity, are not allowed to spend their money as they like, but must satisfy the State that they are serving a useful purpose by imparting skill and instilling loyalty. This is part and parcel of the same movement which has led to compulsory military service, boy scouts, the organisation of political parties, and the dissemination of political passion by the Press. We are all more aware of our fellow-citizens than we used to be, more anxious, if we are virtuous, to do them good, and in any case to make them do us good. We do not like to think of anyone lazily enjoying life, however refined may be the quality of his enjoyment. We feel that everybody ought to be doing something to help on the great cause (whatever it may be), the more so as so many bad men are working against it and ought to be stopped. We have not leisure of mind, therefore, to acquire any knowledge except such as will help us in the fight for whatever it may happen to be that we think important.

There is much to be said for the narrowly utilitarian view of education. There is not time to learn everything before beginning to make a living, and undoubtedly ‘useful’ knowledge is very useful. It has made the modern world. Without it, we should not have machines or motor-cars or railways or aeroplanes; it should be added that we should not have modern advertising or modern propaganda. Modern knowledge has brought about an immense improvement in average health, and at the same time has discovered how to exterminate large cities by poison gas. Whatever is distinctive of our world as compared with former times, has its source in ‘useful’ knowledge. No community as yet has enough of it, and undoubtedly education must continue to promote it.

It must also be admitted that a great deal of the traditional cultural education was foolish. Boys spent many years acquiring Latin and Greek grammar, without being, at the end, either capable or desirous (except in a small percentage of cases) of reading a Greek or Latin author. Modern languages and history are preferable, from every point of view, to Latin and Greek. They are not only more useful, but they give much more culture in much less time. For an Italian of the fifteenth century, since practically everything worth reading, if not in his own language, was in Greek or Latin, these languages were the indispensable keys to culture. But since that time great literatures have grown up in various modern languages, and the development of civilisation has been so rapid that knowledge of antiquity has become much less useful in understanding our problems than knowledge of modern nations and their comparatively recent history. The traditional schoolmaster’s point of view, which was admirable at the time of the revival of learning, became gradually unduly narrow, since it ignored what the world has done since the fifteenth century. And not only history and modern languages, but science also, when properly taught, contributes to culture. It is therefore possible to maintain that education should have other aims than direct utility, without defending the traditional curriculum. Utility and culture, when both are conceived broadly, are found to be less incompatible than they appear to the fanatical advocates of either.

Apart, however, from the cases in which culture and direct utility can be combined, there is indirect utility, of various different kinds, in the possession of knowledge which does not contribute to technical efficiency. I think some of the worst features of the modern world could be improved by a greater encouragement of such knowledge and a less ruthless pursuit of mere professional competence.

When conscious activity is wholly concentrated on some one definite purpose, the ultimate result, for most people, is lack of balance accompanied by some form of nervous disorder. The men who directed German policy during the war made mistakes, for example, as regards the submarine campaign which brought America on to the side of the Allies, which any person coming fresh to the subject could have seen to be unwise, but which they could not judge sanely owing to mental concentration and lack of holidays. The same sort of thing may be seen wherever bodies of men attempt tasks which put a prolonged strain upon spontaneous impulses. Japanese imperialists, Russian Communists, and German Nazis all have a kind of tense fanaticism which comes of living too exclusively in the mental world of certain tasks to be accomplished. When the tasks are as important and as feasible as the fanatics suppose, the result may be magnificent; but in most cases narrowness of outlook has caused oblivion of some powerful counteracting force, or has made all such forces seem the work of the devil, to be met by punishment and terror. Men as well as children have need of play, that is to say, of periods of activity having no purpose beyond present enjoyment. But if play is to serve its purpose, it must be possible to find pleasure and interest in matters not connected with work.

The amusements of modern urban populations tend more and more to be passive and collective, and to consist of inactive observation of the skilled activities of others. Undoubtedly such amusements are much better than none, but they are not as good as would be those of a population which had, through education, a wider range of intelligent interests not connected with work. Better economic organization, allowing mankind to benefit by the productivity of machines, should lead to a very great increase of leisure, and much leisure is apt to be tedious except to those who have considerable intelligent activities and interests. If a leisured population is to be happy, it must be an educated population, and must be educated with a view to mental enjoyment as well as to the direct usefulness of technical knowledge.

The cultural element in the acquisition of knowledge, when it is successfully assimilated, forms the character of a man’s thoughts and desires, making them concern themselves, in part at least, with large impersonal objects, not only with matters of immediate importance to himself. It has been too readily assumed that, when a man has acquired certain capacities by means of knowledge, he will use them in ways that are socially beneficial. The narrowly utilitarian conception of education ignores the necessity of training a man’s purposes as well as his skill. There is in untrained human nature a very considerable element of cruelty, which shows itself in many ways, great and small. Boys at school tend to be unkind to a new boy, or to one whose clothes are not quite conventional. Many women (and not a few men) inflict as much pain as they can by means of malicious gossip. The Spaniards enjoy bull-fights; the British enjoy hunting and shooting. The same cruel impulses take more serious forms in the hunting of Jews in Germany and kulaks in Russia. All imperialism affords scope for them, and in war they become sanctified as the highest form of public duty.

While it must be admitted that highly educated people are sometimes cruel, I think there can be no doubt that they are less often so than people whose minds have lain fallow. The bully in a school is seldom a boy whose proficiency in learning is up to the average. When a lynching takes place, the ringleaders are almost invariably very ignorant men. This is not because mental cultivation produces positive humanitarian feelings, though it may do so; it is rather because it gives other interests than the ill-treatment of neighbours, and other sources of self-respect than the assertion of domination. The two things most universally desired are power and admiration. Ignorant men can, as a rule, only achieve either by brutal means, involving the acquisition of physical mastery. Culture gives a man less harmful forms of power and more deserving ways of making himself admired. Galileo did more than any monarch has done to change the world, and his power immeasurably exceeded that of his persecutors. He had therefore no need to aim at becoming a persecutor in his turn. Perhaps the most important advantage of ‘useless’ knowledge is that it promotes a contemplative habit of mind. There is in the world too much readiness, not only for action without adequate previous reflection, but also for some sort of action on occasions on which wisdom would counsel inaction. People show their bias on this matter in various curious ways. Mephistopheles tells the young student that theory is grey but the tree of life is green, and everyone quotes this as if it were Goethe’s opinion, instead of what he supposes the devil would be likely to say to an undergraduate. Hamlet is held up as an awful warning against thought without action, but no one holds up Othello as a warning against action without thought. Professors such as Bergson, from a kind of snobbery towards the practical man, decry philosophy, and say that life at its best should resemble a cavalry charge. For my part, I think action is best when it emerges from a profound apprehension of the universe and human destiny, not from some wildly passionate impulse of romantic but disproportioned self-assertion. A habit of finding pleasure in thought rather than in action is a safeguard against unwisdom and excessive love of power, a means of preserving serenity in misfortune and peace of mind among worries. A life confined to what is personal is likely, sooner or later, to become unbearably painful; it is only by windows into a larger and less fretful cosmos that the more tragic parts of life become endurable.

A contemplative habit of mind has advantages ranging from the most trivial to the most profound. To begin with minor vexations, such as fleas, missing trains, or cantankerous business associates. Such troubles seem hardly worthy to be met by reflections on the excellence of heroism or the transitoriness of all human ills, and yet the irritation to which they give rise destroys many people’s good temper and enjoyment of life. On such occasions, there is much consolation to be found in out-of-the-way bits of knowledge which have some real or fancied connection with the trouble of the moment; or even if they have none, they serve to obliterate the present from one’s thoughts. When assailed by people who are white with fury, it is pleasant to remember the chapter in Descartes’ Treatise on the Passions entitled ‘Why those who grow pale with rage are more to be feared than those who grow red.’ When one feels impatient over the difficulty of securing international co-operation, one’s impatience is diminished if one happens to think of the sainted King Louis IX, before embarking on his crusade, allying himself with the Old Man of the Mountain, who appears in the Arabian Nights as the dark source of half the wickedness in the world. When the rapacity of capitalists grows oppressive, one may be suddenly consoled by the recollection that Brutus, that exemplar of republican virtue, lent money to a city at 40 per cent, and hired a private army to besiege it when it failed to pay the interest.

Curious learning not only makes unpleasant things less unpleasant, but also makes pleasant things more pleasant. I have enjoyed peaches and apricots more since I have known that they were first cultivated in China in the early days of the Han dynasty; that Chinese hostages held by the great King Kaniska introduced them to India, whence they spread to Persia, reaching the Roman Empire in the first century of our era; that the word ‘apricot’ is derived from the same Latin source as the word ‘precocious’, because the apricot ripens early; and that the A at the beginning was added by mistake, owing to a false etymology. All this makes the fruit taste much sweeter.

About a hundred years ago, a number of well-meaning philanthropists started societies ‘for the diffusion of useful knowledge’, with the result that people have ceased to appreciate the delicious savour of ‘useless’ knowledge. Opening Burton’s Anatomy of Melancholy at haphazard on a day when I was threatened by that mood, I learnt that there is a ‘melancholy matter’, but that, while some think it may be engendered of all four humours, ‘Galen holds that it may be engendered of three alone, excluding phlegm or pituita, whose true assertion Valerius and Menardus stiffly maintain, and so doth Fuscius, Montaltus, Montanus. How (say they) can white become black?’ In spite of this unanswerable argument, Hercules de Saxonia and Cardan, Guianerius and Laurentius, are (so Burton tells us) of the opposite opinion. Soothed by these historical reflections, my melancholy, whether due to three humours or to four, was dissipated. As a cure for too much zeal, I can imagine few measures more effective than a course of such ancient controversies.

But while the trivial pleasures of culture have their place as a relief from the trivial worries of practical life, the more important merits of contemplation are in relation to the greater evils of life, death and pain and cruelty, and the blind march of nations into unnecessary disaster. For those to whom dogmatic religion can no longer bring comfort, there is need of some substitute, if life is not to become dusty and harsh and filled with trivial self-assertion. The world at present is full of angry self-centred groups, each incapable of viewing human life as a whole, each willing to destroy civilization rather than yield an inch. To this narrowness no amount of technical instruction will provide an antidote. The antidote, in so far as it is a matter of individual psychology, is to be found in history, biology, astronomy, and all those studies which, without destroying self-respect, enable the individual to see himself in his proper perspective. What is needed is not this or that specific piece of information, but such knowledge as inspires a conception of the ends of human life as a whole: art and history, acquaintance with the lives of heroic individuals, and some understanding of the strangely accidental and ephemeral position of man in the cosmos — all this touched with an emotion of pride in what is distinctively human, the power to see and to know, to feel magnanimously and to think with understanding. It is from large perceptions combined with impersonal emotion that wisdom most readily springs.

Life, at all times full of pain, is more painful in our time than in the two centuries that preceded it. The attempt to escape from pain drives men to triviality, to self-deception, to the invention of vast collective myths. But these momentary alleviations do but increase the sources of suffering in the long run. Both private and public misfortune can only be mastered by a process in which will and intelligence interact: the part of will is to refuse to shirk the evil or accept an unreal solution, while the part of intelligence is to understand it, to find a cure if it is curable, and, if not, to make it bearable by seeing it in its relations, accepting it as unavoidable, and remembering what lies outside it in other regions, other ages, and the abysses of interstellar space.

Link: Imagining the Post-Antibiotics Future

After 85 years, antibiotics are growing impotent. So what will medicine, agriculture and everyday life look like if we lose these drugs entirely?

Predictions that we might sacrifice the antibiotic miracle have been around almost as long as the drugs themselves. Penicillin was first discovered in 1928 and battlefield casualties got the first non-experimental doses in 1943, quickly saving soldiers who had been close to death. But just two years later, the drug’s discoverer Sir Alexander Fleming warned that its benefit might not last. Accepting the 1945 Nobel Prize in Medicine, he said:

“It is not difficult to make microbes resistant to penicillin in the laboratory by exposing them to concentrations not sufficient to kill them… There is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant.”

As a biologist, Fleming knew that evolution was inevitable: sooner or later, bacteria would develop defenses against the compounds the nascent pharmaceutical industry was aiming at them. But what worried him was the possibility that misuse would speed the process up. Every inappropriate prescription and insufficient dose given in medicine would kill weak bacteria but let the strong survive. (As would the micro-dose “growth promoters” given in agriculture, which were invented a few years after Fleming spoke.) Bacteria can produce another generation in as little as twenty minutes; with tens of thousands of generations a year working out survival strategies, the organisms would soon overwhelm the potent new drugs.
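
As a back-of-the-envelope check on that figure (the arithmetic below is added for illustration, not taken from the article): at one generation every twenty minutes, sustained around the clock, a population runs through

```latex
% generations per year at one generation every 20 minutes
\frac{60\ \text{min/hr}}{20\ \text{min/generation}}
\times 24\ \text{hr/day} \times 365\ \text{days/yr}
\approx 26{,}000\ \text{generations/yr},
```

which is indeed in the tens of thousands, on the optimistic assumption that growth never slows below that laboratory best case.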

Fleming’s prediction was correct. Penicillin-resistant staph emerged in 1940, while the drug was still being given to only a few patients. Tetracycline was introduced in 1950, and tetracycline-resistant Shigella emerged in 1959; erythromycin came on the market in 1953, and erythromycin-resistant strep appeared in 1968. As antibiotics became more affordable and their use increased, bacteria developed defenses more quickly. Methicillin arrived in 1960 and methicillin resistance in 1962; levofloxacin in 1996 and the first resistant cases the same year; linezolid in 2000 and resistance to it in 2001; daptomycin in 2003 and the first signs of resistance in 2004. With antibiotics losing usefulness so quickly — and thus not making back the estimated $1 billion per drug it costs to create them — the pharmaceutical industry lost enthusiasm for making more. In 2004, there were only five new antibiotics in development, compared to more than 500 chronic-disease drugs for which resistance is not an issue — and which, unlike antibiotics, are taken for years, not days. Since then, resistant bugs have grown more numerous and, by sharing DNA with each other, have become even tougher to treat with the few drugs that remain. In 2009, and again this year, researchers in Europe and the United States sounded the alarm over an ominous form of resistance known as CRE, for which only one antibiotic still works.

Health authorities have struggled to convince the public that this is a crisis. In September, Dr. Thomas Frieden, the director of the U.S. Centers for Disease Control and Prevention, issued a blunt warning: “If we’re not careful, we will soon be in a post-antibiotic era. For some patients and some microbes, we are already there.” The chief medical officer of the United Kingdom, Dame Sally Davies — who calls antibiotic resistance as serious a threat as terrorism — recently published a book in which she imagines what might come next. She sketches a world where infection is so dangerous that anyone with even minor symptoms would be locked in confinement until they recover or die. It is a dark vision, meant to disturb. But it may actually underplay what the loss of antibiotics would mean.

In 2009, three New York physicians cared for a sixty-seven-year-old man who had major surgery and then picked up a hospital infection that was “pan-resistant” — that is, responsive to no antibiotics at all. He died fourteen days later. When his doctors related his case in a medical journal months afterward, they still sounded stunned. “It is a rarity for a physician in the developed world to have a patient die of an overwhelming infection for which there are no therapeutic options,” they said, calling the man’s death “the first instance in our clinical experience in which we had no effective treatment to offer.”

They are not the only doctors to endure that lack of options. Dr. Brad Spellberg of UCLA’s David Geffen School of Medicine became so enraged by the ineffectiveness of antibiotics that he wrote a book about it.

“Sitting with a family, trying to explain that you have nothing left to treat their dying relative — that leaves an indelible mark on you,” he says. “This is not cancer; it’s infectious disease, treatable for decades.”

As grim as they are, in-hospital deaths from resistant infections are easy to rationalize: perhaps these people were just old, already ill, different somehow from the rest of us. But deaths like this are changing medicine. To protect their own facilities, hospitals already flag incoming patients who might carry untreatable bacteria. Most of those patients come from nursing homes and “long-term acute care” (an intensive-care alternative where someone who needs a ventilator for weeks or months might stay). So many patients in those institutions carry highly resistant bacteria that hospital workers isolate them when they arrive, and fret about the danger they pose to others. As infections become yet more dangerous, the healthcare industry will be even less willing to take such risks.

Those calculations of risk extend far beyond admitting possibly contaminated patients from a nursing home. Without the protection offered by antibiotics, entire categories of medical practice would be rethought.

Many treatments require suppressing the immune system, to help destroy cancer or to keep a transplanted organ viable. That suppression makes people unusually vulnerable to infection. Antibiotics reduce the threat; without them, chemotherapy or radiation treatment would be as dangerous as the cancers they seek to cure. Dr. Michael Bell, who leads an infection-prevention division at the CDC, told me: “We deal with that risk now by loading people up with broad-spectrum antibiotics, sometimes for weeks at a stretch. But if you can’t do that, the decision to treat somebody takes on a different ethical tone. Similarly with transplantation. And severe burns are hugely susceptible to infection. Burn units would have a very, very difficult task keeping people alive.”

Doctors routinely perform procedures that carry an extraordinary infection risk unless antibiotics are used. Chief among them: any treatment that requires the construction of portals into the bloodstream and gives bacteria a direct route to the heart or brain. That rules out intensive-care medicine, with its ventilators, catheters, and ports—but also something as prosaic as kidney dialysis, which mechanically filters the blood.

Next to go: surgery, especially on sites that harbor large populations of bacteria such as the intestines and the urinary tract. Those bacteria are benign in their regular homes in the body, but introduce them into the blood, as surgery can, and infections are practically guaranteed. And then implantable devices, because bacteria can form sticky films of infection on the devices’ surfaces that can be broken down only by antibiotics.

Dr. Donald Fry, a member of the American College of Surgeons who finished medical school in 1972, says: “In my professional life, it has been breathtaking to watch what can be done with synthetic prosthetic materials: joints, vessels, heart valves. But in these operations, infection is a catastrophe.” British health economists with similar concerns recently calculated the costs of antibiotic resistance. To examine how it would affect surgery, they picked hip replacements, a common procedure in once-athletic Baby Boomers. They estimated that without antibiotics, one out of every six recipients of new hip joints would die.

Antibiotics are administered prophylactically before operations as major as open-heart surgery and as routine as Caesarean sections and prostate biopsies. Without the drugs, the risks posed by those operations, and the likelihood that physicians would perform them, will change.

“In our current malpractice environment, is a doctor going to want to do a bone marrow transplant, knowing there’s a very high rate of infection that you won’t be able to treat?” asks Dr. Louis Rice, chair of the department of medicine at Brown University’s medical school. “Plus, right now healthcare is a reasonably free-market, fee-for-service system; people are interested in doing procedures because they make money. But five or ten years from now, we’ll probably be in an environment where we get a flat sum of money to take care of patients. And we may decide that some of these procedures aren’t worth the risk.”

Link: About the Intelligence of Bees

The other day, Ken and I had coffee with a couple of philosophers who spend their time thinking about philosophy of the mind.  What is consciousness?  Do non-human organisms have consciousness?  What is intelligence?  How do we make decisions?  What about ants?  These are hard questions to answer, perhaps even unanswerable, but they are fascinating to think about.

Our meeting was occasioned by the recent paper in PNAS about the mental map of bees (“Way-finding in displaced clock-shifted bees proves bees use a cognitive map”, Cheeseman et al.).  Cognitive maps are mental representations of physical places, which mammals use to navigate their surroundings.  Insects clearly have ways to do the same; whether or not they do it with cognitive maps is the question.

The “computational theory of mind” is the predominant theory of how mammals think — the brain is posited to be an information processing system, and thinking is the brain computing, or processing information (though, whether this is ‘truth’ or primarily a reflection of the computer age isn’t clear, at least to us).  In vertebrates, at least some of this takes place in the region called the hippocampus; in invertebrates, it happens in some neurological homologs.  But, what do insects do?

Previous work has shown that captured insects, once released, often fly off in the compass direction in which they were headed when they were caught, even if they were moved during capture and the direction is no longer appropriate.  But, they then can correct themselves, and then have no problem locating their hives. That indicates that they’ve got some kind of an “integrated metric map” of their environment.

Some theories have held that they mark the location of the sun relative to the direction they take and then later calculate ‘home’ based on a computation of time and the motion of the sun.  This by itself would be a lot of sophisticated computing, or thinking….and why not ‘intelligence’?

Cheeseman et al. asked whether instead what they are relying on is a series of snapshots of their environment, which enables them to recognize different landmarks, one after the other as they come into view, rather than a completely integrated mental map.  They experimented with anesthetizing bees and shifting their sense of time, so that they couldn’t rely on the sun to get them home.  It took some flying for the bees to recognize that they were off-course, but they always were able to re-orient themselves and get back to the hive.

Cheeseman et al. conclude that because bees don’t rely entirely on a sun-compass for their sense of direction, they must have the apian equivalent of a cognitive map.  That is, they collect relevant spatial information from the environment with which they navigate, and use it to make decisions about how to get where they are going. That is, they take and file away snapshots; remember that insect eyes are complex, including two compound eyes and, in most species, three small, simpler ocelli on the forehead, so each snapshot is already a synthesis of a many-camera pixellation and differently sensitive integrations of the light-world. Then they use a sequence of these frames later, from a different position from that at which the photos were taken (so not all landmarks might even be visible), and at a different time, which can affect shadows, colors, and so on.  Then, tiling these frames somehow in reverse order and mirror-flipped left to right, adjusting their angles of perspective and so on, and perhaps also drawing on sound, wind direction, and even the olfactory trail (also in reverse relative position), like Hansel and Gretel’s bread crumbs, they head home for dinner.

To us, this is a remarkable feat for their small brains!  For some of us, even with a human brain, finding one’s way home without a GPS is no easy task, and deserves a nice cold drink when done successfully.  However, the philosophers we were chatting about this with did not think what Cheeseman et al. believe they discovered about bees should be called a cognitive map because, and we think we’ve got this right, they haven’t got a mental image of the entire lay of the land.  Instead it’s as though they are connecting the dots; they recognize landmarks and go from one mental snapshot with a familiar landmark to the next. So what kind of ‘intelligence’ this is becomes a definitional question perhaps.  Call it mechanical or whatever you want, we would call this ‘intelligent’ behavior.
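
To make that “connecting the dots” picture concrete, here is a minimal toy sketch of snapshot-based homing: store views along the outbound trip, then work back through them in reverse, at each step moving in whichever direction makes the current view best match the next remembered one. It is only an illustration of the general snapshot idea, not the model tested by Cheeseman et al.; the landscape and the view_from and view_mismatch functions are invented for the example.

```python
# Toy "snapshot" homing: navigate by matching stored views one after another,
# rather than by consulting a single integrated map. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# A flat 2-D world with scattered landmarks; the hive sits at the origin.
landmarks = rng.uniform(-100, 100, size=(12, 2))
hive = np.array([0.0, 0.0])

def view_from(position):
    """A crude 'retinal snapshot': bearing and apparent size of each landmark."""
    offsets = landmarks - position
    distances = np.linalg.norm(offsets, axis=1)
    bearings = np.arctan2(offsets[:, 1], offsets[:, 0])
    apparent_sizes = 1.0 / (distances + 1.0)
    return np.concatenate([np.sin(bearings), np.cos(bearings), apparent_sizes])

def view_mismatch(a, b):
    """How different two snapshots look."""
    return float(np.linalg.norm(a - b))

# Outbound trip (hive -> flower patch): file away snapshots along the way.
outbound = [np.array([x, 0.6 * x]) for x in np.linspace(0.0, 80.0, 9)]
stored_snapshots = [view_from(p) for p in outbound]

# Homeward trip: replay the stored snapshots in reverse order; at every step
# try a ring of candidate moves and keep the one whose view best matches the
# next remembered snapshot.
position = outbound[-1].copy()
step = 3.0
for snapshot in reversed(stored_snapshots):
    for _ in range(80):                      # give up on a waypoint eventually
        if view_mismatch(view_from(position), snapshot) < 0.05:
            break                            # close enough; next remembered view
        angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
        candidates = [position + step * np.array([np.cos(a), np.sin(a)])
                      for a in angles]
        position = min(candidates,
                       key=lambda p: view_mismatch(view_from(p), snapshot))

print("distance from hive at journey's end:",
      round(float(np.linalg.norm(position - hive)), 1))
```

Even in this crude form, the simulated bee ends up near the hive without ever holding the whole map in its head; all it needs is the ordered stack of remembered views and a way to tell which of its next possible moves looks most like the one it filed away.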

We don’t know enough about philosophy (or the biology) of the mind to know how significantly these two models differ, or whether ‘consciousness’ is subtly underlying how these judgments about cognition are made, but in any case, that’s not what interested us about the bee story.  What is the experience of being a bee?  Whichever kind of imaging and processing they do to navigate, how do they turn the locational information into action?  It’s one thing to know that your hive is east (or the apian equivalent) of the pine tree, but getting there requires “knowing” that after you’ve collected the nectar, you then want to bring it home, and that means you have to find your way there.  Your mental map, whatever it consists of, must be made operational.  How does that happen, in a brain the size of a bee’s? Or an ant’s?

Or bird brains?  Crows, corvids, are considered among the smartest of birds.  Their problem solving skills have been documented by a number of researchers, but crows have fascinated many non-scientists as well, including our son, who sent this observation from Lake Thun in Switzerland.

Crow found a little paper cup with some dried out dregs of leftover ketchup in the bottom. This is the sort of little paper condiment cup that would come with some french fries. We watched the crow try a couple of times to scrape some ketchup out with his beak, holding the cup down with his foot. It apparently wasn’t working enough to his satisfaction, so he flew with the cup to the edge of the water (we were at the lake). He wanted to get the ketchup wet to “hydrate” it, to make it easier to scoop out. That was impressive enough, but what he did next was even more. There were little waves lapping on the “shore” (this was actually in a harbour and the shore was concrete) and each time threatening to carry away his cup. So he picked up the cup and carried it along up and down the shore until he found a little crevasse in the concrete that he could secure the cup, and let the water wash over it without taking it away. Clever.

If that’s not intelligence, then it’s hard to know what is.

One view of intelligence is that it’s what’s measured by IQ tests.  Or, at least, what humans think ‘thinking’ is all about.  But this is perhaps a very parochial view.  We tend to dismiss the kind of intricate brainwork that is required by nonverbal activities, or by athletes, or artists, or artisans.  We tend to equate intelligence with verbal kinds of skills measured on tests devised by the literate segments of society who are using the results to screen for various kinds of western-culture activities, suitability for school, and the like. There’s no reason to suggest that those aspects of brainware are not relevant to society, but it is our culturally chosen sort of definition.

Philosophers and perhaps most psychologists might not want to credit the crow with ‘intelligence’, or they may use the word but exclude concepts of perceptual consciousness—though whether there are adequate grounds for that that are not entirely based on our own experience as the defining one, isn’t clear (to us, at least).  In any case, wiring and behavior are empirically observable, but experience much less so, and consciousness as a component of brain activity, and perhaps of intelligence, remains elusive because it’s a subjective experience while science is a method for exploring the empirical, and in that sense objective world.

If bees and, indeed, very tiny insects can navigate around searching the environment, having ideas about ‘home’, finding mates, recognizing food and dangers, and they can do it with thousands rather than billions of neurons, at present we haven’t enough understanding of what ‘thinking’ is, much less ‘intelligence’, to know what goes through a bee’s or a crow’s mind when they’re exploring their world….

Link: Genetics and Homosexuality

Sexual preference is one of the most strongly genetically determined behavioural traits we know of. A single genetic element is responsible for most of the variation in this trait across the population. Nearly all (>95%) of the people who inherit this element are sexually attracted to females, while about the same proportion of people who do not inherit it are attracted to males. This attraction is innate, refractory to change and affects behaviour in stereotyped ways, shaped and constrained by cultural context. It is the commonest and strongest genetic effect on behaviour that we know of in humans (in all mammals, actually). The genetic element is of course the Y chromosome.

The idea that sexual behaviour can be affected by – even largely determined by – our genes is therefore not only not outlandish, it is trivially obvious. Yet claims that differences in sexual orientation may have at least a partly genetic basis seem to provoke howls of scepticism and outrage from many, mostly based not on scientific arguments but political ones.

The term sexual orientation refers to whether your sexual preference matches the typical preference based on whether or not you have a Y chromosome. It is important to realise that it therefore refers to four different states, not two: (i) has Y chromosome, is attracted to females; (ii) has Y chromosome, is attracted to males; (iii) does not have Y chromosome, is attracted to males; (iv) does not have Y chromosome, is attracted to females. We call two of these states heterosexual and two of them homosexual. (This ignores the many individuals whose sexual preferences are not so exclusive or rigid).

A recent twin study confirms that sexual orientation is moderately heritable – that is, that variation in genes contributes to variation in this trait. These effects are detected by looking at pairs of twins and determining how often, when one of them is homosexual, the other one is too. This rate is much higher (30-50%) in monozygotic, or identical, twins (who share all of their DNA sequence), than in dizygotic, or fraternal, twins (who share only half of their DNA), where the rate is 10-20%. If we assume that the environments of pairs of mono- or dizygotic twins are equally similar, then we can infer that the increased similarity in sexual orientation in pairs of monozygotic twins is due to their increased genetic similarity.
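
For readers wondering how “moderately heritable” is quantified from figures like these: the classical twin design compares trait similarity across the two kinds of twin pairs. The decomposition below is the textbook Falconer approximation, shown only to illustrate the logic; it is not the model such studies actually fit (for a binary trait like sexual orientation, concordance rates are first converted to tetrachoric correlations under a liability-threshold model before anything is estimated).

```latex
% Classical twin-design (Falconer) approximation, where r_MZ and r_DZ are the
% trait correlations in monozygotic and dizygotic twin pairs:
%   h^2 : additive genetic variance ("heritability")
%   c^2 : shared (family) environment
%   e^2 : non-shared environment plus measurement error
h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 \approx 2\,r_{DZ} - r_{MZ}, \qquad
e^2 \approx 1 - r_{MZ}
```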

These data are not yet published (or peer reviewed) but were presented by Dr. Michael Bailey at the recent American Association for the Advancement of Science meeting (Feb 12th 2014) and widely reported on. They confirm and extend findings from multiple previous twin studies across several different countries, which have all found fairly similar results (see here for more details). Overall, the conclusion that sexual orientation is partly heritable was already firmly made. 

The reaction to news of this recent study reveals a deep disquiet with the idea that homosexuality may arise due to genetic differences. First, there are those who scoff at the idea that such a complex behaviour could be determined by what may be only a small number of genetic differences – perhaps only one. As I recently discussed, this view is based on a fundamental misunderstanding of what genetic findings really mean. Finding that a trait (a difference in some system) can be affected by a single genetic difference does not mean a single gene is responsible for crafting the entire system – it simply means that the system does not work normally in the absence of that gene. (Just as a car does not work well without its steering wheel).

Others have expressed a variety of personal and political reactions to these findings, ranging from welcoming further evidence of a biological basis for sexual orientation to worry that it will be used to label homosexuality a genetic disorder and even to enable selective abortion based on genetic prediction. The latter possibility may be made more technically feasible by the other aspect of the recently reported study, which was the claim that they have mapped genetic variants affecting sexual orientation to two specific regions of the genome. (This doesn’t mean they have identified specific genetic variants but may be a step towards doing so).

Let’s explore what the data in this case really show and really mean. A variety of conclusions can be drawn from this and previous studies:

1. Differences in sexual orientation are partly attributable to genetic differences.

2. Sexual orientation in males and females is controlled by distinct sets of genes. (Dizygotic twins of opposite sex show no increased similarity in sexual orientation compared to unrelated people – if a female twin is gay, there is no increased likelihood that her twin brother will be too, and vice versa).

3. Male sexual orientation is rather more strongly heritable than female.

4. The shared family environment has no effect on male sexual orientation but may have a small effect on female sexual orientation.

5. There must also be non-genetic factors influencing this trait, as monozygotic twins are still often discordant (more often than concordant, in fact).

The fact that sexual orientation in males and females is influenced by distinct sets of genetic variants is interesting and leads to a fundamental insight: heterosexuality is not a single default state. It emerges from distinct biological processes that actively match the brain circuitry of (i) males or (ii) females to their chromosomal and gonadal sex so that most individuals who carry a Y chromosome are attracted to females and most people who do not are attracted to males.

What is being regulated, biologically, is not sexual orientation (whether you are attracted to people of the same or opposite sex), but sexual preference (whether you are attracted to males or females). Given how complex the processes of sexual differentiation of the brain are (involving the actions of many different genes), it is not surprising that they can sometimes be impaired due to variation in those genes, leading to a failure to match sexual preference to chromosomal sex. Indeed, we know of many specific mutations that can lead to exactly such effects in other mammals – it would be surprising if similar events did not occur in humans.

These studies are consistent with the idea that sexual preference is a biological trait – an innate characteristic of an individual, not strongly affected by experience or family upbringing. Not a choice, in other words. We didn’t need genetics to tell us that – personal experience does just fine for most people. But this kind of evidence becomes important when some places in the world (like Uganda, recently) appeal to science to claim (wrongly) that there is evidence that homosexuality is an active choice and use that claim directly to justify criminalisation of homosexual behaviour.

Importantly, the fact that sexual orientation is only partly heritable does not at all undermine the conclusion that it is a completely biological trait. Just because monozygotic twins are not always concordant for sexual orientation does not mean the trait is not completely innate. Typically, geneticists use the term “non-shared environmental variance” to refer to factors that influence a trait outside of shared genes or shared family environment. The non-shared environment term encompasses those effects that explain why monozygotic twins are actually less than identical for many traits (reflecting additional factors that contribute to variance in the trait across the population generally).

The terminology is rather unfortunate because “environmental” does not have its normal colloquial meaning in this context. It does not necessarily mean that some experience that an individual has influences their phenotype. Firstly, it encompasses measurement error (just the difficulty in accurately measuring the trait, which is particularly important for behavioural traits). Secondly, it includes environmental effects prior to birth (in utero), which may be especially important for brain development. And finally, it also includes chance or noise – in this case, intrinsic developmental variation that can have dramatic effects on the end-state or outcome of brain development. This process is incredibly complex and noisy, in engineering terms, and the outcome is, like baking a cake, never the same twice. By the time they are born (when the buns come out of the oven), the brains of monozygotic twins are already far from identical.

Genetic differences may thus change the probability of an outcome over many instances, without determining the specific outcome in any individual.
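One way to see how that can work is a toy liability-threshold simulation (not taken from any of the studies above): give two “twins” the same genetic liability, add independent developmental noise to each, and count how often they end up concordant. The threshold, noise level, and liability values below are invented purely for illustration.

```python
# Toy simulation: identical genes shift the probability of a trait without
# fixing the outcome, so "twins" with the same genotype are often discordant.
# All parameter values are invented for illustration.
import random

random.seed(1)

def develops_trait(genetic_liability, noise_sd=1.0, threshold=2.0):
    """Trait appears if liability plus random developmental noise crosses a threshold."""
    return genetic_liability + random.gauss(0.0, noise_sd) > threshold

def co_twin_concordance(genetic_liability, n_pairs=100_000):
    """Among pairs where twin 1 has the trait, how often does twin 2
    (same genes, independent developmental noise) have it too?"""
    both = first = 0
    for _ in range(n_pairs):
        t1 = develops_trait(genetic_liability)
        t2 = develops_trait(genetic_liability)  # same genes, different "noise"
        if t1:
            first += 1
            if t2:
                both += 1
    return both / first if first else 0.0

for liability in (0.0, 1.0):  # a lower-risk vs a higher-risk genotype
    print(f"liability={liability}: co-twin concordance ~ {co_twin_concordance(liability):.2f}")
```

Even the higher-liability genotype here leaves most co-twins discordant, which is the sense in which a trait can be entirely biological yet only moderately heritable.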

A useful analogy is to handedness. Handedness is only moderately heritable but is effectively completely innate or intrinsic to the individual. This is true even though the preference for using one hand over the other emerges only over time. The harsh experiences of many in the past who were forced (sometimes with deeply cruel and painful methods) to write with their right hands because left-handedness was seen as aberrant – even sinful – attest to the fact that the innate preference cannot readily be overridden. All the evidence suggests this is also the case for sexual preference.

What about concerns that these findings could be used as justification for labelling homosexuality a disorder? These are probably somewhat justified – no doubt some people will use it like that. And that places a responsibility on geneticists to explain that just because something is caused by genetic variants – i.e., mutations – does not mean it necessarily should be considered a disorder. We don’t consider red hair a disorder, or blue eyes, or pale skin, or – any longer – left-handedness. All of those are caused by mutations.

The word mutation is rather loaded, but in truth we are all mutants. Each of us carries hundreds of thousands of genetic variants, and hundreds of those are rare, serious mutations that affect the function of some protein. Many of those cause some kind of difference to our phenotype (the outward expression of our genotype). But a difference is only considered a disorder if it negatively impacts on someone’s life. And homosexuality is only a disorder if society makes it one.

Link: The Mental Life of Plants and Worms, Among Others

Charles Darwin’s last book, published in 1881, was a study of the humble earthworm. His main theme—expressed in the title, The Formation of Vegetable Mould through the Action of Worms—was the immense power of worms, in vast numbers and over millions of years, to till the soil and change the face of the earth. But his opening chapters are devoted more simply to the “habits” of worms.

Worms can distinguish between light and dark, and they generally stay underground, safe from predators, during daylight hours. They have no ears, but if they are deaf to aerial vibration, they are exceedingly sensitive to vibrations conducted through the earth, as might be generated by the footsteps of approaching animals. All of these sensations, Darwin noted, are transmitted to collections of nerve cells (he called them “the cerebral ganglia”) in the worm’s head.

“When a worm is suddenly illuminated,” Darwin wrote, it “dashes like a rabbit into its burrow.” He noted that he was “at first led to look at the action as a reflex one,” but then observed that this behavior could be modified—for instance, when a worm was otherwise engaged, it showed no withdrawal with sudden exposure to light.

For Darwin, the ability to modulate responses indicated “the presence of a mind of some kind.” He also wrote of the “mental qualities” of worms in relation to their plugging up their burrows, noting that “if worms are able to judge…having drawn an object close to the mouths of their burrows, how best to drag it in, they must acquire some notion of its general shape.” This moved him to argue that worms “deserve to be called intelligent, for they then act in nearly the same manner as a man under similar circumstances.”

As a boy, I played with the earthworms in our garden (and later used them in research projects), but my true love was for the seashore, and especially tidal pools, for we nearly always took our summer holidays at the seaside. This early, lyrical feeling for the beauty of simple sea creatures became more scientific under the influence of a biology teacher at school and our annual visits with him to the Marine Station at Millport in southwest Scotland, where we could investigate the immense range of invertebrate animals on the seashores of Cumbrae. I was so excited by these Millport visits that I thought I would like to become a marine biologist myself.

If Darwin’s book on earthworms was a favorite of mine, so too was George John Romanes’s 1885 book Jelly-Fish, Star-Fish, and Sea-Urchins: Being a Research on Primitive Nervous Systems, with its simple, fascinating experiments and beautiful illustrations. For Romanes, Darwin’s young friend and student, the seashore and its fauna were to be passionate and lifelong interests, and his aim above all was to investigate what he regarded as the behavioral manifestations of “mind” in these creatures.

I was charmed by Romanes’s personal style. (His studies of invertebrate minds and nervous systems were most happily pursued, he wrote, in “a laboratory set up upon the sea-beach…a neat little wooden workshop thrown open to the sea-breezes.”) But it was clear that correlating the neural and the behavioral was at the heart of Romanes’s enterprise. He spoke of his work as “comparative psychology,” and saw it as analogous to comparative anatomy.

Louis Agassiz had shown, as early as 1850, that the jellyfish Bougainvillea had a substantial nervous system, and by 1883 Romanes demonstrated its individual nerve cells (there are about a thousand). By simple experiments—cutting certain nerves, making incisions in the bell, or looking at isolated slices of tissue—he showed that jellyfish employed both autonomous, local mechanisms (dependent on nerve “nets”) and centrally coordinated activities through the circular “brain” that ran along the margins of the bell.

By 1883, Romanes was able to include drawings of individual nerve cells and clusters of nerve cells, or ganglia, in his book Mental Evolution in Animals. “Throughout the animal kingdom,” Romanes wrote,

nerve tissue is invariably present in all species whose zoological position is not below that of the Hydrozoa. The lowest animals in which it has hitherto been detected are the Medusae, or jelly-fishes, and from them upwards its occurrence is, as I have said, invariable. Wherever it does occur its fundamental structure is very much the same, so that whether we meet with nerve-tissue in a jelly-fish, an oyster, an insect, a bird, or a man, we have no difficulty in recognizing its structural units as everywhere more or less similar.

At the same time that Romanes was vivisecting jellyfish and starfish in his seaside laboratory, the young Sigmund Freud, already a passionate Darwinian, was working in the lab of Ernst Brücke, a physiologist in Vienna. His special concern was to compare the nerve cells of vertebrates and invertebrates, in particular those of a very primitive vertebrate (Petromyzon, a lamprey) with those of an invertebrate (a crayfish). While it was widely held at the time that the nerve elements in invertebrate nervous systems were radically different from those of vertebrate ones, Freud was able to show and illustrate, in meticulous, beautiful drawings, that the nerve cells in crayfish were basically similar to those of lampreys—or human beings.

And he grasped, as no one had before, that the nerve cell body and its processes—dendrites and axons—constituted the basic building blocks and the signaling units of the nervous system. Eric Kandel, in his book In Search of Memory: The Emergence of a New Science of Mind (2006), speculates that if Freud had stayed in basic research instead of going into medicine, perhaps he would be known today as “a co-founder of the neuron doctrine, instead of as the father of psychoanalysis.”

Although neurons may differ in shape and size, they are essentially the same from the most primitive animal life to the most advanced. It is their number and organization that differ: we have a hundred billion nerve cells, while a jellyfish has a thousand. But their status as cells capable of rapid and repetitive firing is essentially the same.

The crucial role of synapses—the junctions between neurons where nerve impulses can be modulated, giving organisms flexibility and a whole range of behaviors—was clarified only at the close of the nineteenth century by the great Spanish anatomist Santiago Ramón y Cajal, who looked at the nervous systems of many vertebrates and invertebrates, and by C.S. Sherrington in England (it was Sherrington who coined the word “synapse” and showed that synapses could be excitatory or inhibitory in function).

In the 1880s, however, despite Agassiz’s and Romanes’s work, there was still a general feeling that jellyfish were little more than passively floating masses of tentacles ready to sting and ingest whatever came their way, little more than a sort of floating marine sundew.

But jellyfish are hardly passive. They pulsate rhythmically, contracting every part of their bell simultaneously, and this requires a central pacemaker system that sets off each pulse. Jellyfish can change direction and depth, and many have a “fishing” behavior that involves turning upside down for a minute, spreading their tentacles like a net, and then righting themselves, which they do by virtue of eight gravity-sensing balance organs. (If these are removed, the jellyfish is disoriented and can no longer control its position in the water.) If bitten by a fish, or otherwise threatened, jellyfish have an escape strategy—a series of rapid, powerful pulsations of the bell—that shoots them out of harm’s way; special, oversized (and therefore rapidly responding) neurons are activated at such times.

Of special interest and infamous reputation among divers is the box jellyfish (Cubomedusae)—one of the most primitive animals to have fully developed image-forming eyes, not so different from our own. The biologist Tim Flannery, in an article in these pages, writes of box jellyfish:

They are active hunters of medium-sized fish and crustaceans, and can move at up to twenty-one feet per minute. They are also the only jellyfish with eyes that are quite sophisticated, containing retinas, corneas, and lenses. And they have brains, which are capable of learning, memory, and guiding complex behaviors.

We and all higher animals are bilaterally symmetrical, have a front end (a head) containing a brain, and a preferred direction of movement (forward). The jellyfish nervous system, like the animal itself, is radially symmetrical and may seem less sophisticated than a mammalian brain, but it has every right to be considered a brain, generating, as it does, complex adaptive behaviors and coordinating all the animal’s sensory and motor mechanisms. Whether we can speak of a “mind” here (as Darwin does in regard to earthworms) depends on how one defines “mind.”

We all distinguish between plants and animals. We understand that plants, in general, are immobile, rooted in the ground; they spread their green leaves to the heavens and feed on sunlight and soil. We understand that animals, in contrast, are mobile, moving from place to place, foraging or hunting for food; they have easily recognized behaviors of various sorts. Plants and animals have evolved along two profoundly different paths (fungi have yet another), and they are wholly different in their forms and modes of life.

And yet, Darwin insisted, they were closer than one might think. He wrote a series of botanical books, culminating in The Power of Movement in Plants (1880), just before his book on earthworms. He thought the powers of movement, and especially of detecting and catching prey, in the insectivorous plants so remarkable that, in a letter to the botanist Asa Gray, he referred to Drosera, the sundew, only half-jokingly as not only a wonderful plant but “a most sagacious animal.”

Darwin was reinforced in this notion by the demonstration that insect-eating plants made use of electrical currents to move, just as animals did—that there was “plant electricity” as well as “animal electricity.” But “plant electricity” moves slowly, roughly an inch a second, as one can see by watching the leaflets of the sensitive plant (Mimosa pudica) closing one by one along a leaf that is touched. “Animal electricity,” conducted by nerves, moves roughly a thousand times faster.

Signaling between cells depends on electrochemical changes, the flow of electrically charged atoms (ions), in and out of cells via special, highly selective molecular pores or “channels.” These ion flows cause electrical currents, impulses—action potentials—that are transmitted (directly or indirectly) from one cell to another, in both plants and animals.

Plants depend largely on calcium ion channels, which suit their relatively slow lives perfectly. As Daniel Chamovitz argues in his book What a Plant Knows (2012), plants are capable of registering what we would call sights, sounds, tactile signals, and much more. Plants know what to do, and they “remember.” But without neurons, plants do not learn in the same way that animals do; instead they rely on a vast arsenal of different chemicals and what Darwin termed “devices.” The blueprints for these must all be encoded in the plant’s genome, and indeed plant genomes are often larger than our own.

The calcium ion channels that plants rely on do not support rapid or repetitive signaling between cells; once a plant action potential is generated, it cannot be repeated at a fast enough rate to allow, for example, the speed with which a worm “dashes…into its burrow.” Speed requires ions and ion channels that can open and close in a matter of milliseconds, allowing hundreds of action potentials to be generated in a second. The magic ions, here, are sodium and potassium ions, which enabled the development of rapidly reacting muscle cells, nerve cells, and neuromodulation at synapses. These made possible organisms that could learn, profit by experience, judge, act, and finally think.
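For a rough sense of the scale difference being described, here is a small arithmetic sketch using only the figures quoted above (an inch per second for plant signalling, roughly a thousand times faster for nerves, and millisecond-scale channel gating); the specific distance and gating time are my own illustrative assumptions, not numbers from the essay.

```python
# Back-of-the-envelope comparison of the signalling speeds quoted above.
# The distance and gating time are illustrative assumptions, not measurements.

PLANT_SPEED = 0.0254               # metres per second (~ one inch per second)
NERVE_SPEED = PLANT_SPEED * 1000   # "roughly a thousand times faster"

distance = 0.10                    # a 10 cm path, e.g. along a leaf or a small animal
print(f"plant-style signal over 10 cm: {distance / PLANT_SPEED:.1f} seconds")
print(f"nerve-style signal over 10 cm: {distance / NERVE_SPEED * 1000:.1f} milliseconds")

# Millisecond-scale opening and closing of sodium/potassium channels is what
# allows hundreds of impulses per second:
gate_cycle_ms = 2.0                # assumed gating/refractory cycle, in milliseconds
print(f"upper bound on firing rate: ~{1000 / gate_cycle_ms:.0f} impulses per second")
```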

This new form of life—animal life—emerging perhaps 600 million years ago conferred great advantages, and transformed populations rapidly. In the so-called Cambrian explosion (datable with remarkable precision to 542 million years ago), a dozen or more new phyla, each with very different body plans, arose within the space of a million years or less—a geological eye-blink. The once peaceful pre-Cambrian seas were transformed into a jungle of hunters and hunted, newly mobile. And while some animals (like sponges) lost their nerve cells and regressed to a vegetative life, others, especially predators, evolved increasingly sophisticated sense organs, memories, and minds.

Link: Hell on Earth

At the University of Oxford, a team of scholars led by the philosopher Rebecca Roache has begun thinking about the ways futuristic technologies might transform punishment. In January, I spoke with Roache and her colleagues Anders Sandberg and Hannah Maslen about emotional enhancement, ‘supercrimes’, and the ethics of eternal damnation. What follows is a condensed and edited transcript of our conversation.

Suppose we develop the ability to radically expand the human lifespan, so that people are regularly living for more than 500 years. Would that allow judges to fit punishments to crimes more precisely?

Roache: When I began researching this topic, I was thinking a lot about Daniel Pelka, a four-year-old boy who was starved and beaten to death [in 2012] by his mother and stepfather here in the UK. I had wondered whether the best way to achieve justice in cases like that was to prolong death as long as possible. Some crimes are so bad they require a really long period of punishment, and a lot of people seem to get out of that punishment by dying. And so I thought, why not make prison sentences for particularly odious criminals worse by extending their lives?

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

The life-extension scenario may sound futuristic, but if you look closely you can already see it in action, as people begin to live longer lives than before. If you look at the enormous prison population in the US, you find an astronomical number of elderly prisoners, including quite a few with pacemakers. When I went digging around in medical journals, I found all these interesting papers about the treatment of pacemaker patients in prison.

Suppose prisons become more humane in the future, so that they resemble Norwegian prisons instead of those you see in America or North Korea. Is it possible that correctional facilities could become truly correctional in the age of long lifespans, by taking a more sustained approach to rehabilitation?

Roache: If people could live for centuries or millennia, you would obviously have more time to reform them, but you would also run into a tricky philosophical issue having to do with personal identity. A lot of philosophers who have written about personal identity wonder whether identity can be sustained over an extremely long lifespan. Even if your body makes it to 1,000 years, the thinking goes, that body is actually inhabited by a succession of persons over time rather than a single continuous person. And so, if you put someone in prison for a crime they committed at 40, they might, strictly speaking, be an entirely different person at 940. And that means you are effectively punishing one person for a crime committed by someone else. Most of us would think that unjust.

Let’s say that life expansion therapies become a normal part of the human condition, so that it’s not just elites who have access to them, it’s everyone. At what point would it become unethical to withhold these therapies from prisoners?

Roache: In that situation it would probably be inappropriate to view them as an enhancement, or something extra. If these therapies were truly universal, it’s more likely that people would come to think of them as life-saving technologies. And if you withheld them from prisoners in that scenario, you would effectively be denying them medical treatment, and today we consider that inhumane. My personal suspicion is that once life extension becomes more or less universal, people will begin to see it as a positive right, like health care in most industrialised nations today. Indeed, it’s interesting to note that in the US, prisoners sometimes receive better health care than uninsured people. You have to wonder about the incentives a system like that creates.

Where is that threshold of universality, where access to something becomes a positive right? Do we have an empirical example of it?

Roache: One interesting case might be internet access. In Finland, for instance, access to communication technology is considered a human right and handwritten letters are not sufficient to satisfy it. Finnish prisons are required to give inmates access to computers, although their internet activity is closely monitored. This is an interesting development because, for years, limiting access to computers was a common condition of probation in hacking cases – and that meant all kinds of computers, including ATMs [cash points]. In the 1980s, that lifestyle might have been possible, and you could also see pulling it off in the ’90s, though it would have been very difficult. But today computers are ubiquitous, and a normal life seems impossible without them; you can’t even access the subway without interacting with a computer of some sort.

In the late 1990s, an American hacker named Kevin Mitnick was denied all access to communication technology after law enforcement officials [in California] claimed he could ‘start a nuclear war by whistling into a pay phone’. But in the end, he got the ruling overturned by arguing that it prevented him from living a normal life.

What about life expansion that meddles with a person’s perception of time? Take someone convicted of a heinous crime, like the torture and murder of a child. Would it be unethical to tinker with the brain so that this person experiences a 1,000-year jail sentence in his or her mind?

Roache: There are a number of psychoactive drugs that distort people’s sense of time, so you could imagine developing a pill or a liquid that made someone feel like they were serving a 1,000-year sentence. Of course, there is a widely held view that any amount of tinkering with a person’s brain is unacceptably invasive. But you might not need to interfere with the brain directly. There is a long history of using the prison environment itself to affect prisoners’ subjective experience. During the Spanish Civil War [in the 1930s] there was actually a prison where modern art was used to make the environment aesthetically unpleasant. Also, prison cells themselves have been designed to make them more claustrophobic, and some prison beds are specifically made to be uncomfortable.

I haven’t found any specific cases of time dilation being used in prisons, but time distortion is a technique that is sometimes used in interrogation, where people are exposed to constant light, or unusual light fluctuations, so that they can’t tell what time of day it is. But in that case it’s not being used as a punishment, per se, it’s being used to break people’s sense of reality so that they become more dependent on the interrogator, and more pliable as a result. In that sense, a time-slowing pill would be a pretty radical innovation in the history of penal technology.

I want to ask you a question that has some crossover with theological debates about hell. Suppose we eventually learn to put off death indefinitely, and that we extend this treatment to prisoners. Is there any crime that would justify eternal imprisonment? Take Hitler as a test case. Say the Soviets had gotten to the bunker before he killed himself, and say capital punishment was out of the question – would we have put him behind bars forever?

Roache: It’s tough to say. If you start out with the premise that a punishment should be proportional to the crime, it’s difficult to think of a crime that could justify eternal imprisonment. You could imagine giving Hitler one term of life imprisonment for every person killed in the Second World War. That would make for quite a long sentence, but it would still be finite. The endangerment of mankind as a whole might qualify as a sufficiently serious crime to warrant it. As you know, a great deal of the research we do here at the Oxford Martin School concerns existential risk. Suppose there was some physics experiment that stood a decent chance of generating a black hole that could destroy the planet and all future generations. If someone deliberately set up an experiment like that, I could see that being the kind of supercrime that would justify an eternal sentence.

In your forthcoming paper on this subject, you mention the possibility that convicts with a neurologically stunted capacity for empathy might one day be ‘emotionally enhanced’, and that the remorse felt by these newly empathetic criminals could be the toughest form of punishment around. Do you think a full moral reckoning with an awful crime is the most potent form of suffering an individual can endure?

Roache: I’m not sure. Obviously, it’s an empirical question as to which feels worse, genuine remorse or time in prison. There is certainly reason to take the claim seriously. For instance, in literature and folk wisdom, you often hear people saying things like, ‘The worst thing is I’ll have to live with myself.’ My own intuition is that for very serious crimes, genuine remorse could be subjectively worse than a prison sentence. But I doubt that’s the case for less serious crimes, where remorse isn’t even necessarily appropriate – like if you are wailing and beating yourself up for stealing a candy bar or something like that.

I remember watching a movie in school, about a teen that killed another teen in a drunk-driving accident. As one of the conditions of his probation, the judge in the case required him to mail a daily cheque for 25 cents to the parents of the teen he’d killed for a period of 10 years. Two years in, the teen was begging the judge to throw him in jail, just to avoid the daily reminder.

Roache: That’s an interesting case where prison is actually an escape from remorse, which is strange because one of the justifications for prison is that it’s supposed to focus your mind on what you have done wrong. Presumably, every day you wake up in prison, you ask yourself why you are there, right?

What if these emotional enhancements proved too effective? Suppose they are so powerful, they turn psychopaths into Zen masters who live in a constant state of deep, reflective contentment. Should that trouble us? Is mental suffering a necessary component of imprisonment?

Roache: There is a long-standing philosophical question as to how bad the prison experience should be. Retributivists, those who think the point of prisons is to punish, tend to think that it should be quite unpleasant, whereas consequentialists tend to be more concerned with a prison’s reformative effects, and its larger social costs. There are a number of prisons that offer prisoners constructive activities to participate in, including sports leagues, art classes, and even yoga. That practice seems to reflect the view that confinement, or the deprivation of liberty, is itself enough of a punishment. Of course, even for consequentialists, there has to be some level of suffering involved in punishment, because consequentialists are very concerned about deterrence.

I wanted to close by moving beyond imprisonment, to ask you about the future of punishment more broadly. Are there any alternative punishments that technology might enable, and that you can see on the horizon now? What surprising things might we see down the line?

Roache: We have been thinking a lot about surveillance and punishment lately. Already, we see governments using ankle bracelets to track people in various ways, and many of them are fairly elaborate. For instance, some of these devices allow you to commute to work, but they also give you a curfew and keep a close eye on your location. You can imagine this being refined further, so that your ankle bracelet bans you from entering establishments that sell alcohol. This could be used to punish people who happen to like going to pubs, or it could be used to reform severe alcoholics. Either way, technologies of this sort seem to be edging up to a level of behaviour control that makes some people uneasy, due to questions about personal autonomy.

It’s one thing to lose your personal liberty as a result of being confined in a prison, but you are still allowed to believe whatever you want while you are in there. In the UK, for instance, you cannot withhold religious manuscripts from a prisoner unless you have a very good reason. These concerns about autonomy become particularly potent when you start talking about brain implants that could potentially control behaviour directly. The classic example is Robert G Heath [a psychiatrist at Tulane University in New Orleans], who did this famously creepy experiment [in the 1950s] using electrodes in the brain in an attempt to modify behaviour in people who were prone to violent psychosis. The electrodes were ostensibly being used to treat the patients, but he was also, rather gleefully, trying to move them in a socially approved direction. You can really see that in his infamous [1972] paper on ‘curing’ homosexuals. I think most Western societies would say ‘no thanks’ to that kind of punishment.

To me, these questions about technology are interesting because they force us to rethink the truisms we currently hold about punishment. When we ask ourselves whether it’s inhumane to inflict a certain technology on someone, we have to make sure it’s not just the unfamiliarity that spooks us. And more importantly, we have to ask ourselves whether punishments like imprisonment are only considered humane because they are familiar, because we’ve all grown up in a world where imprisonment is what happens to people who commit crimes. Is it really OK to lock someone up for the best part of the only life they will ever have, or might it be more humane to tinker with their brains and set them free? When we ask that question, the goal isn’t simply to imagine a bunch of futuristic punishments – the goal is to look at today’s punishments through the lens of the future.

Link: David Graeber: What’s the Point If We Can’t Have Fun?

My friend June Thunderstorm and I once spent a half an hour sitting in a meadow by a mountain lake, watching an inchworm dangle from the top of a stalk of grass, twist about in every possible direction, and then leap to the next stalk and do the same thing. And so it proceeded, in a vast circle, with what must have been a vast expenditure of energy, for what seemed like absolutely no reason at all.

“All animals play,” June had once said to me. “Even ants.” She’d spent many years working as a professional gardener and had plenty of incidents like this to observe and ponder. “Look,” she said, with an air of modest triumph. “See what I mean?”

Most of us, hearing this story, would insist on proof. How do we know the worm was playing? Perhaps the invisible circles it traced in the air were really just a search for some unknown sort of prey. Or a mating ritual. Can we prove they weren’t? Even if the worm was playing, how do we know this form of play did not serve some ultimately practical purpose: exercise, or self-training for some possible future inchworm emergency?

This would be the reaction of most professional ethologists as well. Generally speaking, an analysis of animal behavior is not considered scientific unless the animal is assumed, at least tacitly, to be operating according to the same means/end calculations that one would apply to economic transactions. Under this assumption, an expenditure of energy must be directed toward some goal, whether it be obtaining food, securing territory, achieving dominance, or maximizing reproductive success—unless one can absolutely prove that it isn’t, and absolute proof in such matters is, as one might imagine, very hard to come by.

I must emphasize here that it doesn’t really matter what sort of theory of animal motivation a scientist might entertain: what she believes an animal to be thinking, whether she thinks an animal can be said to be “thinking” anything at all. I’m not saying that ethologists actually believe that animals are simply rational calculating machines. I’m simply saying that ethologists have boxed themselves into a world where to be scientific means to offer an explanation of behavior in rational terms—which in turn means describing an animal as if it were a calculating economic actor trying to maximize some sort of self-interest—whatever their theory of animal psychology, or motivation, might be.

That’s why the existence of animal play is considered something of an intellectual scandal. It’s understudied, and those who do study it are seen as mildly eccentric. As with many vaguely threatening, speculative notions, difficult-to-satisfy criteria are introduced for proving animal play exists, and even when it is acknowledged, the research more often than not cannibalizes its own insights by trying to demonstrate that play must have some long-term survival or reproductive function.

Despite all this, those who do look into the matter are invariably forced to the conclusion that play does exist across the animal universe. And exists not just among such notoriously frivolous creatures as monkeys, dolphins, or puppies, but among such unlikely species as frogs, minnows, salamanders, fiddler crabs, and yes, even ants—which not only engage in frivolous activities as individuals, but also have been observed since the nineteenth century to arrange mock-wars, apparently just for the fun of it.

Why do animals play? Well, why shouldn’t they? The real question is: Why does the existence of action carried out for the sheer pleasure of acting, the exertion of powers for the sheer pleasure of exerting them, strike us as mysterious? What does it tell us about ourselves that we instinctively assume that it is?

Survival of the Misfits

The tendency in popular thought to view the biological world in economic terms was present at the nineteenth-century beginnings of Darwinian science. Charles Darwin, after all, borrowed the term “survival of the fittest” from the sociologist Herbert Spencer, that darling of robber barons. Spencer, in turn, was struck by how much the forces driving natural selection in On the Origin of Species jibed with his own laissez-faire economic theories. Competition over resources, rational calculation of advantage, and the gradual extinction of the weak were taken to be the prime directives of the universe.

The stakes of this new view of nature as the theater for a brutal struggle for existence were high, and objections registered very early on. An alternative school of Darwinism emerged in Russia emphasizing cooperation, not competition, as the driver of evolutionary change. In 1902 this approach found a voice in a popular book, Mutual Aid: A Factor of Evolution, by naturalist and revolutionary anarchist pamphleteer Peter Kropotkin. In an explicit riposte to social Darwinists, Kropotkin argued that the entire theoretical basis for Social Darwinism was wrong: those species that cooperate most effectively tend to be the most competitive in the long run. Kropotkin, born a prince (he renounced his title as a young man), spent many years in Siberia as a naturalist and explorer before being imprisoned for revolutionary agitation, escaping, and fleeing to London. Mutual Aid grew from a series of essays written in response to Thomas Henry Huxley, a well-known Social Darwinist, and summarized the Russian understanding of the day, which was that while competition was undoubtedly one factor driving both natural and social evolution, the role of cooperation was ultimately decisive.

The Russian challenge was taken quite seriously in twentieth-century biology—particularly among the emerging subdiscipline of evolutionary psychology—even if it was rarely mentioned by name. It came, instead, to be subsumed under the broader “problem of altruism”—another phrase borrowed from the economists, and one that spills over into arguments among “rational choice” theorists in the social sciences. This was the question that already troubled Darwin: Why should animals ever sacrifice their individual advantage for others? Because no one can deny that they sometimes do. Why should a herd animal draw potentially lethal attention to himself by alerting his fellows a predator is coming? Why should worker bees kill themselves to protect their hive? If to advance a scientific explanation of any behavior means to attribute rational, maximizing motives, then what, precisely, was a kamikaze bee trying to maximize?

We all know the eventual answer, which the discovery of genes made possible. Animals were simply trying to maximize the propagation of their own genetic codes. Curiously, this view—which eventually came to be referred to as neo-Darwinian—was developed largely by figures who considered themselves radicals of one sort or another. Jack Haldane, a Marxist biologist, was already trying to annoy moralists in the 1930s by quipping that, like any biological entity, he’d be happy to sacrifice his life for “two brothers or eight cousins.” The epitome of this line of thought came with militant atheist Richard Dawkins’s book The Selfish Gene—a work that insisted all biological entities were best conceived of as “lumbering robots,” programmed by genetic codes that, for some reason no one could quite explain, acted like “successful Chicago gangsters,” ruthlessly expanding their territory in an endless desire to propagate themselves. Such descriptions were typically qualified by remarks like, “Of course, this is just a metaphor, genes don’t really want or do anything.” But in reality, the neo-Darwinists were practically driven to their conclusions by their initial assumption: that science demands a rational explanation, that this means attributing rational motives to all behavior, and that a truly rational motivation can only be one that, if observed in humans, would normally be described as selfishness or greed. As a result, the neo-Darwinists went even further than the Victorian variety. If old-school Social Darwinists like Herbert Spencer viewed nature as a marketplace, albeit an unusually cutthroat one, the new version was outright capitalist. The neo-Darwinists assumed not just a struggle for survival, but a universe of rational calculation driven by an apparently irrational imperative to unlimited growth.

This, anyway, is how the Russian challenge was understood. Kropotkin’s actual argument is far more interesting. Much of it, for instance, is concerned with how animal cooperation often has nothing to do with survival or reproduction, but is a form of pleasure in itself. “To take flight in flocks merely for pleasure is quite common among all sorts of birds,” he writes. Kropotkin multiplies examples of social play: pairs of vultures wheeling about for their own entertainment, hares so keen to box with other species that they occasionally (and unwisely) approach foxes, flocks of birds performing military-style maneuvers, bands of squirrels coming together for wrestling and similar games:

We know at the present time that all animals, beginning with the ants, going on to the birds, and ending with the highest mammals, are fond of plays, wrestling, running after each other, trying to capture each other, teasing each other, and so on. And while many plays are, so to speak, a school for the proper behavior of the young in mature life, there are others which, apart from their utilitarian purposes, are, together with dancing and singing, mere manifestations of an excess of forces—“the joy of life,” and a desire to communicate in some way or another with other individuals of the same or of other species—in short, a manifestation of sociability proper, which is a distinctive feature of all the animal world.

To exercise one’s capacities to their fullest extent is to take pleasure in one’s own existence, and with sociable creatures, such pleasures are proportionally magnified when performed in company. From the Russian perspective, this does not need to be explained. It is simply what life is. We don’t have to explain why creatures desire to be alive. Life is an end in itself. And if what being alive actually consists of is having powers—to run, jump, fight, fly through the air—then surely the exercise of such powers as an end in itself does not have to be explained either. It’s just an extension of the same principle.

Friedrich Schiller had already argued in 1795 that it was precisely in play that we find the origins of self-consciousness, and hence freedom, and hence morality. “Man plays only when he is in the full sense of the word a man,” Schiller wrote in his On the Aesthetic Education of Man, “and he is only wholly a Man when he is playing.” If so, and if Kropotkin was right, then glimmers of freedom, or even of moral life, begin to appear everywhere around us.

It’s hardly surprising, then, that this aspect of Kropotkin’s argument was ignored by the neo-Darwinists. Unlike “the problem of altruism,” cooperation for pleasure, as an end in itself, simply could not be recuperated for ideological purposes. In fact, the version of the struggle for existence that emerged over the twentieth century had even less room for play than the older Victorian one. Herbert Spencer himself had no problem with the idea of animal play as purposeless, a mere enjoyment of surplus energy. Just as a successful industrialist or salesman could go home and play a nice game of cribbage or polo, why should those animals that succeeded in the struggle for existence not also have a bit of fun? But in the new full-blown capitalist version of evolution, where the drive for accumulation had no limits, life was no longer an end in itself, but a mere instrument for the propagation of DNA sequences—and so the very existence of play was something of a scandal.

Why Me?

It’s not just that scientists are reluctant to set out on a path that might lead them to see play—and therefore the seeds of self-consciousness, freedom, and moral life—among animals. Many are finding it increasingly difficult to come up with justifications for ascribing any of these things even to human beings. Once you reduce all living beings to the equivalent of market actors, rational calculating machines trying to propagate their genetic code, you accept that not only the cells that make up our bodies, but whatever beings are our immediate ancestors, lacked anything even remotely like self-consciousness, freedom, or moral life—which makes it hard to understand how or why consciousness (a mind, a soul) could ever have evolved in the first place.

American philosopher Daniel Dennett frames the problem quite lucidly. Take lobsters, he argues—they’re just robots. Lobsters can get by with no sense of self at all. You can’t ask what it’s like to be a lobster. It’s not like anything. They have nothing that even resembles consciousness; they’re machines. But if this is so, Dennett argues, then the same must be assumed all the way up the evolutionary scale of complexity, from the living cells that make up our bodies to such elaborate creatures as monkeys and elephants, who, for all their apparently human-like qualities, cannot be proved to think about what they do. That is, until suddenly, Dennett gets to humans, which—while they are certainly gliding around on autopilot at least 95 percent of the time—nonetheless do appear to have this “me,” this conscious self grafted on top of them, that occasionally shows up to take supervisory notice, intervening to tell the system to look for a new job, quit smoking, or write an academic paper about the origins of consciousness. In Dennett’s formulation,

Yes, we have a soul. But it’s made of lots of tiny robots. Somehow, the trillions of robotic (and unconscious) cells that compose our bodies organize themselves into interacting systems that sustain the activities traditionally allocated to the soul, the ego or self. But since we have already granted that simple robots are unconscious (if toasters and thermostats and telephones are unconscious), why couldn’t teams of such robots do their fancier projects without having to compose me? If the immune system has a mind of its own, and the hand–eye coordination circuit that picks berries has a mind of its own, why bother making a super-mind to supervise all this?

Dennett’s own answer is not particularly convincing: he suggests we develop consciousness so we can lie, which gives us an evolutionary advantage. (If so, wouldn’t foxes also be conscious?) But the question grows more difficult by an order of magnitude when you ask how it happens—the “hard problem of consciousness,” as David Chalmers calls it. How do apparently robotic cells and systems combine in such a way as to have qualitative experiences: to feel dampness, savor wine, adore cumbia but be indifferent to salsa? Some scientists are honest enough to admit they don’t have the slightest idea how to account for experiences like these, and suspect they never will.

Link: Life as a Nonviolent Psychopath

In 2005, James Fallon’s life started to resemble the plot of a well-honed joke or big-screen thriller: A neuroscientist is working in his laboratory one day when he thinks he has stumbled upon a big mistake. He is researching Alzheimer’s and using his healthy family members’ brain scans as a control, while simultaneously reviewing the fMRIs of murderous psychopaths for a side project. It appears, though, that one of the killers’ scans has been shuffled into the wrong batch.

The scans are anonymously labeled, so the researcher has a technician break the code to identify the individual in his family, and place his or her scan in its proper place. When he sees the results, however, Fallon immediately orders the technician to double check the code. But no mistake has been made: The brain scan that mirrors those of the psychopaths is his own.

After discovering that he had the brain of a psychopath, Fallon delved into his family tree and spoke with experts, colleagues, relatives, and friends to see if his behavior matched up with the imaging in front of him. He not only learned that few people were surprised at the outcome, but that the boundary separating him from dangerous criminals was less determinate than he presumed. Fallon wrote about his research and findings in the book The Psychopath Inside: A Neuroscientist’s Personal Journey Into the Dark Side of the Brain, and we spoke about the idea of nature versus nurture, and what—if anything—can be done for people whose biology might betray their behavior.


One of the first things you talk about in your book is the often unrealistic or ridiculous ways that psychopaths are portrayed in film and television. Why did you decide to share your story and risk being lumped in with all of that?

I’m a basic neuroscientist—stem cells, growth factors, imaging genetics—that sort of thing. When I found out about my scan, I kind of let it go after I saw that the rest of my family’s were quite normal. I was worried about Alzheimer’s, especially along my wife’s side, and we were concerned about our kids and grandkids. Then my lab was busy doing gene discovery for schizophrenia and Alzheimer’s and launching a biotech start-up from our research on adult stem cells. We won an award and I was so involved with other things that I didn’t actually look at my results for a couple of years.

This personal experience really had me look into a field that I was only tangentially related to, and burnished into my mind the importance of genes and the environment on a molecular level. For specific genes, those interactions can really explain behavior. And what is hidden under my personal story is a discussion about the effect of bullying, abuse, and street violence on kids.

You used to believe that people were roughly 80 percent the result of genetics, and 20 percent the result of their environment. How did this discovery cause a shift in your thinking?

I went into this with the bias of a scientist who believed, for many years, that genetics were very, very dominant in who people are—that your genes would tell you who you were going to be. It’s not that I no longer think that biology, which includes genetics, is a major determinant; I just never knew how profoundly an early environment could affect somebody.

While I was writing this book, my mother started to tell me more things about myself. She said she had never told me or my father how weird I was at certain points in my youth, even though I was a happy-go-lucky kind of kid. And as I was growing up, people all throughout my life said I could be some kind of gang leader or Mafioso don because of certain behavior. Some parents forbade their children from hanging out with me. They’d wonder how I turned out so well—a family guy, successful, professional, never been to jail and all that.

I asked everybody that I knew, including psychiatrists and geneticists that have known me for a long time, and knew my bad behavior, what they thought. They went through very specific things that I had done over the years and said, “That’s psychopathic.” I asked them why they didn’t tell me and they said, “We did tell you. We’ve all been telling you.” I argued that they had called me “crazy,” and they all said, “No. We said you’re psychopathic.”

I found out that I happened to have a series of genetic alleles, “warrior genes,” that had to do with serotonin and were thought to be at risk for aggression, violence, and low emotional and interpersonal empathy—if you’re raised in an abusive environment. But if you’re raised in a very positive environment, that can have the effect of offsetting the negative effects of some of the other genes.

I had some geneticists and psychiatrists who didn’t know me examine me independently, and look at the whole series of disorders I’ve had throughout my life. None of them have been severe; I’ve had the mild form of things like anxiety disorder and OCD, but it lined up with my genetics.

The scientists said, “For one, you might never have been born.” My mother had miscarried several times and there probably were some genetic errors. They also said that if I hadn’t been treated so well, I probably wouldn’t have made it out of being a teenager. I would have committed suicide or have gotten killed, because I would have been a violent guy.

How did you react to hearing all of this?

I said, “Well, I don’t care.” And they said, “That proves that you have a fair dose of psychopathy.” Scientists don’t like to be wrong, and I’m narcissistic so I hate to be wrong, but when the answer is there before you, you have to suck it up, admit it, and move on. I couldn’t.

I started reacting with narcissism, saying, “Okay, I bet I can beat this. Watch me and I’ll be better.” Then I realized my own narcissism was driving that response. If you knew me, you’d probably say, “Oh, he’s a fun guy”–or maybe, “He’s a big-mouth and a blowhard narcissist”—but I also think you’d say, “All in all, he’s interesting, and smart, and okay.” But here’s the thing—the closer to me you are, the worse it gets. Even though I have a number of very good friends, they have all ultimately told me over the past two years when I asked them—and they were consistent even though they hadn’t talked to each other—that I do things that are quite irresponsible. It’s not like I say, Go get into trouble. I say, Jump in the water with me.

What’s an example of that, and how do you come back from hurting someone in that way?

For me, because I need these buzzes, I get into dangerous situations. Years ago, when I worked at the University of Nairobi Hospital, a few doctors had told me about AIDS in the region as well as the Marburg virus. They said a guy had come in who was bleeding out of his nose and ears, and that he had been up in the Elgon, in the Kitum Caves. I thought, “Oh, that’s where the elephants go,” and I knew I had to visit. I would have gone alone, but my brother was there. I told him it was an epic trek to where the old matriarch elephants went to retrieve minerals in the caves, but I didn’t mention anything else.

When we got there, there was a lot of rebel activity on the mountain, so there was nobody in the park except for one guard. So we just went in. There were all these rare animals and it was tremendous, but also, this guy had died from Marburg after being here, and nobody knew exactly how he’d gotten it. I knew his path and followed it to see where he camped.

That night, we wrapped ourselves around a fire because there were lions and all these other animals. We were jumping around and waving sticks on fire at the animals in the absolute dark. My brother was going crazy and I joked, “I have to put my head inside of yours because I have a family and you don’t, so if a lion comes and bites one of our necks, it’s gotta be you.”

Again, I was joking around, but it was a real danger. The next day, we walked into the Kitum Caves and you could see where rocks had been knocked over by the elephants.  There was also the smell of all of this animal dung—and that’s where the guy got the Marburg; scientists didn’t know whether it was the dung or the bats.

A bit later, my brother read an article in The New Yorker about Marburg, which inspired the movie Outbreak. He asked me if I knew about it. I said, “Yeah. Wasn’t it exciting? Nobody gets to do this trip.” And he called me names and said, “Not exciting enough. We could’ve gotten Marburg; we could have gotten killed every two seconds.” All of my brothers have a lot of machismo and brio; you’ve got to be a tough guy in our family. But deep inside, I don’t think that my brother fundamentally trusts me after that. And why should he, right? To me, it was nothing.

After all of this research, I started to think of this experience as an opportunity to do something good out of being kind of a jerk my entire life. Instead of trying to fundamentally change—because it’s very difficult to change anything—I wanted to use what could be considered faults, like narcissism, to an advantage; to do something good.

What has that involved?

I started with simple things of how I interact with my wife, my sister, and my mother. Even though they’ve always been close to me, I don’t treat them all that well. I treat strangers pretty well—really well, and people tend to like me when they meet me—but I treat my family the same way, like they’re just somebody at a bar. I treat them well, but I don’t treat them in a special way. That’s the big problem.

I asked them this—it’s not something a person will tell you spontaneously—but they said, “I give you everything. I give you all this love and you really don’t give it back.” They all said it, and that sure bothered me. So I wanted to see if I could change. I don’t believe it, but I’m going to try.

In order to do that, every time I started to do something, I had to think about it, look at it, and go: No. Don’t do the selfish thing or the self-serving thing. Step-by-step, that’s what I’ve been doing for about a year and a half and they all like it. Their basic response is: We know you don’t really mean it, but we still like it.

I told them, “You’ve got to be kidding me. You accept this? It’s phony!” And they said, “No, it’s okay. If you treat people better it means you care enough to try.” It blew me away then and still blows me away now. 

But treating everyone the same isn’t necessarily a bad thing, is it? Is it just that the people close to you want more from you?

Yes. They absolutely expect and demand more. It’s a kind of cruelty, a kind of abuse, because you’re not giving them that love. My wife to this day says it’s hard to be with me at parties because I’ve got all these people around me, and I’ll leave her or other people in the cold. She is not a selfish person, but I can see how it can really work on somebody.

I gave a talk two years ago in India at the Mumbai LitFest on personality disorders and psychopathy, and we also had a historian from Oxford talk about violence against women in terms of the brain and social development. After it was over, a woman came up to me and asked if we could talk. She was a psychiatrist but also a science writer and said, “You said that you live in a flat emotional world—that is, that you treat everybody the same. That’s Buddhist.” I don’t know anything about Buddhism but she continued on and said, “It’s too bad that the people close to you are so disappointed in being close to you. Any learned Buddhist would think this was great.” I don’t know what to do with that.

Sometimes the truth is not just that it hurts, but that it’s just so disappointing. You want to believe in romance and have romance in your life—even the most hardcore, cold intellectual wants the romantic notion. It kind of makes life worth living. But with these kinds of things, you really start thinking about what a machine it means we are—what it means that some of us don’t need those feelings, while some of us need them so much. It destroys the romantic fabric of society in a way.

So what I do, in this situation, is think: How do I treat the people in my life as if I’m their son, or their brother, or their husband? It’s about going the extra mile for them so that they know I know this is the right thing to do. I know when the situation comes up, but my gut instinct is to do something selfish. Instead, I slow down and try to think about it. It’s like dumb behavioral modification; there’s no finesse to this, but I said, well, why does there have to be finesse? I’m trying to treat it as a straightaway thing, when the situation comes up, to realize there’s a chance that I might be wrong, or reacting in a poor way, or without any sort of love—like a human.

A few years ago there was an article in The New York Times called “Can You Call a 9-Year-Old a Psychopath?” The subject was a boy named Michael whose family was concerned about him—he’d been diagnosed with several disorders and eventually deemed a possible psychopath by Dan Waschbusch, a researcher at Florida International University who studies “callous-unemotional” children. Dr. Waschbusch examines these children in hopes of finding possible treatment or rehabilitation. You mentioned earlier that you don’t believe people can fundamentally change; what is your take on this research?

In the ’70s, when I was still a post-doc and a young professor, I started working with some psychiatrists and neurologists who would tell me that they could identify a probable psychopath when he or she was only 2 or 3 years old. I asked them why they didn’t tell the parents and they said, “There’s no way I’m going to tell anybody. First of all, you can’t be sure; second of all, it could destroy the kid’s life; and third of all, the media and the whole family will be at your door with sticks and knives.” So, when Dr. Waschbusch came out with this two years ago, it was like, “My god. He actually said it.” This was something that all psychiatrists and neurologists in the field knew—especially if they were pediatric psychologists and had the full trajectory of a kid’s life. It can be recognized very, very early—certainly before age 9—but by that time the question of how to un-ring the bell is a tough one.

My bias is that even though I work in growth factors, plasticity, memory, and learning, I think the whole idea of plasticity in adults—or really after puberty—is so overblown. No one knows if the changes that have been shown are permanent and it doesn’t count if it’s only temporary. It’s like the Mozart Effect—sure, there are studies saying there is plasticity in the brain using a sound stimulation or electrical stimulation, but talk to this person in a year or two. Has anything really changed? An entire cottage industry was made from playing Mozart to pregnant women’s abdomens. That’s how the idea of plasticity gets out of hand. I think people can change if they devote their whole life to the one thing and stop all the other parts of their life, but that’s what people can’t do. You can have behavioral plasticity and maybe change behavior with parallel brain circuitry, but the number of times this happens is really rare.

So I really still doubt plasticity. I’m trying to do it by devoting myself to this one thing—to being a nice guy to the people that are close to me—but it’s a sort of game that I’m playing with myself because I don’t really believe it can be done, and it’s a challenge.

In some ways, though, the stakes are different for you because you’re not violent—and isn’t that the concern? Relative to your own life, your attempts to change may positively impact your relationships with your friends, family, and colleagues. But in the case of possibly violent people, they may harm others.

The jump from being a “prosocial” psychopath, or somebody on the edge who doesn’t act out violently, to someone who is a real criminal predator is not clear. For me, I think I was protected because I was brought up in an upper-middle-class, educated environment with very supportive men and women in my family. So there may be a mass convergence of genetics and environment over a long period of time. But what would happen if I lost my family or lost my job; what would I then become? That’s the test.

For people who have the fundamental biology—the genetics, the brain patterns, and that early existence of trauma—first of all, if they’re abused they’re going to be pissed off and have a sense of revenge: I don’t care what happens to the world because I’m getting even. But a real, primary psychopath doesn’t need that. They’re just predators who don’t need to be angry at all; they do these things because of some fundamental lack of connection with the human race, and with individuals, and so on.

Someone who has money, and sex, and rock and roll, and everything they want may still be psychopathic—but they may just manipulate people, or use people, and not kill them. They may hurt others, but not in a violent way. Most people care about violence—that’s the thing. People may say, “Oh, this very bad investment counselor was a psychopath”—but the essential difference in criminality between that and murder is something we all hate and we all fear. It just isn’t known if there is some ultimate trigger. 

Link: The New Revolutionaries: Climate Scientists Demand Radical Change

To prevent catastrophic climate change, Britain’s top experts call for emissions cuts that require “revolutionary change to the political and economic hegemony.”

“Today, after two decades of bluff and lies, the remaining 2°C budget demands revolutionary change to the political and economic hegemony.”[1] That was in a blog posting last year by Kevin Anderson, Professor of Energy and Climate Change at Manchester University. One of Britain’s most eminent climate scientists, Anderson is also Deputy Director of the Tyndall Centre for Climate Change Research.

Or, we might take this blunt message, from an interview in November: “We need bottom-up and top-down action. We need change at all levels.”[2] Uttering those words was Tyndall Centre senior research fellow and Manchester University reader Alice Bows-Larkin. Anderson and Bows-Larkin are world-leading specialists on the challenges of climate change mitigation.

During December, the two were key players in a Radical Emission Reduction Conference, sponsored by the Tyndall Centre and held in the London premises of Britain’s most prestigious scientific institution, the Royal Society. The “radicalism” of the conference title referred to a call by the organisers for annual emissions cuts in Britain of at least 8 per cent – twice the rate commonly cited as possible within today’s economic and political structures.

The conference drew keen attention and wide coverage. In Sydney, the Murdoch-owned Daily Telegraph described the participants as “unhinged” and “eco-idiots,” going on to quote a “senior climate change adviser” for Shell Oil as stating:

“This was a room of catastrophists (as in ‘catastrophic global warming’), with the prevailing view…that the issue could only be addressed by the complete transformation of the global energy and political systems…a political ideology conference.”[3]

Indeed. The traditional “reticence” of scientists, which in the past has seen them mostly stick to their specialities and avoid comment on the social and political implications of their work, is no longer what it was.

Angered

Climate scientists have been particularly angered by the refusal of governments to act on repeated warnings about the dangers of climate change. Adding to the researchers’ bitterness, in more than a few cases, have been demands placed on them to soft-pedal their conclusions so as to avoid showing up ministers and policy-makers. Pressures to avoid raising “fundamental and uncomfortable questions” can be strong, Anderson explained to an interviewer last June.

“Scientists are being cajoled into developing increasingly bizarre sets of scenarios…that are able to deliver politically palatable messages. Such scenarios underplay the current emissions growth rate, assume ludicrously early peaks in emissions and translate commitments ‘to stay below [warming of] 2°C’ into a 60 to 70 per cent chance of exceeding 2°C.”[4]

Anderson and Bows-Larkin have been able to defy such pressures to the extent of co-authoring two remarkable, related papers, published by the Royal Society in 2008 and 2011.

In the second of these, the authors draw a distinction between rich and poor countries (technically, the UN’s “Annex 1” and “non-Annex 1” categories), while calculating the rates of emissions reduction in each that would be needed to keep average global temperatures within 2 degrees of pre-industrial levels.

The embarrassing news for governments is that the rich countries of Annex 1 would need to start immediately to cut their emissions at rates of about 11 per cent per year. That would allow the non-Annex 1 countries to delay their “peak emissions” to 2020, while developing their economies and raising living standards.

But the poor countries too would then have to start cutting their emissions at unprecedented rates – and the chance of exceeding 2 degrees of warming would still be around 36 per cent.[5] Even for a 50 per cent chance of exceeding 2 degrees, the rich countries would need to cut their emissions each year by 8-10 per cent.[6]

As Anderson points out, it is virtually impossible to find a mainstream economist who would see annual emissions reductions of more than 3-4 per cent as compatible with anything except severe recession, given an economy constituted along present lines.[7]
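To see why the gap between 3-4 per cent and 10-11 per cent a year matters so much, note that if emissions fall by a fixed fraction every year after the peak, total future emissions form a geometric series summing to roughly one over the annual cut. The short sketch below uses normalised, hypothetical figures rather than the papers’ own carbon accounting; it is only meant to illustrate the arithmetic by which cutting at 3-4 per cent a year commits roughly three times as much cumulative carbon as cutting at 10-11 per cent.

    # Back-of-envelope only: cumulative future emissions under a constant annual
    # percentage cut, expressed as multiples of peak-year emissions (peak = 1).
    # Hypothetical, normalised figures; not Anderson and Bows-Larkin's accounting.

    def cumulative_emissions(annual_cut: float, years: int = 300) -> float:
        """Sum E_t = (1 - annual_cut)**t for t = 0 .. years-1, with the peak year set to 1."""
        return sum((1 - annual_cut) ** t for t in range(years))

    for cut in (0.03, 0.04, 0.08, 0.10, 0.11):
        # The series sums to roughly 1 / annual_cut, so halving the rate of cuts
        # roughly doubles the carbon eventually emitted.
        print(f"cutting {cut:.0%} per year -> ~{cumulative_emissions(cut):.1f}x peak-year emissions in total")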

Four degrees?

What if the world kept its market-based economies, and after a peak in 2020, started reducing its emissions by this “allowable” 3-4 per cent? In their 2008 paper, Anderson and Bows-Larkin present figures that suggest a resulting eventual level of atmospheric carbon dioxide equivalent of 600-650 parts per million.[8] Climate scientist Malte Meinshausen estimates that 650 ppm would yield a 40 per cent chance of exceeding not just two degrees, but four.[9]

Anderson in the past has spoken out on what we might expect a “four-degree” world to be like. In a public lecture in October 2011 he described it as “incompatible with organised global community”, “likely to be beyond ‘adaptation’” and “devastating to the majority of ecosystems”. Moreover, a four-degree world would have “a high probability of not being stable”. That is, four degrees would be an interim temperature on the way to a much higher equilibrium level.[10]

In comments reported in the Scotsman newspaper in 2009, he focused on the human element:

“I think it’s extremely unlikely that we wouldn’t have mass death at 4C. If you have got a population of nine billion by 2050 and you hit 4C, 5C or 6C, you might have half a billion people surviving.”[11]

No wonder intelligent people are in revolt.

Market methods?

Anderson has also emerged as a powerful critic of the orthodoxy that emissions reduction must be based on market methods if it is to have a chance of working. His views on this point were brought into focus last October in a sharp rejoinder to United Nations climate-change chief – and market enthusiast – Rajendra Pachauri:

“I disagree strongly with Dr Pachauri’s optimism about markets and prices delivering on the international community’s 2°C commitments,” the British Independent quoted Anderson as saying. “I hold that such a market-based approach is doomed to failure and is a dangerous distraction from a comprehensive regulatory and standard-based framework.”[12]

Anderson’s critique of market-led abatement schemes centres on his conclusion that the two-degree threshold “is no longer deliverable through gradual mitigation, but only through deep cuts in emissions, i.e., non-marginal reductions at almost step-change levels.

“By contrast, a fundamental premise of contemporary neo-classical economics is that markets (including carbon markets) are only efficient at allocating scarce resources when the changes being considered are very small – i.e. marginal.

“For a good chance of staying below two degrees Celsius,” Anderson notes, “future emissions from the EU’s energy system … need to reduce at rates of around 10 per cent per annum – mitigation far below what marginal markets can reasonably be expected to deliver.”[13]

If an attempt were made to secure these reductions through cap-and-trade methods, he argues, “the price would almost certainly be beyond anything described as marginal (probably many hundreds of euros per tonne) – hence the great ‘efficiency’ and ‘least-cost’ benefits claimed for markets would no longer apply.”[14]

At the same time, the equity and social justice implications would be devastating. “A carbon price can always be paid by the wealthy,” Anderson points out.

“We may buy a slightly more efficient 4WD/SUV, cut back a little on our frequent flying, consider having a smaller second home…but overall we’d carry on with our business as usual. Meanwhile, the poorer sections of our society…would have to cut back still further in heating their inadequately insulated and badly designed rented properties.”[15]

Energy agenda

In the short-term, Anderson argues, a two-degree energy agenda requires “rapid and deep reductions in energy demand, beginning immediately and continuing for at least two decades.” This could buy time while a low-carbon energy supply system is constructed. A “radical plan” for emissions reduction, he indicates, is among the projects under way within the Tyndall Centre.[16]

The cost of emissions cuts, he insists, needs to fall on “those people primarily responsible for emitting.”[17] As quoted by writer Naomi Klein, Anderson estimates that 1-5 per cent of the population is responsible for 40-60 per cent of carbon pollution.[18]

While not rejecting price mechanisms in a supporting role, Anderson argues that the required volume of emissions cuts can only be achieved through stringent and increasingly demanding regulations. His “provisional and partial list” includes the following:

  • Strict energy/emission standards for appliances, with a clear long-term market signal of the amount by which the standards would annually tighten; e.g. 100 gCO2/km for all new cars commencing 2015 and reducing at 10 per cent each year through to 2030.
  • Strict energy supply standards; e.g. for electricity, 350 gCO2/kWh as the mean emissions level of a supplier’s portfolio of power stations, tightened at ~10 per cent per annum.
  • A programme of rolling out stringent energy/emission standards for industry equipment.
  • Stringent minimum efficiency standards for all properties for sale or rent.
  • World-leading low-energy standards for all new-build houses, offices etc.

Enforcing these radical standards, he argues, “could be achieved, at least initially, with existing technologies and at little to no additional cost.”[19]
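For a sense of what “tightening at 10 per cent each year” compounds to, take the new-car figure in the list above (100 gCO2/km from 2015 through to 2030). The sketch below is my own illustrative arithmetic, not something drawn from Anderson’s list:

    # Illustrative arithmetic only: compounding of an annually tightened standard,
    # applied to the new-car figure quoted in the list above.

    def standard(start_value: float, start_year: int, year: int, tighten: float = 0.10) -> float:
        """Permitted level in a given year under a constant annual tightening rate."""
        return start_value * (1 - tighten) ** (year - start_year)

    for year in (2015, 2020, 2025, 2030):
        print(year, f"{standard(100.0, 2015, year):.1f} gCO2/km")
    # Fifteen years of 10 per cent tightening brings the cap down by a factor of
    # roughly five, to about 21 gCO2/km by 2030.

Fifteen consecutive 10 per cent cuts multiply to 0.9 raised to the fifteenth power, a reduction of roughly a factor of five, which is what makes a “clear long-term market signal” of this kind so demanding.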

Economic growth

For a reasonable chance of keeping warming below 2 degrees, Anderson maintains, wealthier countries would need to forgo economic growth for at least ten to twenty years. Here, he bases himself on the conventional wisdom of “integrated assessment modellers”[20] – and arguably gets things quite wrong. Leading American climate blogger Joseph Romm last year came to sharply different conclusions:

“The IPCC’s last review of the mainstream economic literature found that even for stabilization at CO2 levels as low as 350 ppm, ‘global average macro-economic costs’ in 2050 correspond to ‘slowing average annual global GDP growth by less than 0.12 percentage points’.  It should be obvious the net cost is low. Energy use is responsible for the overwhelming majority of emissions, and energy costs are typically about 10 percent of GDP.”[21]

At a time when jobless workers abound, and large amounts of industrial capacity lie unused, mobilising resources and labour to replace polluting equipment could sharply increase Gross Domestic Product. Moreover, account needs to be taken of the absurdities of GDP itself – as a measurement tool that counts as useful activity building prisons and developing weapons systems. Anderson senses some of these contradictions when he states:

“Mitigation rates well above the economists’ 3 to 4 per cent per annum range may yet prove compatible with some form of economic prosperity.”[22]

Indeed, reconstructing our inefficient, polluting industrial system could allow the great majority of us to lead richer, more rewarding lives.

Reprisals

Where Anderson is not wrong is in anticipating, at various points in his blogging and interviews, that any serious move to cut emissions at the required rates will encounter fierce resistance. Huge industrial assets, primarily fossil-fuelled generating plant, would be “stranded”. Already-proven reserves of coal, oil and gas would need to be left in the ground.

Like the scientists accused in 2009 in the spurious “Climategate” affair, the people who spoke out at the Radical Emission Reduction Conference can now expect to feel the blow-torch of conservative reprisals.

Along with Anderson and Bows-Larkin, a particular target is likely to be Tyndall Centre Director Professor Corinne Le Quéré, who presented the scientific case for rapid emissions reduction. Four Australian academics who contributed via weblink, including noted climate scientist Mark Diesendorf, have already come under venomous personal attack in the Daily Telegraph.[23]

The “offence” committed by the Tyndall researchers is much greater than the loosely phrased e-mails that were seized on as the pretext for “Climategate.” With others in the climate-science community, these courageous people have shredded the pretence that polluter corporations and their supporting-act governments care a damn about preserving nature, civilisation, and human life.

Link: Antibiotics, Capitalism and the Failure of the Market

In March 2013, England’s Chief Medical Officer, Dame Sally Davies, gave the stark warning that antimicrobial resistance poses “a catastrophic threat”. Unless we act now, she argued, “any one of us could go into hospital in 20 years for minor surgery and die because of an ordinary infection that can’t be treated by antibiotics. And routine operations like hip replacements or organ transplants could be deadly because of the risk of infection.”[1]

Over billions of years, bacteria have encountered a multitude of naturally occurring antibiotics and have consequently developed resistance mechanisms to survive. The primary emergence of resistance is random, coming about by DNA mutation or gene exchange with other bacteria. However, the further use of antibiotics then favours the spread of those bacteria that have become resistant.

More than 70% of pathogenic bacteria that cause healthcare-acquired infections are resistant to at least one of the drugs most commonly used to treat them.[2][3] Increasing resistance in bacteria like Escherichia coli (E. coli) is a growing public health concern because of the very limited therapy options for infections caused by E. coli. This is particularly so for E. coli that is resistant to carbapenem antibiotics, the drugs of last resort.

The emergence of resistance is a complex issue involving the inappropriate use and overuse of antimicrobials in humans and animals. Antibiotics may be administered by health professionals or farmers when they are not required, or patients may take only part of a full course of treatment. This gives bacteria the opportunity to encounter these otherwise life-saving drugs at ineffective levels, survive, and give rise to resistant strains. Once created, these resistant strains are then allowed to spread by poor infection control and weak regional surveillance procedures.
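To make the selection step concrete, here is a deliberately crude toy model. The growth and kill rates are hypothetical, chosen only for illustration, and are not taken from the article or from clinical data; the point is simply that once a resistant variant exists, drug exposure removes its susceptible competitors and the resistant strain comes to dominate.

    # Toy model only: hypothetical per-hour rates, not clinical data.
    # Both strains grow at the same rate; the antibiotic kills susceptible cells
    # far faster than resistant ones, so a vanishingly rare resistant variant
    # ends up dominating the population.

    def simulate(hours: float = 48.0, dt: float = 0.1,
                 growth: float = 0.6,            # per-hour growth rate, both strains
                 kill_susceptible: float = 1.2,  # per-hour kill rate under the drug
                 kill_resistant: float = 0.1):   # residual kill rate for the resistant strain
        susceptible, resistant = 1e6, 1.0        # resistant variant starts vanishingly rare
        for _ in range(int(hours / dt)):
            susceptible += dt * (growth - kill_susceptible) * susceptible
            resistant += dt * (growth - kill_resistant) * resistant
        return susceptible, resistant

    s, r = simulate()
    print(f"susceptible: {s:.2e}   resistant: {r:.2e}   resistant fraction: {r / (s + r):.4f}")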

These two problems are easily solved by educating healthcare professionals, patients and animal keepers about the importance of antibiotic treatment regimens and of keeping to them. Advocating good infection control procedures in hospitals and investing in surveillance programs that monitor patterns of resistance locally and across the country would reduce the spread of infection. However, the biggest problem is capitalism and the fact that there is no supply of new antimicrobials.

Between 1929 and the 1970s pharmaceutical companies developed more than twenty new classes of antimicrobials.[4][5] Since the 1970s only two new categories of antimicrobials have arrived.[6][7] Today the pipeline for new antibiotic classes active against highly resistant Gram-negative bacteria is dry;[8][9][10] the only novel category in early clinical development has recently been withdrawn.[9][11]

For the last seventy years the human race has kept itself ahead of resistant bacteria by going back into the laboratory and developing the next generation of antimicrobials. However, due to a failure of the market, pharmaceutical companies are no longer interested in developing antibiotics.

Despite the warnings from Dame Sally Davies, drug companies have pulled back from antimicrobial research because there is no profit to be made from it. When used appropriately, a single £100 course of antibiotics will save someone’s life. However, that clinical effectiveness and short-term use have the unfortunate consequence of making antimicrobials significantly less profitable than the pharmaceuticals used in cancer therapy, which can cost £20,000 per year.

In our current system, a drug company’s return on its financial investment in antimicrobials depends on its volume of sales. A further problem arises when we factor in the educational programs aimed at teaching healthcare professionals and animal keepers to limit their use of antimicrobials. This, combined with the relative unprofitability, has produced a failure in the market and a paradox for capitalism.

A response commonly proposed by my fellow scientists is that our government must provide incentives for pharmaceutical companies to develop new antimicrobial drugs. Suggestions are primarily focused on reducing the financial risk for drug companies and include grants, prizes, tax breaks, creating public-private partnerships and increasing intellectual property protections. Further suggestions often relate to removing “red tape” and streamlining the drug approval and clinical trial requirements.

In September 2013 the Department of Health published its UK Five Year Antimicrobial Resistance Strategy.[12] The document called for “work to reform and harmonise regulatory regimes relating to the licencing and approval of antibiotics”, better collaboration “encouraging greater public-private investment in the discovery and development of a sustainable supply of effective new antimicrobials” and states that “Industry has a corporate and social responsibility to contribute to work to tackle antimicrobial resistance.”

I think we should have three major objections to these statements. One, the managers in the pharmaceutical industry do not have any responsibility to contribute to work to tackle antimicrobial resistance. They have a responsibility to practice within the law, or be fined, and to make a profit for shareholders, or be replaced. It is the state that has the responsibility for the protection and wellbeing of its citizens.

Secondly, following this year’s horsemeat scandal, we should object to companies cutting corners in an attempt to increase profits. This leads on to the final objection: that by promoting public-private collaboration, all the state is doing is subsidising shareholder profits by reducing the shareholders’ financial risk.

The market has failed and novel antimicrobials will require investment not based on a financial return from the volume of antibiotics sold but on the benefit for society of being free from disease.

John Maynard Keynes, in his 1924 Sydney Ball Foundation Lecture at Cambridge, said that “the important thing for government is not to do things which individuals are doing already, and to do them a little better or a little worse; but to do those things which at present are not done at all”.[13] Mariana Mazzucato, in her 2013 book The Entrepreneurial State, discusses how the state can lead innovation and criticises the risk and reward relationships in current public-private partnerships.[14] Mazzucato argues that the state can be entrepreneurial and inventive and that we need to reinvent the state and government.

This praise of the potential of the state seems to be supported by the public: following announcements of energy price rises in October 2013, a YouGov poll found that people opposed the NHS being run by the private sector by 12 to 1; 67% were in favour of Royal Mail being run in the public sector; 66% wanted railway companies to be nationalised; and 68% were in favour of nationalised energy companies.[15]

We should support state-funded professors, post-doctoral researchers and PhD students as scientists working within the public sector. They could study the mechanisms of drug entry into bacterial cells or screen natural antibiotic compounds. This could not be done on a shoestring budget, and it would no doubt take years to build the infrastructure, but we could also make deliberate choices about where the research takes place.

Andrew Witty’s recent review of higher education and regional growth asked universities to become more involved in their local economies.[16] The state could choose to build laboratories in geographical areas neglected by private sector investment and help promote regional recovery. Even more radically, if novel antibiotics are produced for their social good rather than financial gain, they can be reserved indefinitely until a time of crisis.

With regard to democracy, patients and the general public could have a greater say in what is researched, helping to shift us away from our reliance on the market to provide what society needs. The market responds not to what society needs but to what will create the most profit. This is a recurring theme throughout science. I cannot begin to tell you how frequently I listen to case studies regarding parasites which only affect people in the developing world. Again, the people of the developing world have very little money, so drug companies neglect to develop drugs as there is no source of profit. We should make the case for innovation driven not by greed but by service to society and even our species.

Before Friedrich Hayek, John Desmond Bernal, in his 1939 book The Social Function of Science, argued for more spending on innovation, as science was not merely an abstract intellectual enquiry but of real practical value.[17] Bernal placed science and technology among the driving forces of history. Why should we not follow that path?

Link: Homo Scientificus According to Beckett

DAVIDSON: The original title suggested to our speaker by our valiant organizer was Basic Research Responsibilities. The title submitted by the speaker to the calendar is “Homo Scientificus According to Beckett”. As far as I know there are two Becketts in history. One of them got killed in a cathedral and the other got a Nobel Prize for writing plays. That’s all I know about the seminar and I’m looking forward to hearing it.

DELBRÜCK: In December 1970 Bill Beranek wrote me a letter saying that he wanted one of these sessions devoted to the subject: “The Responsibility of the Scientist to Society with Respect to Pure Basic Research”. He added a number of questions, which I will quickly answer, as best I can.

Q. 1: Is pure science to be regarded as overall beneficial to society?

A: It depends much on what you consider benefits. If you look at health, long life, transportation, communication, education, you might be tempted to say “yes”. If you look at the enormous social-economic dislocations, and at strains on our psyches due to the imbalance between technical developments and our limited ability to adjust to the pace of change, you might be tempted to say “no”. Clearly, the present state of the world — to which science has contributed much — leaves a great deal to be desired, and much to be feared, so I write down:

(1) Q: SCIENCE BENEFICIAL? A: DOUBTFUL.

Q. 2: Is pure science to be considered as something potentially harmful?

A: Most certainly! Every child knows that it is potentially exceedingly harmful. Our lecture series here on environmental problems concerns just a small aspect. The menace of blowing ourselves up by atom bombs, doing ourselves in by chemical or biological warfare, or by population explosion is certainly with us. I consider the environment thing a trivial question by comparison, like housekeeping. In any home, the dishes have to be washed, the floors swept, the beds made, and there must be rules as to who is allowed to produce how much stink and noise, and where in the house. When the garbage piles up, these questions become pressing. But they are momentary problems. Once the house is in order, you still want to live in it, not just sit around enjoying its orderliness. I would be sorry to see Caltech move heavily into this type of applied research.

(2) Q: SCIENCE POTENTIALLY HARMFUL? A: DEFINITELY.

Q. 3: Should a scientist consider possible ramifications of his research and their effects on society, or is this something not only difficult to do but perhaps better done by others?

A: I think it is impossible for anybody, scientist or not, to foresee the ramifications. We might say that that is a definition of basic science. Vide Einstein’s discovery in 1905 of the equivalence of mass and energy and the development of atomic weaponry.

(3) Q: CONSIDER RAMIFICATIONS? A: IMPOSSIBLE.

So much for Bill‘s original questions in December.

I agreed to come to the lectures and then decide whether I thought I had something to contribute. After having listened to a series of lectures on environmental problems, such as lead poisoning, mercury poisoning, on smog, on waste disposal, on fuel additives, and to Dan Kevles’ and George Hammond’s more general talks, I told Bill that I had found the series interesting and worthwhile but that I felt most uneasy about where I might fit in. So he wrote me another letter. Tenacious guy. With more questions. These again I can answer in short order.

Q. 4: Why did you choose science as your life’s work?

A: I think the most relevant answer that I can give to this question is this: I found out at an early age that science is a haven for the timid, the freaks, the misfits. That is more true perhaps for the past than now. If you were a student in Göttingen in the ’20s and went to the seminar “Structure of Matter”, which was under the joint auspices of David Hilbert and Max Born, then as you walked in there you could well imagine that you were in a madhouse. Every one of the persons there was obviously some kind of a severe case. The least you could do was put on some kind of a stutter. Robert Oppenheimer as a graduate student found it expedient to develop a very elegant kind of stutter, the “njum-njum-njum” technique. Thus, if you were an oddball you felt at home.

(4) Q: WHY SCIENTIFIC CAREER? A: A HAVEN FOR FREAKS.

Q. 5: What is the history of your research?

A: Perhaps the most relevant aspect is that it throve under adversity. The two periods that I have in mind were (1) in Germany in the middle ’30s under the Nazis, when things became quite unpleasant and official seminars became dull. Many people emigrated; others did not leave but were not permitted to come to official seminars. We had a little private club which I had organized and which met about once a week, mostly at my mother’s house. First just theoretical physicists (I was at that time a theoretical physicist), and then theoretical physicists and biologists. The discussions we had at that time have had a remarkable long-range effect, an effect which astonished us all. This was one adverse situation, like the great Plague in Florence in 1348, which is the background setting for Boccaccio’s Decameron. The other one was in this country in the ’40s during the war. I came over in ’37 and was in this country during the war as an enemy alien. And as an enemy alien I secured a job as an instructor of physics at Vanderbilt University in Nashville, Tennessee. You might think that this was a very unpropitious place to be, but it worked out fine. I spent 7 1/2 years there. This situation gave me, in association with Luria (another enemy alien) and in close contact with Hershey (another misfit in society), sufficient leisure to do the first phase of phage research, which has become a cornerstone of molecular genetics.

I would not want to generalize to the extent that adversity is the only road to effective innovative science or art, but the progress of science is often spectacularly disorderly. James Joyce once commented that he survived by “cunning and exile” (and, you might add, by a genius for borrowing money from a number of ladies). I got along all right with the head of the Physics Department at Vanderbilt. He wanted me to do as much physics teaching as possible and as little biology research as possible. I had the opposite desires. We understood each other’s attitudes and accommodated each other to a reasonable extent. So, things worked out quite well. At the end of the war I was the oldest instructor on the campus.

(5) Q: HISTORY OF YOUR RESEARCH? A: THROVE UNDER ADVERSITY.

Q. 6: Why do you think society should pay for basic research?

A: Did I say that society should pay for basic research? I didn’t. Society does so to a varying extent, and it always astonishes me that it does. It has been part of the current dogma that basic research is good for society but I would be the last to be dogmatic about the number of dollars society should put up for this goodness. Since I answered the first question with “Doubtful”, I cannot very well be emphatic in answer to this one.

(6) Q: SOCIETY PAY FOR RESEARCH? A: HOW MUCH?

Q. 7: How much control do you feel society should have in deciding which questions you should ask in your research?

A: Society can, and does, and must control research enormously, negatively and positively, by selectively cutting off or supplying funds. At present it cuts — not so selectively. That is all right with me, as far as my own research is concerned. I certainly do not think society owes me a living, or support for my research. If it does not support my research, I can always do something else and not be worse off, perhaps better. However, the question, from society’s point of view, is exceedingly complicated. I have no strong views on the matter.

(7) Q: CONTROL OF RESEARCH BY SOCIETY? A: A COMPLICATED MATTER, LARGELY OF PROCEDURE.

Q. 8: Is there an unwritten scientific oath analogous to the Hippocratic oath which would ask all scientists to use their special expertise and way of thinking to guard against the bad effects of science on society, especially today when science is acknowledged to play such a large part in the lives of individuals?

A: The original Hippocratic oath, of course, says that you should keep the patient alive under all circumstances. Also that you shouldn’t be bribed, shouldn’t give poisons, should honor your teachers, and things like that, but essentially to keep the patient alive. And that’s a reasonably well defined goal since keeping the patient alive is biologically unambiguous. But to use science for the good of society is not so well defined, therefore I think such an oath could never be written. The only unwritten oath is of course that you should be reasonably honest, and that is in fact carried out to the extent that, although many things that you read in the journals are wrong, it is assumed that the author at least believed that he was right. So much so that if somebody deliberately sets out to cheat he can get away with it for years. There are a number of celebrated cases of cheating or hoaxes that would make a long story. But our whole scientific discourse is based on the premise that everybody is trying at least to tell the truth, within the limits of his personality; that can be some limit.

(8) Q: HIPPOCRATIC OATH? A: IMPOSSIBLE TO BE UNAMBIGUOUS.

Q. 9: Is science something we do mainly for its own sake, like art or music, or is it something we use as a tool for bettering our physical existence?

A: This is a question that turns me on. I think that it bristles with popular misconceptions about the nature of Homo scientificus, and therefore maybe I have something to say. Let me start by reading a few passages from a paper on this species, hitherto unpublished, written in 1942 by a rather perceptive friend … a non-scientist:

The species Homo scientificus constitutes a branch of the family Homo modernibus, a species easy and interesting to observe but difficult and perplexing to understand. There are a number of varieties and sub-varieties ranging from the lowliest to the highest. We begin with the humble professorius scientificus, whose inclusion in this species is questionable, pass on up through the geologia and the large groups of the chemisto and biologia, with their many hybrids, to the higher orders of the physicistus and mathematicus, and finally to the lordly theoretica physicistus, rarely seen in captivity.

Habitat: These animals range the North American and European continents, and are seldom seen in South America, Africa, or Asia, although a few isolated cases are known in Australia and Russia. [This was written in 1942.] Individuals of the lower orders thrive in most sections of Europe and America but those of the higher orders are to be found only in a few localities, where they live together in colonies. These colonies provide a valuable research field; here one can wander about noting the size, structure, and actions of these peculiar creatures. There is little to fear, for although they may approach one with great curiosity, and attempt to lead one to their lairs, they are not known to be dangerous.

Description: Recent studies of this as yet little-understood species have ascertained a number of characteristics by which they may be distinguished. The brain is large and often somewhat soft in spots. In some cases the head is covered with masses of thick, unkempt wool, in others it is utterly devoid of hair and shines like a doorknob. Sometimes there is hair on the face but it never covers the nose. The body covering, when there is any, is without particular color or form, the general appearance is definitely shaggy. The male scientificus does not, like the cock or the lion or the bull, delight in flaunting elegantly before the female to catch her eye. Evidently the female is attracted by some other method. We are at a loss as to what this could be, although we have often observed the male scurrying after the female with a wuffley expression on his face. Sometimes he brings her a little gift, such as a bundle of bristles or a bright piece of cellophane, which she accepts tenderly and the trick is done. Occasionally an old king appears from the colony, surrounded by workers. He has soft grey hair on his face, and a pot belly. Scientificus is a voracious eater; this is not strange for he consumes a great deal of energy each day in playing. In fact, he is one of the best playing animals known.

The scientificus undoubtedly have a language of their own. They take pleasure in jabbering to each other and often one will stand several hours before a group, holding forth in a monologue; the listeners are for the most part quiet, and some may even be asleep. However meaningful this language may be to them, it is utterly incomprehensible to us. Perhaps the thing which endears this mysterious creature to us most is his disposition; although there exists a kind of slavery (the laboratorio assistantia being captured to do the dirty work), the scientificus does not prey on other animals of his species and he is neither cruel, sly, nor domineering. [The author had only studied the species for one year at that time.] He is an easygoing animal; he will not, for example, work hard to construct a good dwelling, but is content to live in a damp basement so long as he can spend most of the day sitting in the sun and rummaging among his strange possessions.

The paper then goes on into more detail about the biologia. We will let this suffice by way of a general description of Homo scientificus. The description is nice as far as it goes, but too superficial.

Now I want to switch gears and read another piece which I think goes to the heart of the matter. This is taken from the novel Molloy by Samuel Beckett. Beckett not only wrote plays (Happy Days, Krapp’s Last Tape, Endgame, and Waiting for Godot) but also a number of novels that are less well known. This one, Molloy, published in the ’50s, concerns an exceedingly lonely and decrepit old man, and the whole book is a kind of a soliloquy that he writes down about his life. I have picked one episode that I hope will illustrate the point I want to make (without having to rub it in too much). There will be slides to go with this reading so as to make the argument perfectly clear. At the time of this episode Molloy is a beachcomber at some lonely place.

I took advantage of being at the seaside to lay in a store of sucking-stones. They were pebbles but I call them stones. Yes, on this occasion I laid in a considerable store. I distributed them equally between my four pockets, and sucked them turn and turn about. This raised a problem which I first solved in the following way. I had say sixteen stones, four in each of my four pockets these being the two pockets of my trousers and the two pockets of my greatcoat.

Taking a stone from the right pocket of my greatcoat, and putting it in my mouth, I replaced it in the right pocket of my greatcoat by a stone from the right pocket of my trousers, which I replaced by a stone from the left pocket of my trousers, which I replaced by a stone from the left pocket of my greatcoat, which I replaced by the stone which was in my mouth, as soon as I had finished sucking it. Thus there were still four stones in each of my four pockets, but not quite the same stones. And when the desire to suck took hold of me again, I drew again on the right pocket of my greatcoat, certain of not taking the same stone as the last time.  And while I sucked it I rearranged the other stones in the way I have just described. And so on.

But this solution did not satisfy me fully. For it did not escape me that, by an extraordinary hazard, the four stones circulating thus might always be the same four. In which case, far from sucking the sixteen stones turn and turn about, I was really only sucking four, always the same, turn and turn about. But I shuffled them well in my pockets, before I began to suck, and again, while I sucked, before transferring them, in the hope of obtaining a more general circulation of the stones from pocket to pocket. But this was only a makeshift that could not long content a man like me. So I began to look for something else.

And the first thing I hit upon was that I might do better to transfer the stones four by four, instead of one by one, that is to say, during the sucking, to take the three stones remaining in the right pocket of my greatcoat and replace them by the four in the right pocket of my trousers , and these by the four in the left pocket of my trousers, and these by the four in the left pocket of my greatcoat, and finally these by the three from the right pocket of my greatcoat, plus the one, as soon as I had finished sucking it, which was in my mouth.  Yes, it seemed to me at first that by so doing I would arrive at a better result.

But on further reflection I had to change my mind and confess that the circulation of the stones four by four came to exactly the same thing as their circulation one by one. For if I was certain of finding each time, in the right pocket of my greatcoat, four stones totally different from their immediate predecessors, the possibility nevertheless remained of my always chancing on the same stone, within each group of four, and consequently of my sucking, not the sixteen turn and turn about as I wished, but in fact four only, always the same, turn and turn about. So I had to seek elsewhere than in the mode of circulation. For no matter how I caused the stones to circulate, I always ran the same risk.

It was obvious that by increasing the number of my pockets I was bound to increase my chances of enjoying my stones in the way I planned, that is to say one after the other until their number was exhausted. Had I had eight pockets, for example, instead of the four I did have, then even the most diabolical hazard could not have prevented me from sucking at least eight of my sixteen stones, turn and turn about. The truth is I should have needed sixteen pockets in order to be quite easy in my mind. And for a long time I could see no other conclusion than this, that short of having sixteen pockets, each with its stone, I could never reach the goal I had set myself, short of an extraordinary hazard. And if at a pinch I could double the number of my pockets, were it only by dividing each pocket in two, with the help of a few safety-pins let us say, to quadruple them seemed to be more than I could manage. And I did not feel inclined to take all that trouble for a half-measure.

For I was beginning to lose all sense of measure, after all this wrestling and wrangling, and to say, All or nothing. And if I was tempted for an instant to establish a more equitable proportion between my stones and my pockets , by reducing the former to the number of the latter, it was only for an instant. For it would have been an admission of defeat. And sitting on the shore, before the sea, the sixteen stones spread out before my eyes, I gazed at them in anger and perplexity.  For just as I had difficulty in sitting in a chair, or in an arm-chair, because of my stiff leg, you understand, so I had none in sitting on the ground, because of my stiff leg and my stiffening leg, for it was about this time that my good leg, good in the sense that it was not stiff, began to stiffen.  I needed a prop under the ham you understand, and even under the whole length of the leg, the prop of the earth.  And while I gazed thus at my stones, revolving interminable martingales all equally defective, and crushing handfuls of sand, so that the sand ran through my fingers and fell back on the strand, yes, while thus I lulled my mind and part of my body, one day suddenly it dawned on me, dimly, that I might perhaps achieve my purpose without increasing the number of my pockets, or reducing the number of my stones, but simply by sacrificing the principle of trim.

The meaning of this illumination, which suddenly began to sing within me, like a verse of Isaiah, or of Jeremiah, I did not penetrate at once, and notably the word trim, which I had never met with, in this sense, long remained obscure. Finally I seemed to grasp that this word trim could not here mean anything else, anything better, than the distribution of the sixteen stones in four groups of four, one group in each pocket, and that it was my refusal to consider any distribution other than this that had vitiated my calculations until then and rendered the problem literally insoluble. And it was on the basis of this interpretation, whether right or wrong, that I finally reached a solution, inelegant assuredly, but sound, sound.

Now I am willing to believe, indeed I firmly believe, that other solutions to this problem might have been found and indeed may still be found, no less sound, but much more elegant than the one I shall now describe, if I can.  And I believe too that had I been a little more insistent, a little more resistant, I could have found them myself.  But I was tired, but I was tired, and I contented myself ingloriously with the first solution that was a solution, to this problem.  But not to go over the heartbreaking stages through which I passed before I came to it here it is, in all its hideousness.

All (all!) that was necessary was to put, for example, six stones in the right pocket of my greatcoat, or supply pocket, five in the right pocket of my trousers, and five in the left pocket of my trousers, that makes the lot, twice five ten plus six sixteen, and none, for none remained, in the left pocket of my greatcoat, which for the time being remained empty, empty of stones that is, for its usual contents remained, as well as occasional objects.  For where do you think I hid my vegetable knife, my silver, my horn and the other things that I have not yet named, perhaps shall never name.  Good. Now I can begin to suck. Watch me closely. I take a stone from the right pocket of my greatcoat , suck it, stop sucking it, put it in the left pocket of my greatcoat, the one empty (of stones). I take a second stone from the right pocket of my greatcoat, suck it put it in the left pocket of my greatcoat. And so on until the right pocket of my greatcoat is empty (apart from its usual and casual contents) and the six stones I have just sucked, one after the other, are all in the left pocket of my greatcoat.

Pausing then, and concentrating, so as not to make a balls of it, I transfer to the right pocket of my greatcoat, in which there are no stones left, the five stones in the right pocket of my trousers, which I replace by the five stones in the left pocket of my trousers, which I replace by the six stones in the left pocket of my greatcoat. At this stage then the left pocket of my greatcoat is again empty of stones, while the right pocket of my greatcoat is again supplied, and in the right way, that is to say with other stones than those I have just sucked. These other stones I then begin to suck, one after the other, and to transfer as I go along to the left pocket of my greatcoat, being absolutely certain, as far as one can be in an affair of this kind, that I am not sucking the same stones as a moment before, but others.

And when the right pocket of my greatcoat is again empty (of stones), and the five I have just sucked are all without exception in the left pocket of my greatcoat, then I proceed to the same redistribution as a moment before, or a similar redistribution, that is to say I transfer to the right pocket of my greatcoat, now again available, the five stones in the right pocket of my trousers, which I replace by the six stones in the left pocket of my trousers, which I replace by the five stones in the left pocket of my greatcoat. And there I am ready to begin again. Do I have to go on? No, for it is clear that after the next series, of sucks and transfers, I shall be back where I started, that is with the first six stones back in the supply pocket, the next five in the right pocket of my stinking old trousers and finally the last five in left pocket of same, and my sixteen stones will have been sucked once at least in impeccable succession, not one sucked twice, not one left unsucked.

It is true that next time I could scarcely hope to suck my stones in the same order as the first time and that the first, seventh and twelfth for example of the first cycle might very well be the sixth, eleventh, and sixteenth respectively of the second, if the worst came to the worst.  But this was a drawback I could not avoid.  And if in the cycles taken together utter confusion was bound to reign, at least within each cycle taken separately I could be easy in my mind, at least as easy as one can be, in a proceeding of this kind.  For in order for each cycle to be identical, as to the succession of stones in my mouth, and God knows I had set my heart on it, the only means were numbered stones or sixteen pockets.  And rather than make twelve more pockets or number my stones, I preferred to make the best of the comparative peace of mind I enjoyed within each cycle taken separately.

For it was not enough to number the stones, but I would have had to remember, every time I put a stone in my mouth, the number I needed and look for it in my pocket.  Which would have put me off stone for ever, in a very short time.  For I would never have been sure of not making a mistake, unless of course I had kept a kind of register, in which to tick off the stones one by one, as I sucked them.  And of this I believed myself incapable.  No, the only perfect solution would have been the sixteen pockets, symmetrically disposed, each one with its stone.  Then I would have needed neither to number nor to think, but merely, as I sucked a given stone, to move on the fifteen others, a delicate business admittedly, but within my power, and to call always on the same pocket when I felt like a suck.  This would have freed me from all anxiety, not only within each cycle taken separately, but also for the sum of all cycles, though they went on forever.

But however imperfect my own solution was, I was pleased at having found it all alone, yes, quite pleased.  And if it was perhaps less sound than I had thought in the first flush of discovery, its inelegance never diminished.  And it was above all inelegant in this, to my mind, that the uneven distribution was painful to me, bodily.  It is true that a kind of equilibrium was reached, at a given moment, in the early stages of each cycle, namely after the third suck and before the fourth, but it did not last long, and the rest of the time I felt the weight of the stones dragging me now to one side, now to the other.  There was something more than a principle I abandoned, when I abandoned the equal distribution, it was a bodily need. But to suck the stones in the way I have described, not haphazard, but with method, was also I think a bodily need. Here then were two incompatible bodily needs, at loggerheads. Such things happen.

But deep down I didn’t give a tinker’s curse about being off my balance, dragged to the right hand and the left, backwards and forwards. And deep down it was all the same to me whether I sucked a different stone each time or always the same stone, until the end of time. For they all tasted exactly the same. And if I had collected sixteen, it was not in order to ballast myself in such and such a way, or to suck them turn about, but simply to have a little store, so as never to be without. But deep down I didn’t give a fiddler’s curse about being without, when they were all gone they would be all gone, I wouldn’t be any the worse off, or hardly any.  And the solution to which I rallied in the end was to throw away all the stones but one, which I kept now in one pocket, now in another, and which of course I soon lost, or threw away, or gave away, or swallowed.
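Molloy’s procedure is, at bottom, a rotation algorithm, and his claim that one full cycle sucks each of the sixteen stones exactly once, with the pockets ending up as they began, can be checked mechanically. Here is a minimal Python sketch of that check; the pocket names, the stone numbering, and the deque-based model are my own assumptions, not anything in Beckett’s text or in the lecture.

from collections import Counter, deque

# Four pockets: greatcoat right (the supply pocket), trousers right, trousers left, greatcoat left.
pockets = {
    "greatcoat_right": deque(range(0, 6)),    # the first six stones
    "trousers_right":  deque(range(6, 11)),   # the next five
    "trousers_left":   deque(range(11, 16)),  # the last five
    "greatcoat_left":  deque(),               # empty of stones to begin with
}
sucked = Counter()

def suck_supply_pocket():
    # Suck every stone in the greatcoat's right pocket, dropping each into the
    # greatcoat's left pocket as it is finished with.
    while pockets["greatcoat_right"]:
        stone = pockets["greatcoat_right"].popleft()
        sucked[stone] += 1
        pockets["greatcoat_left"].append(stone)

def redistribute():
    # Molloy's transfer: trousers-right refills the supply pocket, trousers-left
    # refills trousers-right, and the just-sucked stones refill trousers-left.
    pockets["greatcoat_right"] = pockets["trousers_right"]
    pockets["trousers_right"] = pockets["trousers_left"]
    pockets["trousers_left"] = pockets["greatcoat_left"]
    pockets["greatcoat_left"] = deque()

for _ in range(3):  # one cycle: suck 6, then 5, then 5, redistributing after each run
    suck_supply_pocket()
    redistribute()

assert all(sucked[s] == 1 for s in range(16)), "not one sucked twice, not one left unsucked"
assert [len(pockets[p]) for p in
        ("greatcoat_right", "trousers_right", "trousers_left", "greatcoat_left")] == [6, 5, 5, 0]
print("sixteen stones, each sucked exactly once; pockets back where they started")

Three suck-and-redistribute rounds (6 + 5 + 5 stones) make up one cycle, after which the pockets again hold 6, 5, 5 and 0 stones, exactly as the passage claims.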

This is the parable of the Homo scientificus that I wanted to present. I want to stress two particular things in it. One is the uncanny description of scientific intuition. This is exactly the way Einstein must have struggled to explain the failure of all experiments attempting to demonstrate a motion of the earth relative to the “light-medium,” until he very dimly realized that he had to abandon some “principle of trim,” the principle of absolute time, and this must have come in some such way as here described. Other people have described intuition in cases where they were able to reconstruct a little of it. Jacques Hadamard, the French mathematician, has written a little book, An Essay on the Psychology of Invention in the Mathematical Field, which is a collection of data on this phenomenon and describes how intuition wells up from completely unfathomable depths, first appears in a peculiar guise, and then suddenly breaks out with lightning clarity. Second, let us look at Molloy’s motivation. He certainly is not motivated by the goal of bettering our physical existence or by a desire for fame or acclaim. Does he do his work for its own sake, like art and music? He describes in detail how his little game “for its own sake” becomes an obsession beyond all measure of reason. This is not the way you and I do art or music, but it does resemble closely the way creative artists and composers do it. You don’t have to look at Beethoven to become convinced of that. Look at any child of five who is obsessed with a creative problem and breaks out in anger and frustration at his failures.

This obsessive fixation picks on anything, quite oblivious of its meaningful content of “revealing the truth about nature” or “bettering our physical existence”. It is this quirk of our make-up, this sublimation of other psychic forces, that was delivered by evolution to cave man.

More was here delivered by evolution than had been ordered. It carried us from cave man to space man, and may well carry us to our destruction. And why not? The little earthquake we had the other day should have served all of us as a timely reminder, if any reminding is needed, that we are not here to stay, not as individuals, nor as families, nor as nations, nor as the human race, nor as a planet with life on it. There is uncertainty merely as to the time scale.

The point I wanted to make is this. Man is not only Homo faber, the tool maker. The grand edifice of Science, built through the centuries by the efforts of many people in many nations, gives you the illusion of an immense cathedral, erected in an orderly fashion according to some master plan. However, there never was a master plan. The edifice is a result of channeling our intellectual obsessive forces into the joint program. In spite of this channeling, the progress of Science at all times has been and still is immensely disorderly, for the very reason that there can be no master plan.

So, what could we do if we decided that innovative Science is too dangerous? I don’t know, but one thing is certain: it would take a lot of manipulation of man — political, economic, nutritional, genetic — if you tried to control Homo scientificus.

Discussion

Q: How can man with these characteristics resist considering implications? This doesn’t mean solving them — just considering them.

A: I understood the question to mean: if I make a discovery, should I consider the implications and maybe not publish it even if it’s a basic discovery. I think that it is impossible to foretell the implications. I couldn’t agree more that you should consider the implications, say, of the genetic manipulation of mankind. You can’t help it. It’s of the utmost importance. Same with “population zero”. I just don’t consider this as the same thing as doing science, this business of considering the implications. It’s something entirely different, as I explained in answer to Q 2.

Q: It seems to me that many human beings are subject to neurotic obsession. But it’s not clear how we choose problems. It seems to me conceivable that one might choose a problem because somebody tells you that it’s an important problem for science and you can get upset about why the hell can’t I solve it even if you don’t care about the problem.

A: I agree. Science gives the impression of being a magnificent cathedral, an enormous structure — a well-constructed thing, a cathedral built by the continuous effort of many generations through many centuries. Of course it isn’t a cathedral because it wasn’t planned. Nobody planned the scientific cathedral. To the student it looks as though it were planned. The student gets three volumes of Feynman lectures, 1,300 pages of a splendid textbook of “Organic Chemistry”, and other textbooks, and says, “Aha, 150 years ago today they got this far. In the meantime all this was constructed, and now I continue here.” My point is that science is not that at all. Science is primarily playing willfully, and getting obsessed with it, and it is not being told: “Here, add your brick on page 1065 and do it properly or we won’t give you a PhD.” Such a student, if you ask him what he is doing, may possibly answer, “I am building a cathedral.” More likely, he will say, “I am laying bricks,” or even “I am making $4.50 an hour.”

Q: You didn’t say how much society should support science.

A: I didn’t answer it. No. I’m not interested.

Q: Should we not think about the support of science?

A: Oh, I don’t want to think about it. No, it’s a very complicated thing. Obviously the high-energy physicists want ever bigger machines that cost a hundred million, billion, etc., and they say the military spend more and the military say if we stop making war the economy will break down. These are all questions that are not very interesting. To me, anyway.

Q: Can you tell us how your illustrations came into being?

A: We had a party last week and at this party Dick Russell performed these acts while I was reading the story. He didn’t know the story, he just learned of it as it developed. Everybody had a drawing block in front of them and sketched as Dick posed. The old trousers were Dick’s, the coat Vivian Hill’s. The prize-winning artists were Felicia Hargreaves from our Art Center, and Vivian Hill. The first paper from which I quoted, on Homo scientificus, some of you may be interested to know, was written by a graduate at Scripps College. She had married a scientist the year before she wrote the paper.

Q: Would you be willing to relax a little bit on your attitude with respect to question 1, namely the question whether science is beneficial? Would you say this depends on how you define beneficial?

A: Sure. If we measure it in terms of energy production or infant mortality, then it’s beneficial.

Q: Well, I think it’s very difficult to say what is beneficial.

A: Yes. That’s why I put a “Doubtful” there. I didn’t answer “No”.

Q: Most of the problem with science is that we don’t even know what’s beneficial to society.

A: However, we can hardly evade the fact that the present state of the world leaves much to be desired, and that this is largely a result of the efforts of people like Molloy.

Q: Then one might talk about whether the earlier stage of the world was an awful lot better.

A: Sure. Of course you can. You can. Please do. I don’t feel like arguing.

Q: Do you think it is common that scientists proceed in a way that is neurotic? Don’t you think that occasionally they do something just because it’s interesting?

A: I didn’t mean to use the term neurotic in a derogatory way. Our culture is a product of our neuroses — I mean a product of the diversion of psychic forces from their original function into other directions.

Q: How could you do your research with such a pessimistic attitude? Did you have the same attitude when you started out?

A: I can’t answer that — how I was 40 years ago. If you call it pessimistic, I’m a very cheerful pessimist. I think there’s something to be said for the pessimist. It merely means not glossing over some basic facts.

Q: Your picture of a scientist is very personal, so your answer to the first question, “Is science beneficial?”, would be “Yes, it’s beneficial to the doer.” Molloy’s pebbles were the same to him as special relativity was to Einstein and the hydrogen bomb to Edward Teller. The difference is that Molloy wasn’t going to hurt anybody. Now, if you say that science is beneficial to the scientist because he gets satisfaction from it, and the scientist isn’t thinking about the implications, does this imply that somebody else should think about the implications and say, “Molloy, you’re OK; Einstein, you’re doubtful; Teller, you’re out”? Who should make these decisions?

A: My point was that that’s quite impossible. Molloy and Einstein are identical. Einstein’s worrying about the Michelson-Morley experiment was just as esoteric as shuffling around the sucking stones. I mean that nothing could be more impersonal, impractical, more remote from any social implications than what Einstein did in 1905.

To him, anyhow. Later on, when the atomic arms race escalated one more round and Einstein considered that he had been involved in the starting of the atom bomb, he regretted that he had ever entered science, etc., but I don’t think he really had thought about how deeply science is part of human nature. I think discoveries are all potentially equally harmful — like the circulating of the sucking stones. Maybe Molloy is discovering a principle of permutation or number theory — God only knows the implications of this. Didn’t the pictures look like some of the metal-organic covalent bond shifting there? Didn’t Harry Gray get an inspiration from it for something that’s going to be utilized in some horrible contraption in a few years?

Q: Can you draw a distinction in terms of creativity between Einstein thinking up ideas and Edward Teller making bombs — one being playful and the other being purposeful?

A: I don’t have to make this distinction because, if I want to control the bad effects of science, I have to stop Einstein. Why should I try to make a distinction between him and Teller? Teller is an excellent scientist. Although I don’t know what he specifically did with the H-bomb, he certainly contributed a great deal to quantum mechanics and chemical physics. So then the question is, should the scientist stop publishing his science so that the bad appliers won’t misuse it? Have a private club. I had a slide of that which got lost. I found it at MIT. A poster with a quotation from Einstein saying how sorry he was that he had ever, etc., and that if he could start life again he would just become a lighthouse keeper or something like that. Underneath on this poster there was an invitation from somebody saying: “Will you join us in a commune of scientists who will talk among ourselves and not publish anything — just do it by ourselves?” And somebody had scrawled on the side: “Commie”. The idea of doing science in a commune and not publishing it seems absurd to me. Why should we get together to follow these pursuits which are not really pleasurable? Molloy had a certain relief and was satisfied that he had found a solution, but the main thing for him was that he was easy in his mind. As easy as one can be in a matter of this kind; suck them turn and turn about. I mean, he had to relieve the uneasiness of his mind. That’s where the neurosis comes in — the obsession.

Q: I’ve been uneasy without being able to articulate it very well, because it seems to me that you say something about the personal obsessions of scientists and the irrelevance of the goal or consideration of a moral principle in their work, and I think it’s probably only a half-truth. Einstein was a deeply moral man, very concerned. I have a feeling that scientists in their work are buoyed and reinforced by the belief that the answer to question 1 is “Yes”.

A: Yes, of course you can be buoyed by the feeling you’ve done society good; you can be buoyed by the feeling that you’re acquiring fame and prizes. My point is this: prior to these reinforcements, and more fundamental, even the lonely, decrepit beachcomber cannot avoid being a scientist, in an obsessive way (exactly the way Einstein was), although both the accessory components are missing. As for Einstein as a young patent clerk in Berne, in 1905, I doubt that he then made a connection between his physics and his responsibilities to society. That’s the point I wanted to make. Thank you for making me point it out again. I mean these other components are there, of course, and if you read Jim Watson’s book The Double Helix, you might think that getting a Nobel Prize is everything. However, this would be a grievous misconception.

Q: How many scientists on a desert island would do science for their own benefit?

A: Even Molloy would. But not for their benefit. He doesn’t do it for his benefit. He does it compulsively. I think we all do. No, I take it back. Maybe not. It’s a difficult question to answer because most of us are so dulled in our sensitivities that we may be quite incapable of any such complicated argument or reasoning, or have the amount of relaxation that this man had. Of course, he had to be able to sit there for hours on the beach and dream up interminable martingales. If you put people on a desert island probably quite a few of them would dream up interminable martingales and be satisfied with finding something that works.

Q: I wonder if the one place where this parallel between Molloy and other scientists doesn’t hold is that Molloy doesn’t seem to have any intentions of communicating his results to anyone else, so I would ask you, do you think Einstein would have done his work if he had had no intention of publishing the results? And a personal question: Would you have done science if you had thought no one would be interested in the results?

A: No, certainly not. In this first essay, from which I quoted, by the Scripps girl, it said that they are playing animals. Scientists are playing animals. They not only play alone but they also play together, and if they are not too morose, they actually prefer to play together. And most scientists do prefer to play together. And in the case of Einstein of course, he would never have heard of Michelson and Morley if he had not been in communication. No, a great joy of the business is communication. All I wanted to point out is the obsessive component of the immediate act of doing science. The channeling of this component toward the erection of a large structure, the institutionalization of it, that is a creation by society, and that is something different. That is not a primary characteristic of Homo scientificus.

Link: On Testicles

Soccer fans call it brave goalkeeping, the act of springing into a star shape in front of an attacker who is about to kick the ball as hard as possible toward the goal. As I shuffled from the field, bent forward, eyes watering, waiting for the excruciating whack of pain in my crotch to metamorphose into a gut-wrenching ache, I thought only stupid goalkeeping. But after the fourth customary slap on the back from a teammate chortling, “Hope you never wanted kids, pal,” I thought only stupid, stupid testicles.

Natural selection has sculpted the mammalian forelimb into horses’ front legs, dolphins’ fins, bats’ wings, and my soccer ball-catching hands. Why, on the path from the primordial soup to us curious hairless apes, did evolution house the essential male reproductive organs in an exposed sac? It’s like a bank deciding against a vault and keeping its money in a tent on the sidewalk.

Some of you may be thinking that there is a simple answer: temperature. This arrangement evolved to keep them cool. I thought so, too, and assumed that a quick glimpse at the scientific literature would reveal the biological reasons and I’d move on. But what I found was that the small band of scientists who have dedicated their professional time to pondering the scrotum’s existence are starkly divided over this so-called cooling hypothesis.

Reams of data show that scrotal sperm factories, including our own, work best a few degrees below core body temperature. The problem is, this doesn’t prove cooling was the reason that testicles originally descended. It’s a straight-up chicken-and-egg situation—did testicles leave the kitchen because they couldn’t stand the heat, or do they work best in the cold because they had to leave the body?

Vital organs that work optimally at 98.5 degrees Fahrenheit get bony protection: My brain and liver are shielded by skull and ribs, and my girlfriend’s ovaries are defended by her pelvis. Forgoing skeletal protection is dangerous. Each year, thousands of men go to the hospital with ruptured testes or torsions caused by having this essential organ suspended chandelierlike on a flexible twine of tubes and cords. But having exposed testicles as an adult is not even the most dangerous aspect of our reproductive organs’ arrangement.

The developmental journey to the scrotum is treacherous. At eight weeks of development, a human fetus has two unisex structures that will become either testicles or ovaries. In girls, they don’t stray far from this starting point up by the kidneys. But in boys, the nascent gonads make a seven-week voyage across the abdomen on a pulley system of muscles and ligaments. They then sit for a few weeks before coordinated waves of muscular contractions force them out through the inguinal canal.

The complexity of this journey means that it frequently goes wrong. About 3 percent of male infants are born with undescended testicles, and although often this eventually self-corrects, it persists in 1 percent of 1-year-old boys and typically leads to infertility.

Excavating the inguinal canal also introduces a significant weakness in the abdominal wall, a passage through which internal organs can slip. In the United States, more than 600,000 surgeries are performed annually to repair inguinal hernias—the vast majority of them in men.

This increased risk of hernias and sterilizing mishaps seems hardly in keeping with the idea of evolution as survival of the fittest. Natural selection’s tagline reflects the importance of attributes that help keep creatures alive—not dying being an essential part of evolutionary success. How can a trait such as scrotality (to use the scientific term for possessing a scrotum), with all the obvious handicaps it confers, fit into this framework? Its story is certainly going to be less straightforward than the evolution of a cheetah’s leg muscles. Most investigators have tended to think that the advantages of this curious anatomical arrangement must come in the shape of improved fertility. But this is far from proven.

When considering any evolved characteristic, good first questions are who has it and who had it first. In birds, reptiles, fish, and amphibians, male gonads are internal. The scrotum is a curiosity unique to mammals. A recent testicle’s-eye view of the mammalian family tree revealed that the monumental descent occurred pretty early in mammalian evolution. And what’s more, the scrotum was so important that it evolved twice.

The first mammals lived about 220 million years ago. The most primitive living mammals are the duck-billed platypus and its ilk—creatures with key mammalian features such as warm blood, fur, and lactation (the platypus kind of sweats milk rather than having tidy nipples), although they still lay eggs like the ancestors they share with reptiles. Platypus testicles, and almost certainly those of all early mammals, sit right where they start life, safely tucked by the kidneys.

About 70 million years later, marsupials evolved, and it is on this branch of the family tree that we find the first owner of a scrotum. Nearly all marsupials today have scrotums, and so logically the common ancestor of kangaroos, koalas, and Tasmanian devils had the first. Marsupials evolved their scrotum independently from us placental mammals, which is known thanks to a host of technical reasons, the most convincing of which is that it’s back-to-front. Marsupials’ testicles hang in front of their penises.

Fifty million years after the marsupial split is the major fork in the mammalian tree, scrotally speaking. Take a left and you will encounter elephants, mammoths, aardvarks, manatees, and groups of African shrew- and mole-like creatures. But you will never see a scrotum—all of these placental animals, like platypuses, retain their gonads close to their kidneys.

However, take a right, to the human side of the tree, at this 100 million-year-old juncture and you’ll find descended testicles everywhere. Whatever they’re for, scrotums bounce along between the hind limbs of cats, dogs, horses, bears, camels, sheep, and pigs. And, of course, we and all our primate brethren have them. This means that at the base of this branch is the second mammal to independently concoct scrotality—the one to whom we owe thanks for our dangling parts being, surely correctly, behind the penis.

Between these branches, however, is where it gets interesting, for there are numerous groups, our descended but ascrotal cousins, whose testes drop down away from the kidneys but don’t exit the abdomen. Almost certainly, these animals evolved from ancestors whose testes were external, which means at some point they backtracked on scrotality, evolving anew gonads inside the abdomen. They are a ragtag bunch including hedgehogs, moles, rhinos and tapirs, hippopotamuses, dolphins and whales, some seals and walruses, and scaly anteaters.

For mammals that returned to the water, tucking everything back up inside seems only sensible; a dangling scrotum isn’t hydrodynamic and would be an easy snack for fish attacking from below. I say snack, but the world record-holders, right whales, have testicles that tip the scales at more than 1,000 pounds apiece. The trickier question, which may well be essential for understanding its function, is why did the scrotal sac lose its magic for terrestrial hedgehogs, rhinos, and scaly anteaters?

The scientific search to explain the scrotum’s raison d’être began in England in the 1890s at Cambridge University. Joseph Griffiths, using terriers as his unfortunate subjects, pushed their testicles back into their abdomens and sutured them there. As little as a week later, he found that the testes had degenerated, the tubules where sperm production occurs had constricted, and sperm were virtually absent. He put this down to the higher temperature of the abdomen, and the cooling hypothesis was born.

In the 1920s, a time when Darwin’s ideas were rapidly spreading, Carl Moore at the University of Chicago argued that after mammals had transitioned from cold- to warm-blooded, keeping the body in the mid-to-high 90s must have severely hampered sperm production, and the first males to cool things off with a scrotum became the more successful breeders.

Heat disrupts sperm production so effectively that biology textbooks and medical tracts alike give cooling as the reason for the scrotum. The problem is many biologists who seriously think about animal evolution are unhappy with this. Opponents say that testicles function optimally at cooler temperatures because they evolved this trait after their exile.

If mammals became warm-blooded 220 million or so years ago, it would mean mammals carried their gonads internally for more than 100 million years before the scrotum made its bow. The two events were hardly tightly coupled.

The hypothesis’ biggest problem, though, is all the sacless branches on the family tree. Regardless of their testicular arrangements, all mammals have elevated core temperatures. If numerous mammals lack a scrotum, there is nothing fundamentally incompatible with making sperm at high temperatures. Elephants have a higher core temperature than gorillas and most marsupials. And beyond mammals it gets worse: Birds, the only other warm-blooded animals, have internal testes despite having core temperatures that in some species run to 108 degrees.

Any argument for why cooling would be better for sperm has to say exactly why. The idea that a little less heat might keep sperm DNA from mutating has been proposed, and recently it’s been suggested that keeping sperm cool may allow the warmth of a vagina to act as an extra activating signal. But these ideas still fail to surmount the main objections to the cooling hypothesis.

Michael Bedford of Cornell Medical College is no fan of the cooling hypothesis applied to testicles, but he does wonder whether having a cooled epididymis, the tube where sperm sit after leaving their testicular birthplace, might be important. (Sperm are impotent on exiting the testes and need a few final modifications while in the epididymis.) Bedford has noted that some animals with abdominal testes have extended their epididymis to just below the skin, and that some furry scrotums have a bald patch for heat loss directly above this storage tube. But if having a cool epididymis is the main goal, why throw the testicles out with it?

Link: The New Dark Ages, Part I: From Religion to Ethnic Nationalism and Back Again

European historians have long eschewed the term “Dark Ages.” Few of them still use it, and many of them shiver when they encounter it in popular culture. Scholars rightly point out that the term, popularly understood as connoting a time of death, ignorance, stasis, and low quality of life, is prejudiced and misleading.

And so my apologies to them as I drag this troublesome phrase to center stage yet again, offering a new variation on its meaning.

In this essay I am taking the liberty of modifying the term “Dark Ages” and applying it to a modern as well as a historical context. I use it to refer to a general culture of fundamentalism permeating societies, old and new. By “Dark Age” I mean to describe any large-scale effort to dim human understanding by submerging it under a blanket of fundamentalist dogma. And far from Europe of 1,500 years ago, my main purpose is to talk about far more recent matters around the world.

Life is, of course, a multi-faceted affair. The complex relationships among individuals and between individuals and societies produce a host of economic, cultural, political, and social manifestations. But one of the defining characteristics of the European Dark Ages, as I am now using the term, was the degree to which those multi-faceted aspects of the world were flattened by religious theology and dogma. As the Catholic Church grew in power and spread across Europe from roughly 500-1500, it was able, at least to some degree, to sublimate political, cultural, social, and economic understanding and action under its dogmatic authority. In many realms of life far beyond religion, forms of knowledge and action were subject to theological sanction.

Those who take pride in Western civilization, or even those like myself who don’t necessarily, but who simply acknowledge its various achievements alongside its various shortcomings, recognize a series of factors that led to those achievements. Some of those factors, such as colonialism, are horrific. Some, like the growth of secular thought, are more admirable.

Not that secular thought in and of itself is intrinsically laudable; maybe it is, though I don’t think so. But rather, that the rise of secular thought enabled Europe, over the course of centuries, to throw off its own self-imposed yoke of religious absolutism. And that freeing itself in this way was one of the factors spurring Europe’s many impressive achievements over the last half-millennium.

Most denizens of what was once known as the Christian world, including various colonial offshoots such as the United States and Australia, now accept and even take for granted a multi-faceted conception of life and human interaction. For most of them, including many of the religious ones, it is a given that moving away from a world view flattened by religion, at the very least, facilitated the development of things like science and the modern explosion of wealth. Of course the move from a medieval to a modern mindset also unleashed a variety of problems; but on balance, relatively few Westerners would willingly return to any version of medieval Christian theocracy.1

This confidence in a modern vision of human life and society, which acknowledges that religion, science, politics, economics, culture, and countless other facets each have a role to play and that none should squeeze out the rest, can lead Westerners to look down their noses at those societies which are currently flattened by religion, or struggling to avoid it. Too many Westerners, either with sneers or pity, look askance at other parts of the world where such battles are currently being waged.

Fundamentalist Muslims in a number of countries are literally fighting to assert a theocratic vision over hundreds of millions of people. And though much smaller numerically and not plagued by civil war, Israel likewise suffers from a deep divide between ultra-Orthodox Jews who want religion to dominate most if not all aspects of Israeli life, and those Jews, both religious and not, who embrace a more secular vision for their state in which those divisions will continue to be respected.

When contrasting the West to places mired in such struggle, it becomes oh, so easy for those of us in the United States, Europe, and other parts of the former Christian world to smugly assert that we moved beyond such theocratic perils some time ago and we simply shan’t be returning. It is tempting for some to see history as an irregular but fairly steady linear advancement, progressing forward. This allows people to frame the secular West as winning some kind of race and as superior to, say, the Middle East, which many suppose is “still” struggling to achieve secularism.

But to think that the West has permanently moved past such Dark Ages, never again to return, is just as big a mistake as failing to realize that some of the societies now struggling to avoid a religious Dark Age have in fact been very secular in the recent past.

Such assumptions are not only mistaken but dangerous. The reality is that there are no guarantees about history except that it is dynamic. Things always change. And change does not occur in some neat, linear pattern, which is precisely why you cannot predict historical change.

Link: Crimes Against Humanities

Leon Wieseltier responds to Steven Pinker’s essay on scientism.

The question of the place of science in knowledge, and in society, and in life, is not a scientific question. Science confers no special authority, it confers no authority at all, for the attempt to answer a nonscientific question. It is not for science to say whether science belongs in morality and politics and art. Those are philosophical matters, and science is not philosophy, even if philosophy has since its beginnings been receptive to science. Nor does science confer any license to extend its categories and its methods beyond its own realms, whose contours are of course a matter of debate. The credibility of physicists and biologists and economists on the subject of the meaning of life—what used to be called the ultimate verities, secularly or religiously constructed—cannot be owed to their work in physics and biology and economics, however distinguished it is. The extrapolation of larger ideas about life from the procedures and the conclusions of various sciences is quite common, but it is not in itself justified; and its justification cannot be made on internally scientific grounds, at least if the intellectual situation is not to be rigged. Science does come with a worldview, but there remains the question of whether it can suffice for the entirety of a human worldview. To have a worldview, Musil once remarked, you must have a view of the world. That is, of the whole of the world. But the reach of the scientific standpoint may not be as considerable or as comprehensive as some of its defenders maintain.

None of these strictures about the limitations of science, about its position in nonscientific or extra-scientific contexts, in any way impugns the integrity or the legitimacy or the necessity or the beauty of science. Science is a regular source of awe and betterment. No humanist in his right mind would believe otherwise. No humanist in his right mind does believe otherwise. Science is plainly owed this much support, this much reverence. This much—but no more. In recent years, however, this much has been too little for certain scientists and certain scientizers, or propagandists for science as a sufficient approach to the natural universe and the human universe. In a world increasingly organized around the dazzling new breakthroughs in science and technology, they feel oddly besieged.

They claim that science is under attack, and from two sides. The first is the fundamentalist strain of Christianity, which does indeed deny the truth of certain proven scientific findings and more generally prefers the subjective gains of personal rapture to the objective gains of scientific method. Against this line of attack, even those who are skeptical about the scientizing enterprise must stand with the scientists, though it is important to point out that the errors of religious fundamentalism must not be mistaken for the errors of religion. Too many of the defenders of science, and the noisy “new atheists,” shabbily believe that they can refute religion by pointing to its more outlandish manifestations. Only a small minority of believers in any of the scriptural religions, for example, have ever taken scripture literally. When they read, most believers, like most nonbelievers, interpret. When the Bible declares that the world was created in seven days, it broaches the question of what a day might mean. When the Bible declares that God has an arm and a nose, it broaches the question of what an arm and a nose might mean. Since the universe is 13.8 billion years old, a day cannot mean 24 hours, at least not for the intellectually serious believer; and if God exists, which is for philosophy to determine, this arm and this nose cannot refer to God, because that would be stupid.

Interpretation is what ensues when a literal meaning conflicts with what is known to be true from other sources of knowledge. As the ancient rabbis taught, accept the truth from whoever utters it. Religious people, or many of them, are not idiots. They have always availed themselves of many sources of knowledge. They know about philosophical argument and figurative language. Medieval and modern religious thinking often relied upon the science of its day. Rationalist currents flourished alongside anti-rationalist currents, and sometimes became the theological norm. What was Jewish and Christian and Muslim theology without Aristotle? When a dissonance was experienced, the dissonance was honestly explored. So science must be defended against nonsense, but not every disagreement with science, or with the scientific worldview, is nonsense. The alternative to obscurantism is not that science be all there is.

The second line of attack to which the scientizers claim to have fallen victim comes from the humanities. This is a little startling, since it is the humanities that are declining in America, not least as a result of the exaggerated glamour of science. But some scientists and some scientizers feel prickly and self-pitying about the humanistic insistence that there is more to the world than science can disclose. It is not enough for them that the humanities recognize and respect the sciences; they need the humanities to submit to the sciences, and be subsumed by them. The idea of the autonomy of the humanities, the notion that thought, action, experience, and art exceed the confines of scientific understanding, fills them with a profound anxiety. It throws their totalizing mentality into crisis. And so they respond with a strange mixture of defensiveness and aggression. As people used to say about the Soviet Union, they expand because they feel encircled.

A few weeks ago this magazine published a small masterpiece of scientizing apologetics by Steven Pinker, called “Science Is Not Your Enemy.” Pinker utters all kinds of sentimental declarations about the humanities, which “are indispensable to a civilized democracy.” Nobody wants to set himself against sensibility, which is anyway a feature of scientific work, too. Pinker ranges over a wide variety of thinkers and disciplines, scientific and humanistic, and he gives the impression of being a tolerant and cultivated man, which no doubt he is. But the diversity of his analysis stays at the surface. His interest in many things is finally an interest in one thing. He is a foxy hedgehog. His essay, a defense of “scientism,” is a long exercise in assimilating humanistic inquiries into scientific ones. By the time Pinker is finished, the humanities are the handmaiden of the sciences, and dependent upon the sciences for their advance and even their survival.

Pinker tiresomely rehearses the familiar triumphalism of science over religion: “the findings of science entail that the belief systems of all the world’s traditional religions and cultures … are factually mistaken.” So they are, there on the page; but most of the belief systems of all the world’s traditional religions and cultures have evolved in their factual understandings by means of intellectually responsible exegesis that takes the progress of science into account; and most of the belief systems of all the world’s traditional religions and cultures are not primarily traditions of fact but traditions of value; and the relationship of fact to value in those traditions is complicated enough to enable the values often to survive the facts, as they do also in Aeschylus and Plato and Ovid and Dante and Montaigne and Shakespeare. Is the beauty of ancient art nullified by the falsity of the cosmological ideas that inspired it? I would sooner bless the falsity for the beauty. Factual obsolescence is not philosophical or moral or cultural or spiritual obsolescence. Like many sophisticated people, Pinker is quite content with a collapse of sophistication in the discussion of religion.

Yet the purpose of Pinker’s essay is not chiefly to denounce religion. It is to praise scientism. Rejecting the various definitions of scientism—“it is not an imperialistic drive to occupy the humanities,” it is not “reductionism,” it is not “naïve”—Pinker proposes his own characterization of scientism, which he defends as an attempt “to export to the rest of intellectual life” the two ideals that in his view are the hallmarks of science. The first of those ideals is that “the world is intelligible.” The second of those ideals is that “the acquisition of knowledge is hard.” Intelligibility and difficulty, the exclusive teachings of science? This is either ignorant or tendentious. Plato believed in the intelligibility of the world, and so did Dante, and so did Maimonides and Aquinas and Al-Farabi, and so did Poussin and Bach and Goethe and Austen and Tolstoy and Proust. They all share Pinker’s denial of the opacity of the world, of its impermeability to the mind. They all join in his desire to “explain a complex happening in terms of deeper principles.” They all concur with him that “in making sense of our world, there should be few occasions in which we are forced to concede ‘It just is’ or ‘It’s magic’ or ‘Because I said so.’ ” But of course Pinker is not referring to their ideals of intelligibility. The ideal that he has in mind is a very particular one. It is the ideal of scientific intelligibility, which he disguises, by means of an inoffensive general formulation, as the whole of intelligibility itself.

If Pinker believes that scientific clarity is the only clarity there is, he should make the argument for such a belief. He should also acknowledge its narrowness (though within the realm of science it is very wide), and its straitening effect upon the investigation of human affairs. Instead he simply conflates scientific knowledge with knowledge as such. In his view, anybody who has studied any phenomena that are studied by science has been a scientist. It does not matter that they approached the phenomena with different methods and different vocabularies. If they were interested in the mind, then they were early versions of brain scientists. If they investigated human nature, then they were social psychologists or behavioral economists avant la lettre. Pinker’s essay opens with the absurd, but immensely revealing, contention that Spinoza, Locke, Hume, Rousseau, Kant, and Smith were scientists. It is true that once upon a time a self-respecting intellectual had to be scientifically literate, or even attempt a modest contribution to the study of the natural world. It is also true that Kant, to choose but one of Pinker’s heroes of science, made some astronomical discoveries in his early work; but Kant’s significant contributions to our understanding of mind and morality were plainly philosophical, and philosophy is not, and was certainly not for Kant, a science. Perhaps one can be a scientist without being aware that one is a scientist. What else could these thinkers have been, for Pinker? If they contributed to knowledge, then they must have been scientists, because what other type of knowledge is there? For all its geniality, Pinker’s translation of nonscientific thinking into science is no less strident a constriction than, say, Carnap’s colossally parochial dictum that “there is no question whose answer is in principle unattainable by science.” His ravenous intellectual appetite notwithstanding, Pinker is finally in the same reductionist racket. (The R-word!) He sees many locks but only one key.

The translation of nonscientific discourse into scientific discourse is the central objective of scientism. It is also the source of its intellectual perfunctoriness. Imagine a scientific explanation of a painting—a breakdown of Chardin’s cherries into the pigments that comprise them, and a chemical analysis of how their admixtures produce the subtle and plangent tonalities for which they are celebrated. Such an analysis will explain everything except what most needs explaining: the quality of beauty that is the reason for our contemplation of the painting. Nor can the new “vision science” that Pinker champions give a satisfactory account of aesthetic charisma. The inadequacy of a scientistic explanation does not mean that beauty is therefore a “mystery” or anything similarly occult. It means only that other explanations must be sought, in formal and iconographical and emotional and philosophical terms.