Sunshine Recorder

Link: The Reality Show

Schizophrenics used to see demons and spirits. Now they talk about actors and hidden cameras – and make a lot of sense

Clinical psychiatry papers rarely make much of a splash in the wider media, but it seems appropriate that a paper entitled ‘The Truman Show Delusion: Psychosis in the Global Village’, published in the May 2012 issue of Cognitive Neuropsychiatry, should have caused a global sensation. Its authors, the brothers Joel and Ian Gold, presented a striking series of cases in which individuals had become convinced that they were secretly being filmed for a reality TV show.

In one case, the subject travelled to New York, demanding to see the ‘director’ of the film of his life, and wishing to check whether the World Trade Center had been destroyed in reality or merely in the movie that was being assembled for his benefit. In another, a journalist who had been hospitalised during a manic episode became convinced that the medical scenario was fake and that he would be awarded a prize for covering the story once the truth was revealed. Another subject was actually working on a reality TV series but came to believe that his fellow crew members were secretly filming him, and was constantly expecting the This-Is-Your-Life moment when the cameras would flip and reveal that he was the true star of the show.

Few commentators were able to resist the idea that these cases — all diagnosed with schizophrenia or bipolar disorder, and treated with antipsychotic medication — were in some sense the tip of the iceberg, exposing a pathology in our culture as a whole. They were taken as extreme examples of a wider modern malaise: an obsession with celebrity turning us all into narcissistic stars of our own lives, or a media-saturated culture warping our sense of reality and blurring the line between fact and fiction. They seemed to capture the zeitgeist perfectly: cautionary tales for an age in which our experience of reality is manicured and customised in subtle and insidious ways, and everything from our junk mail to our online searches discreetly encourages us in the assumption that we are the centre of the universe.

But part of the reason that the Truman Show delusion seems so uncannily in tune with the times is that Hollywood blockbusters now regularly present narratives that, until recently, were confined to psychiatrists’ case notes and the clinical literature on paranoid psychosis. Popular culture hums with stories about technology that secretly observes and controls our thoughts, or in which reality is simulated with virtual constructs or implanted memories, and where the truth can be glimpsed only in distorted dream sequences or chance moments when the mask slips. A couple of decades ago, such beliefs would mark out fictional characters as crazy, more often than not homicidal maniacs. Today, they are more likely to identify a protagonist who, like Jim Carrey’s Truman Burbank, genuinely has stumbled onto a carefully orchestrated secret of which those around him are blandly unaware. These stories obviously resonate with our technology-saturated modernity. What’s less clear is why they so readily adopt a perspective that was, until recently, a hallmark of radical estrangement from reality. Does this suggest that media technologies are making us all paranoid? Or that paranoid delusions suddenly make more sense than they used to?

The first person to examine the curiously symbiotic relationship between new technologies and the symptoms of psychosis was Victor Tausk, an early disciple of Sigmund Freud. In 1919, he published a paper on a phenomenon he called ‘the influencing machine’. Tausk had noticed that it was common for patients with the recently coined diagnosis of schizophrenia to be convinced that their minds and bodies were being controlled by advanced technologies invisible to everyone but them. These ‘influencing machines’ were often elaborately conceived and predicated on the new devices that were transforming modern life. Patients reported that they were receiving messages transmitted by hidden batteries, coils and electrical apparatus; voices in their heads were relayed by advanced forms of telephone or phonograph, and visual hallucinations by the covert operation of ‘a magic lantern or cinematograph’. Tausk’s most detailed case study was of a patient named ‘Natalija A’, who believed that her thoughts were being controlled and her body manipulated by an electrical apparatus secretly operated by doctors in Berlin. The device was shaped like her own body, its stomach a velvet-lined lid that could be opened to reveal batteries corresponding to her internal organs.

Although these beliefs were wildly delusional, Tausk detected a method in their madness: a reflection of the dreams and nightmares of a rapidly evolving world. Electric dynamos were flooding Europe’s cities with power and light, their branching networks echoing the filigree structures seen in laboratory slides of the human nervous system. New discoveries such as X-rays and radio were exposing hitherto invisible worlds and mysterious powers that were daily discussed in popular science journals, extrapolated in pulp fiction magazines and claimed by spiritualists as evidence for the ‘other side’. But all this novelty was not, in Tausk’s view, creating new forms of mental illness. Rather, modern developments were providing his patients with a new language to describe their condition.

At the core of schizophrenia, he argued, was a ‘loss of ego-boundaries’ that made it impossible for subjects to impose their will on reality, or to form a coherent idea of the self. Without a will of their own, it seemed to them that the thoughts and words of others were being forced into their heads and issued from their mouths, and their bodies were manipulated like puppets, subjected to tortures or arranged in mysterious postures. These experiences made no rational sense, but those who suffered them were nevertheless subject to what Tausk called ‘the need for causality that is inherent in man’. They felt themselves at the mercy of malign external forces, and their unconscious minds fashioned an explanation from the material to hand, often with striking ingenuity. Unable to impose meaning on the world, they became empty vessels for the cultural artefacts and assumptions that swirled around them. By the early 20th century, many found themselves gripped by the conviction that some hidden operator was tormenting them with advanced technology.

Tausk’s theory was radical in its implication that the utterances of psychosis were not random gibberish but a bricolage, often artfully constructed, of collective beliefs and preoccupations. Throughout history up to this point, the explanatory frame for such experiences had been essentially religious: they were seen as possession by evil spirits, divine visitations, witchcraft, or snares of the devil. In the modern age, these beliefs remained common, but alternative explanations were now available. The hallucinations experienced by psychotic patients, Tausk observed, are not typically three-dimensional objects but projections ‘seen on a single plane, on walls or windowpanes’. The new technology of cinema replicated this sensation precisely and was in many respects a rational explanation of it: one that ‘does not reveal any error of judgment beyond the fact of its non-existence’.

In their instinctive grasp of technology’s implicit powers and threats, influencing machines can be convincingly futuristic and even astonishingly prescient. The very first recorded case, from 1810, was that of a Bedlam inmate named James Tilly Matthews, who drew exquisite technical drawings of the machine that was controlling his mind. The ‘Air Loom’, as he called it, used the advanced science of his day — artificial gases and mesmeric rays — to direct invisible currents into his brain, where a magnet had been implanted to receive them. Matthews’s world of electrically charged beams and currents, sheer lunacy to his contemporaries, is now part of our cultural furniture. A quick internet search reveals dozens of online communities devoted to discussing magnetic brain implants, both real and imagined.

Link: Everyday Insanity: Psychosis and the Mundane

Psychosis is usually considered the least mundane of mental states. Occasionally, however, the mundane and the psychotic collide, producing uncanny and jarring contrasts that highlight the unreality of everyday life.

Delusions and hallucinations are, by definition, atypical of the humdrum of everyday mental life. The most spectacular episodes are clinically described as ‘florid’ (literally ‘covered with flowers’), a term meant to capture the attention-grabbing, chaotic and sometimes colourful nature of the thoughts and behaviour of someone experiencing a fundamentally different reality from the rest of the workaday world.

In the experience of psychosis, the spectacular becomes commonplace and the whole notion of the mundane is thrown into turmoil. The mundane occasionally appears in psychotic episodes, but instantly becomes notable because of its rarity. In psychosis, the mundane exists in a state of duality where it is no longer wholly banal or conspicuous and yet maintains the qualities of each.

Passivity phenomena, in which people experience inserted thoughts and imposed actions from an imagined external source, are often particularly spectacular and can be embedded in complex delusional systems. A nineteenth-century account in the memoirs of the German judge Daniel Paul Schreber described how his highly excited mind was attracting ‘rays’ from God, causing him feminising sensations of ‘voluptuousness’; this, he believed, would eventually lead to a state of heavenly bliss that would unite him with the Almighty in holy union.

Psychoanalyst Victor Tausk noted the phenomenon of the ‘influencing machine’ in psychosis[1], where people believe themselves to be under the influence of highly advanced mind control technology, usually operated by secretive conspiratorial gangs attempting to control the object of the supposed persecution.

Into this psychotic world of mind control and divine influence steps the tweed-suited host of This Is Your Life[2]:

A 29 year old housewife said, “I look out of the window and I think the garden looks nice and the grass looks cool, but the thoughts of Eamonn Andrews come into my mind. There are no other thoughts there, only his. He treats my mind like a screen and flashes his thoughts onto it like you flash a picture.”

This brief case study offers a glimpse of media banality intruding into a world usually reserved for supernatural powers and sci-fi technology. The appearance of media personalities in psychosis is not in itself unusual, except that appearances are usually restricted to the bold and the beautiful of the media world. Numerous “rock and roll” delusions have been reported involving famous singers and rock musicians[3], with the flamboyant David Bowie seeming a popular guest in people’s rock and roll psychoses (or at least in the ones reported in the literature).

In contrast, inoffensive TV presenters stick out like a sore thumb. In these alternate realities, the amiable Eamonn Andrews is strikingly unfamiliar, flying the flag for the mundane in an otherwise kaleidoscopic world.

But perhaps even here, the kitsch of front-room family television might be too notable to be considered truly mundane. If so, a further case study[4] pushes us firmly into the prosaic:

Margaret and her husband Michael, both aged 34 years, were discovered to be suffering from folie à deux when they were both found to be sharing similar persecutory delusions. They believed that certain persons were entering their house, spreading dust and fluff and “wearing down their shoes”.

The domestic nature of the delusions is thrown into stark relief by the context of the rare and unusual syndrome (folie à deux) in which they occur, highlighting them as the ordinary out-of-place. For outside observers, the logical implosion starts here: the ordinary becomes remarkable, which makes it no longer ordinary, which means it is remarkable in the remarkable world of psychosis, meaning it is ordinary once more (and so on).

Here, the appearance of the mundane in the world of psychosis causes a paradox in the rational world, making it a little more ambiguous and difficult to understand; while the mundane leaks into psychosis, psychosis is leaking back into the world of the mundane.

Part of the curiosity of this situation lies in the fact that psychosis, once thought of as a cut-and-dried categorical diagnosis, is now increasingly believed to exist on a continuum, with some people simply having more anomalous experiences than others. The people who have the most intense unusual experiences tend to become distressed or behave oddly enough to come to the attention of psychiatrists, whereas others may be able to continue with daily living and never be considered of medical interest.

The figures are striking: up to 40% of the general population have been found to have experienced a daytime hallucination[5], and 10% of the general population score above the average of psychotic inpatients on measures of delusional thinking[6]. It seems ripples in consensual reality are being recognised as common, and we are forced to question what counts as mundane reality.

The philosopher Charles Fort perhaps had the best grip on the situation. He spent his life cataloguing the seemingly fantastical phenomena that science had “damned” as unworthy of explanation[7]. By amassing such a catalogue of strange events, Fort demonstrated that the anomalous can be commonplace. Strange phenomena are not considered strange because they necessarily happen infrequently, but because we do not have an easy explanation for them.

According to Fort, the mundane consists of those things that can be explained away without a second thought; they go unmarked and unnoticed because they do not challenge our preconceptions. When the mundane appears in psychosis, the contrast can be jarring, challenging our ideas of the everyday and the unusual.

From this perspective, anything can be lifted from the world of the mundane simply by examining it closely and intently, or damned to the everyday world by overlooking its importance.

Link: A Proposal to Classify Happiness as a Psychiatric Disorder

Abstract: It is proposed that happiness be classified as a psychiatric disorder and be included in future editions of the major diagnostic manuals under the new name: major affective disorder, pleasant type. In a review of the relevant literature it is shown that happiness is statistically abnormal, consists of a discrete cluster of symptoms, is associated with a range of cognitive abnormalities, and probably reflects the abnormal functioning of the central nervous system. One possible objection to this proposal remains—that happiness is not negatively valued. However, this objection is dismissed as scientifically irrelevant.

Happiness is a phenomenon that has received very little attention from psychopathologists, perhaps because it is not normally regarded as a cause for therapeutic concern. For this reason, research on the topic of happiness has been rather limited and any statement of existing knowledge about the phenomenon must therefore be supplemented by uncontrolled clinical observation. Nonetheless, I will argue that there is a prima facie case for classifying happiness as a psychiatric disorder, suitable for inclusion in future revisions of diagnostic manuals such as the American Psychiatric Association’s Diagnostic and Statistical Manual or the World Health Organisation’s International Classification of Diseases. I am aware that this proposal is counter-intuitive and likely to be resisted by the psychological and psychiatric community. However, such resistance will have to explain the relative security of happiness as a psychiatric disorder as compared with less secure, though established, conditions such as schizophrenia. In anticipation of the likely resistance to my proposal I will therefore preface my arguments with a brief review of the existing scientific literature on happiness. Much of the following account is based on the work of Argyle [1].

It is perhaps premature to attempt an exact definition of happiness. However, despite the fact that formal diagnostic criteria have yet to be agreed, it seems likely that happiness has affective, cognitive and behavioural components. Thus, happiness is usually characterized by a positive mood, sometimes described as “elation” or “joy”, although this may be relatively absent in the milder happy states, sometimes termed “contentment”. Argyle, in his review of the relevant empirical literature, focuses more on the cognitive components of happiness, which he describes in terms of a general satisfaction with specific areas of life such as relationships and work and also in terms of the happy person’s belief in his or her own competence and self-efficacy. The behavioural components of happiness are less easily characterized but particular facial expressions such as “smiling” have been noted; interestingly, there is evidence that these expressions are common across cultures, which suggests that they may be biological in origin [2]. Uncontrolled observations, such as those found in plays and novels, suggest that happy people are often carefree, impulsive and unpredictable in their actions. Certain kinds of social behaviour have also been reported to accompany happiness, including a high frequency of recreational interpersonal contacts and pro-social actions towards others identified as less happy [3]. This latter observation may help to explain the persistence of happiness despite its debilitating consequences (to be described below): happy people seem to wish to force their condition on their unhappy companions and relatives. In the absence of well-established physiological markers of happiness, it seems likely that the subjective mood state will continue to be the most widely recognized indicator of the condition. Indeed, Argyle has remarked that “If people say they are happy then they are happy” [4]. In this regard, the rules for identifying happiness are remarkably similar to those used by psychiatrists to identify other disorders, for example depression.

The epidemiology of happiness has hardly been researched. Although it seems likely that happiness is a relatively rare phenomenon, exact incidence rates must depend on the criteria for happiness employed in any particular survey (in this respect happiness is also not unique: similar problems have been encountered when attempts have been made to investigate the epidemiology of other psychiatric disorders such as schizophrenia [5]). Thus, although Warr and Payne [6] found that as many as 25 per cent of a British sample said that they were “very pleased with things yesterday”, Andrews and Withey [7], studying a large US sample, found that only 5.5 per cent of their subjects rated themselves as scoring maximum on a nine-point scale of life-satisfaction. One problem with these kinds of data is that they have been generated in the absence of good operational criteria for happiness and have focused on the cognitive components of the condition (perhaps because these are comparatively easy to measure) rather than the affective and behavioural components. It is therefore quite possible that informal observation is a better guide to the prevalence of happiness in community samples. Certainly, if television soap operas in any way reflect real life, happiness is a very rare phenomenon indeed in places as far apart as Manchester, the East End of London and Australia. Interestingly, despite all the uncertainty about the epidemiology of happiness, there is some evidence that it is unevenly distributed amongst the social classes: individuals in the higher socio-economic groupings generally report greater positive affect [8], which may reflect the fact that they are most frequently exposed to environmental risk-factors for happiness.

Further light might be shed on the nature of happiness by considering its aetiology. Although the cause or causes of happiness have yet to be identified, aetiological theories have implicated both environmental and biological factors. With respect to the environment, there seems little doubt that discrete episodes of happiness typically follow positive life events [9]. However, the observation that some people are generally happier than others suggests that less transient factors may also play an important role. While it has been suggested that a general disposition towards happiness is related to self-esteem [10] and social skills [1], two variables which presumably reflect early learning experiences, the finding that extroversion is a good predictor of happiness even years in the future [11] suggests that biological factors may be implicated.

Evidence that happiness is related to cognitive abnormalities will be outlined below when I discuss the proposition that happiness is irrational. Genetic studies of happiness are a neglected avenue of research, but neurophysiological evidence points to the involvement of certain brain centres and biochemical systems. Thus, stimulation of various brain regions has been found to elicit the affective and behavioural components of happiness in animals [12], as has the administration of drugs which affect the central nervous system, such as amphetamine and alcohol [13].

Taking the environmental and biological evidence together, it may be necessary to discriminate between different types of happiness. Thus, it may be useful to distinguish between reactive happiness, usually manifesting itself as an acute episode followed by a rapid remission of symptoms, and endogenous happiness, which may have a relatively chronic onset and which may be less often followed by symptomatic improvement. The differential diagnosis of these two types of happiness is an obvious project for future studies. Given the apparent similarities between happiness and depression, it seems possible that endogenous happiness will be characterised by positive mood first thing in the morning, a heavy appetite and persistent erotomania.

Link: Why French Kids Don't Have ADHD

In the United States, at least 9% of school-aged children have been diagnosed with ADHD and are taking pharmaceutical medications. In France, the percentage of kids diagnosed and medicated for ADHD is less than 0.5%. How come the epidemic of ADHD—which has become firmly established in the United States—has almost completely passed over children in France?

Is ADHD a biological-neurological disorder? Surprisingly, the answer to this question depends on whether you live in France or in the United States. In the United States, child psychiatrists consider ADHD to be a biological disorder with biological causes. The preferred treatment is also biological—psychostimulant medications such as Ritalin and Adderall.

French child psychiatrists, on the other hand, view ADHD as a medical condition that has psycho-social and situational causes. Instead of treating children’s focusing and behavioral problems with drugs, French doctors prefer to look for the underlying issue that is causing the child distress—not in the child’s brain but in the child’s social context. They then choose to treat the underlying social context problem with psychotherapy or family counseling. This is a very different way of seeing things from the American tendency to attribute all symptoms to a biological dysfunction such as a chemical imbalance in the child’s brain.

French child psychiatrists don’t use the same system of classification of childhood emotional problems as American psychiatrists. They do not use the Diagnostic and Statistical Manual of Mental Disorders or DSM. According to sociologist Manuel Vallee, the French Federation of Psychiatry developed an alternative classification system in resistance to the influence of the DSM-3. This alternative was the CFTMEA (Classification Française des Troubles Mentaux de L’Enfant et de L’Adolescent), first released in 1983 and updated in 1988 and 2000. The focus of the CFTMEA is on identifying and addressing the underlying psychosocial causes of children’s symptoms, not on finding the best pharmacological band-aids with which to mask symptoms.

To the extent that French clinicians are successful at finding and repairing what has gone awry in the child’s social context, fewer children qualify for the ADHD diagnosis. Moreover, the French definition of ADHD is not as broad as in the American system, which, in my view, tends to “pathologize” much of what is normal childhood behavior. The DSM specifically does not consider underlying causes. It thus leads clinicians to give the ADHD diagnosis to a much larger number of symptomatic children, while also encouraging them to treat those children with pharmaceuticals.

The French holistic, psycho-social approach also allows for considering nutritional causes for ADHD-type symptoms—specifically the fact that the behavior of some children is worsened after eating foods with artificial colors, certain preservatives, and/or allergens. Clinicians who work with troubled children in this country—not to mention parents of many ADHD kids—are well aware that dietary interventions can sometimes help a child’s problem. In the United States, the strict focus on pharmaceutical treatment of ADHD, however, encourages clinicians to ignore the influence of dietary factors on children’s behavior.

And then, of course, there are the vastly different philosophies of child-rearing in the United States and France. These divergent philosophies could account for why French children are generally better-behaved than their American counterparts. Pamela Druckerman highlights the divergent parenting styles in her recent book, Bringing up Bébé. I believe her insights are relevant to a discussion of why French children are not diagnosed with ADHD in anything like the numbers we are seeing in the United States.


One Flew Over the Cuckoo’s Nest by Ken Kesey

Tyrannical Nurse Ratched rules her ward in an Oregon State mental hospital with a strict and unbending routine, unopposed by her patients, who remain cowed by mind-numbing medication and the threat of electric shock therapy. But her regime is disrupted by the arrival of McMurphy – the swaggering, fun-loving trickster with a devilish grin who resolves to oppose her rules on behalf of his fellow inmates. His struggle is seen through the eyes of Chief Bromden, a seemingly mute half-Indian patient who understands McMurphy’s heroic attempt to do battle with the powers that keep them imprisoned. Ken Kesey’s extraordinary first novel is an exuberant, ribald and devastatingly honest portrayal of the boundaries between sanity and madness.

Link: Our Age of Anxiety

In his controversial book American Nervousness: Its Causes and Consequences (1881), the neurologist George M. Beard proclaimed that Americans in the 19th century led all civilized nations in their susceptibility to nervous, anxious, and depressive disorders. Beard named the mixture of negative emotions “neurasthenia” and attributed it to five developments in “modern civilization”—steam power, the periodical press, the telegraph, the sciences, and the mental activity of women. In those major signs of modernity—and dozens of related ones, such as buying stocks on margin—the United States, he argued, was both “peculiar and pre-eminent” among advanced societies.

Beard claimed that American nervousness “is the product of American civilization,” and that this “distinguished malady” was seen most often among the cultural elite and the “brain-workers.” (Indeed, he had suffered from it as a student at Yale University.) Neurasthenia was strongly gendered, but it was an acceptable, even prestigious disease for male intellectuals, professionals, writers, and artists.

The physician Silas Weir Mitchell famously prescribed a “rest cure” for female neurasthenics, to slow down their dangerous mental activity and forcibly restore them to a traditionally passive feminine role. Neurasthenic men, however, were encouraged to steel their nerves and recover their masculine self-control through rugged exercise—ideally the “West cure” of horseback riding, hunting, and camping in California or Colorado. Beard, like many of his contemporaries, was interested in solutions to the malady of modern civilization, which found its most advanced form in the United States. As he concluded, “He who has solved the problem of nervousness as it appears in America shall find its problems in other lands already solved for him.”

Neurasthenia peaked in the early 1900s, according to the literary critic Tom Lutz’s fascinating cultural study, American Nervousness, 1903 (Cornell University Press, 1991). But a century later, we are in the midst of a new epidemic. Depression, angst, panic, stress—whatever you choose to call it, there is clearly a lot of it going around.

"It has not escaped many observers that today we are drenched in anxiety," says the medical historian Edward Shorter in his new book, How Everyone Became Depressed: The Rise and Fall of the Nervous Breakdown (Oxford University Press). “Depression has become a mass illness.” Shorter cites statistical evidence: “Within a given year, one in 10 Americans today will have a mood disorder, the great majority of them major depression.” The psychiatrist Jeffrey P. Kahn sees an even worse trend in his recent Angst: Origins of Anxiety and Depression (Oxford), with “the commonplace anxiety and depressive disorders” affecting at least 20 percent of Americans. That’s some 60 million people.

The New York Times even has an online series dedicated to anxiety (submissions to anxiety@nytimes.com), with regular posts on such cheering topics as the dire residential options for the elderly (“You Are Going to Die,” January 20, 2013), as well as regular apocalyptic updates on poverty, guns, violence, terrorism, antibiotic-resistant superbugs, climate change, and the inexorable decline of all the professions, occupations, and institutions we thought were eternal. A report from the World Health Organization says that, despite its wealth, the United States is the most anxious nation. “America’s precocious levels of anxiety are not just happening in spite of the great national happiness rat race but also, perhaps, because of it,” suggests Ruth Whippman in “America the Anxious” (the Times blog, September 22, 2012). Hollywood, too, is cashing in on the age of anxiety, with noir thrillers like Side Effects, about the opportunities and risks of antidepressant medications.

Not surprisingly, the rise of American angst is also the subject of a flood of new books, including memoirs, literary criticism, and medical and historical studies. Looking at several of them suggests that the search for solutions has also created some of the problems, through publicizing a category of distress.

In Angst, Kahn, who is a clinical associate professor of psychiatry at New York-Presbyterian Hospital/Weill Cornell Medical Center, sees depression and anxiety as the contemporary forms of primeval instincts of self- and social preservation. He looks at five major syndromes of anxiety and depressive disorders—panic anxiety, social anxiety, obsessive-compulsive disorder, atypical depression, and melancholic (or suicidal) depression—suggesting how each one could have evolved from some Darwinian drive to help organisms survive. Melancholic depression, for example, “kept us from using scarce resources when no longer useful to the group”—in other words, it led to the aged and infirm stepping onto the ice floe.

Kahn locates himself in the field of evolutionary psychopathology, a subset of the evolutionary biology that has seeped into much of our thinking about our maladies, our family patterns, even our literature. He believes the symptoms of angst are so common and widespread that they “must have served some kind of purpose for us in our evolutionary past.” He readily admits that there is not much hard scientific evidence for his ideas, let alone a “smoking gene” for disorders like panic anxiety.

Link: Depressive Realism: Happiness or Objectivity

Realism is described as making objective evaluations and judgments about the world; however, some research indicates that judgments made by “normal” people include a self-favored, positive bias in the perception of reality. Additionally, some studies report that such cognitive distortions are less likely among depressive people than among normal people. These findings gave rise to the depressive realism hypothesis. While the results of several studies support the notion that depressive people evaluate reality more objectively, other studies fail to support this hypothesis. Several causes for these inconsistent findings have been proposed, which can be grouped under three headings. One proposed explanation suggests that what is accepted as “realistic” in these studies is not quite objective and is in fact ambiguous. According to another perspective, the term “depressive” as used in these studies is inconsistent with the criteria of scientific diagnostic methods. A third suggests that the results can be obtained only under specific experimental conditions. General negativity and limited processing are popular approaches used to explain the depressive realism hypothesis. The debate over this hypothesis continues today. The present review focuses on frequently cited research related to depressive realism and discusses the findings.

Psychiatry even works on the assumption that the ‘healthy’ and viable is at one with the highest in personal terms. Depression, ‘fear of life’, refusal of nourishment and so on are invariably taken as signs of a pathological state and treated thereafter. Often, however, such phenomena are messages from a deeper, more immediate sense of life, bitter fruits of a geniality of thought or feeling at the root of antibiological tendencies. It is not the soul being sick, but its protection failing, or else being rejected because it is experienced—correctly—as a betrayal of ego’s highest potential.
— Peter Wessel Zapffe, The Last Messiah


Link: Mental Disorder or Neurodiversity?

One of the most famous stories of H. G. Wells, “The Country of the Blind” (1904), depicts a society, enclosed in an isolated valley amid forbidding mountains, in which a strange and persistent epidemic has rendered its members blind from birth. Their whole culture is reshaped around this difference: their notion of beauty depends on the feel rather than the look of a face; no windows adorn their houses; they work at night, when it is cool, and sleep during the day, when it is hot. A mountain climber named Nunez stumbles upon this community and hopes that he will rule over it: “In the Country of the Blind the One-Eyed Man is King,” he repeats to himself. Yet he comes to find that his ability to see is not an asset but a burden. The houses are pitch-black inside, and he loses fights to local warriors who possess extraordinary senses of touch and hearing. The blind live with no knowledge of the sense of sight, and no need for it. They consider Nunez’s eyes to be diseased, and mock his love for a beautiful woman whose face feels unattractive to them. When he finally fails to defeat them, exhausted and beaten, he gives himself up. They ask him if he still thinks he can see: “No,” he replies, “That was folly. The word means nothing — less than nothing!” They enslave him because of his apparently subhuman disability. But when they propose to remove his eyes to make him “normal,” he realizes the beauty of the mountains, the snow, the trees, the lines in the rocks, and the crispness of the sky — and he climbs a mountain, attempting to escape.

Wells’s eerie and unsettling story addresses how we understand differences that run deep into the mind and the brain. What one man thinks of as his heightened ability, another thinks of as a disability. This insight about the differences between ways of viewing the world runs back to the ancients: in Plato’s Phaedrus, Socrates discusses how insane people experience life, telling Phaedrus that madness is not “simply an evil.” Instead, “there is also a madness which is a divine gift, and the source of the chiefest blessings granted to men.” The insane, Socrates suggests, are granted a unique experience of the world, or perhaps even special access to its truths — seeing it in a prophetic or artistic way.

Today, some psychologists, journalists, and advocates explore and celebrate mental differences under the rubric of neurodiversity. The term encompasses those with Attention Deficit/Hyperactivity Disorder (ADHD), autism, schizophrenia, depression, dyslexia, and other disorders affecting the mind and brain. People living with these conditions have written books, founded websites, and started groups to explain and praise the personal worlds of those with different neurological “wiring.” The proponents of neurodiversity argue that there are positive aspects to having brains that function differently; many, therefore, prefer that we see these differences simply as differences rather than disorders. Why, they ask, should what makes them them need to be classified as a disability?

But other public figures, including many parents of affected children, focus on the difficulties and suffering brought on by these conditions. They warn of the dangers of normalizing mental disorders, potentially creating reluctance among parents to provide treatments to children — treatments that researchers are always seeking to improve. The National Institute of Mental Health, for example, has been doing extensive research on the physical and genetic causes of various mental conditions, with the aim of controlling or eliminating them.

Disagreements, then, abound. What does it mean to see and experience the world in a different way? What does it mean to be a “normal” human being? What does it mean to be abnormal, disordered, or sick? And what exactly would a cure for these disorders look like? 


Joe Arridy was the happiest man on death row

Joe Arridy didn’t ask for a last meal. It’s doubtful that he even understood the concept.

He was 23 years old and had an IQ of 46. He knew about eating and playing and trains, things you could see and smell and experience. But abstractions, like God and justice and evil, eluded him. The doctors called him an imbecile — in those days, a clinical term for someone who has the mental capacity of a child between four and six years old, someone considered more capable than an idiot but not quite as swift as a moron.

The newspapers of the 1930s had other names for him. “Feeble-minded killer.” “Weak-witted sex slayer.” “Perverted maniac.”

Like Ricky Ray Rector, the lobotomized Arkansas killer who told his executioners that he was saving a slice of pecan pie in his cell “for later,” Arridy had trouble grasping the finality of what was about to happen to him. The mystery of death had baffled much deeper thinkers than Joe Arridy. How could he be expected to fathom the ritual of a last meal?

So when they told him he could eat what he wanted, he asked for ice cream. Lots of ice cream. Ice cream all day long. Ice cream and his toy train — that was Joe’s idea of fun.

Hour after hour, day after day, Arridy would reach through the bars of his cell and send his wind-up train chugging down the corridor of death row in the Colorado State Penitentiary. Other condemned prisoners would reach out from their cells, cause diversions and train wrecks and rescues that made Arridy laugh, then send the train back to its delighted owner for another go-round.

But on the last day, the day of ice cream — January 6, 1939 — the train’s busy schedule was interrupted by a farewell visit from Joe’s mother, aunt, cousin and fourteen-year-old sister. His mother shuddered and sobbed. Dry-eyed and perplexed, Joe stared at her. Then the women left, and Joe returned to his train.


Daniel Pick on Nazism and Psychoanalysis

The author of The Pursuit of the Nazi Mind tells us what we can learn from attempts to use psychology, psychiatry and psychoanalysis to understand Nazism.

Your latest book, The Pursuit of the Nazi Mind, is a historical study of American and British attempts to use psychology, psychiatry and psychoanalysis to delve into the motivations of the Nazi leadership and the mentality of the so-called masses. When did the Allies begin these efforts?

There are several starting points, but a key moment occurred in 1943. That was when the head of the Office of Strategic Services (OSS), the US intelligence agency set up by President Roosevelt, invited the psychoanalyst Walter Langer and Harvard psychologist Henry Murray to produce studies of Adolf Hitler’s mind. It was hoped that psychological profiles of the leader would serve a useful intelligence purpose, although whether they did is another matter.

That was part of the endeavour to harness Freudian thought, alongside other disciplines, such as cultural anthropology, in the war. Many other projects and reports emerged on both sides of the Atlantic seeking to decipher what was going on and to read between the lines. Various psychoanalytic writers were deployed, for example, to study Hitler’s speeches and to speculate about what they might reveal of his mental state. Such analysts were equally concerned to comprehend how and why his persona aroused such massive enthusiasm in the 1930s, and such sustained loyalty in the 1940s.

The British were able to analyse a leading Nazi figure close up, weren’t they, when Hitler’s deputy in the Nazi Party, Rudolf Hess, unexpectedly arrived in the country?

That’s right. Hess arrives in 1941, flying across the North Sea, and hoping to meet the Duke of Hamilton and perhaps the King, and to negotiate a peace settlement, bypassing Churchill. But instead he becomes a prisoner of state.

While in custody he is put under the care of army doctors. What I try and show in my book is that a number of the doctors increasingly take an interest in his mental state and come to regard him as an important case, even an exemplar of a fascist personality type at large. They are interested in observing him, trying to make sense of his political attachments and to make some inferences about what drew him to Nazism and to ask about the larger implications of his case study. Questions of “normality” and “abnormality” became central in these investigations.

What conclusions did all these studies about the Nazi mind reach?

There were diverse findings, some of which, to be sure, seem entirely dated. Others have more resonance and also paved the way to new forms of empirical and theoretical study that emerged after the war. A controversial classic of post-war sociology that has many affinities with this wartime literature was Theodor Adorno et al’s The Authoritarian Personality in 1950.

Wartime clinicians sought to probe forms of personality, even to speculate about national character, reviving an old and questionable tradition of thought. The individual profiles of Hitler were perhaps the most dramatic illustration of the belief that you could really get inside the unconscious mind of an individual who was not a patient. It was a case of what Freud had earlier warned against – “wild analysis”, outside the consulting room. But that is not to say they had nothing interesting to say. Moreover, the question of what Hitler represented in the unconscious minds of others was also to generate a variety of hypotheses. For instance, there was an idea that in the Führer someone like Hess found an omnipotent substitute for a tyrannical father figure in his own life and most importantly inside his own mind.

A range of commentators were interested in exploring the psycho-politics of obedience. Here, alongside the question of fear and coercion, powerful forms of excitement and deep identifications and desires were also to be considered. On the one side were studies of the presumed, or recorded thoughts and fantasies of individual people – on the other, an interest in the kinds of fantasies that were mobilised in the culture itself. 

World War II proved to be a key period for experiments in and theories of group behaviour, and was to prove a major impetus to the development of the tradition of group therapy after 1945. Many considered the great choreographed festivals of Nazism as a horrifying sign of the times, a demonstration of how the so-called “mass” could come to celebrate its own obedience and subordination to the master. When Leni Riefenstahl made a film about one of the Nuremberg rallies in the 1930s, the title chosen for the work was The Triumph of the Will. From the very opening shots of the movie onwards, the preternatural master will was clearly signalled, as Hitler is shown descending on a plane from the clouds, before eventually speaking to thousands of ecstatic followers.

Link: Paris Syndrome

Paris syndrome (French: Syndrome de Paris, Japanese: パリ症候群, Pari shōkōgun) is a transient psychological disorder encountered by certain individuals, in most cases from Japan, visiting or vacationing in Paris, France. It is similar in nature to Jerusalem syndrome and Stendhal syndrome.

Japanese visitors are observed to be especially susceptible. The syndrome was first noted in Nervure, the French journal of psychiatry, in 2004. Of the estimated six million yearly visitors, the number of reported cases is not significant: according to an administrator at the Japanese embassy in France, around twenty Japanese tourists a year are affected by the syndrome. The susceptibility of Japanese people may be linked to the popularity of Paris in Japanese culture, notably the idealized image of Paris prevalent in Japanese advertising.

Mario Renoux, the president of the Franco-Japanese Medical Association, states in Libération’s article “Des Japonais entre mal du pays et mal de Paris” (December 13, 2004) that Japanese magazines are primarily responsible for creating this syndrome. Renoux indicates that Japanese media, magazines in particular, often depict Paris as a place where most people on the street look like stick-thin models and most women dress in high-fashion brands.

Paris syndrome is characterized by a number of psychiatric symptoms such as acute delusional states, hallucinations, feelings of persecution (perceptions of being a victim of prejudice, aggression, or hostility from others), derealization, depersonalization, and anxiety, as well as psychosomatic manifestations such as dizziness, tachycardia, and sweating.[5]

The authors of the journal article cite the following factors as combining to induce the phenomenon:

  1. Language barrier - few Japanese speak French and vice versa. This is believed to be the principal cause and is thought to engender the remainder. Apart from the obvious differences between French and Japanese, many everyday phrases and idioms are shorn of meaning and substance when translated, adding to the confusion of some who have not previously encountered them.
  2. Cultural difference - the large difference not only in language but in manner. The French can communicate on an informal level, in contrast to the rigidly formal Japanese culture, which proves too great a difficulty for some Japanese visitors. It is thought that the rapid and frequent fluctuations in mood, tense and attitude, especially in the delivery of humour, cause the most difficulty.
  3. Idealised image of Paris - the syndrome is also speculated to arise from an individual’s inability to reconcile the disparity between the popular Japanese image and the reality of Paris.
  4. Exhaustion - finally, it is thought that the overbooking of one’s time and energy, whether on a business trip or on holiday, in attempting to cram too much into every moment of a stay in Paris, along with the effects of jet lag, all contribute to the psychological destabilization of some visitors.

Prozac Campus: the Next Generation

In an accelerated culture, 15 years is a long time. And last spring, when a stiff, cream-colored envelope arrived in the mail to announce preparations for my 10th college reunion, I realized that it had been nearly that long since my experience with antidepressants began.

When the envelope came, I was at work on a book about my generation’s relationship to psychiatric drugs. The book opened with a memory from the fall of 1997, when I was a dumped, homesick, anxious, and tearful freshman. I sought guidance in my school’s health and counseling center, where I was quickly treated to a remedy that seemed exotic—a diagnosis of depression and a prescription for a pill known as an SSRI, or selective serotonin reuptake inhibitor. Over the following months, I realized with a mounting sense of shock how many of my classmates were using medication, too.

For those of us who were teenagers in the 1990s, this feeling of surprise was fundamental to our experience of psychiatric drugs. In our midteen years, antidepressants and medication for attention-deficit hyperactivity disorder hadn’t been everywhere, and then suddenly they were. We attended college during the first report of a psychopharmaceutical explosion.

But people born in the late ’80s and early ’90s were raised in a very different world. They never knew a time before Prozac, and can scarcely remember when advertisements for prescription medication didn’t peer out from bus shelters or blare from TV. Prompted by the arrival of my reunion invitation, I began to wonder whether psychiatric medication meant something different to this new generation of students than it had to mine.

My interest was piqued by two sensationalistic but widely reported stories a few years ago. The first was of a precipitous deterioration in college students’ mental health. One survey of incoming college freshmen found that the self-reported mental well-being of this group had fallen to its lowest level since the survey began 25 years earlier. Another major survey announced that 30 percent of college students had felt “so depressed that it was difficult to function” at some point in the preceding year. College mental-health staffs across the country reported facing an unprecedented volume of requests for service and a nearly ceaseless stream of psychiatric emergencies.

The second story was of a stark rise in the amount of academic stress faced by college students. Reports noted that undergraduate admissions have become more selective in the past decade. Today’s students apply to more schools, endure more rejection, and live their precollege lives keenly attuned to the need to compete. Deans had noticed a more serious bent among college students lately, describing a group that was apt to approach college as though it were a professional job, rather than a time for exploration. One college president lamented that the “moments of woolgathering, dreaming, improvisation” that were integral to a liberal-arts education a generation ago had become a hard sell for today’s crop of highly driven students. Sometimes the stories about stress on campus implied that this new breed of students were the type of kids—from affluent, self-aware, achievement-oriented families—who had been raised to view antidepressants and ADHD medications as a means of keeping up.

Were these stories true, I wondered? What role did medication play on campus now, and what did students’ attitudes toward it augur for the future? With those questions in mind, I decided to return to a college whose size and orientation reminded me a little bit of my own, to look for the change that 15 years had brought.

Link: Prozac Campus: the Next Generation

In an accelerated culture, 15 years is a long time. And last spring, when a stiff, cream-colored envelope arrived in the mail to announce preparations for my 10th college reunion, I realized that it had been nearly that long since my experience with antidepressants began.

When the envelope came, I was at work on a book about my generation’s relationship to psychiatric drugs. The book opened with a memory from the fall of 1997, when I was a dumped, homesick, anxious, and tearful freshman. I sought guidance in my school’s health and counseling center, where I was quickly treated to a remedy that seemed exotic—a diagnosis of depression and a prescription for a pill known as an SSRI, or selective serotonin reuptake inhibitor. Over the following months, I realized with a mounting sense of shock how many of my classmates were using medication, too.

For those of us who were teenagers in the 1990s, this feeling of surprise was fundamental to our experience of psychiatric drugs. In our midteen years, antidepressants and medication for attention-deficit hyperactivity disorder hadn’t been everywhere, and then suddenly they were. We attended college amid the first reports of a psychopharmaceutical explosion.

But people born in the late ’80s and early ’90s were raised in a very different world. They never knew a time before Prozac and can scarcely remember a time when advertisements for prescription medication didn’t peer out from bus shelters or blare from TV. Prompted by the arrival of my reunion invitation, I began to wonder whether psychiatric medication meant something different to this new generation of students than it had to mine.

My interest was piqued by two sensationalistic but widely reported stories a few years ago. The first was of a precipitous deterioration in college students’ mental health. One survey of incoming college freshmen found that the self-reported mental well-being of this group had fallen to its lowest level since the survey began 25 years earlier. Another major survey announced that 30 percent of college students had felt “so depressed that it was difficult to function” at some point in the preceding year. College mental-health staffs across the country reported facing an unprecedented volume of requests for service and a nearly ceaseless stream of psychiatric emergencies.

The second story was of a stark rise in the amount of academic stress faced by college students. Reports noted that undergraduate admissions had become more selective in the past decade. Today’s students apply to more schools, endure more rejection, and live their precollege lives keenly attuned to the need to compete. Deans had noticed a more serious bent among college students lately, describing a group that was apt to approach college as though it were a professional job rather than a time for exploration. One college president lamented that the “moments of woolgathering, dreaming, improvisation” that were integral to a liberal-arts education a generation ago had become a hard sell for today’s crop of highly driven students. Sometimes the stories about stress on campus implied that these new students were the type of kids—from affluent, self-aware, achievement-oriented families—who had been raised to view antidepressants and ADHD medications as a means of keeping up.

Were these stories true? I wondered. What role did medication play on campus now, and what did students’ attitudes toward it augur for the future? With those questions in mind, I decided to return to a college whose size and orientation reminded me a little bit of my own, to look for the change that 15 years had brought.


Link: Can You Call a 9-Year-Old a Psychopath?

One day last summer, Anne and her husband, Miguel, took their 9-year-old son, Michael, to a Florida elementary school for the first day of what the family chose to call “summer camp.” For years, Anne and Miguel have struggled to understand their eldest son, an elegant boy with high-planed cheeks, wide eyes and curly light brown hair, whose periodic rages alternate with moments of chilly detachment. Michael’s eight-week program was, in reality, a highly structured psychological study — less summer camp than camp of last resort.

Michael’s problems started, according to his mother, around age 3, shortly after his brother Allan was born. At the time, she said, Michael was mostly just acting “like a brat,” but his behavior soon escalated to throwing tantrums during which he would scream and shriek inconsolably. These weren’t ordinary toddler’s fits. “It wasn’t, ‘I’m tired’ or ‘I’m frustrated’ — the normal things kids do,” Anne remembered. “His behavior was really out there. And it would happen for hours and hours each day, no matter what we did.” For several years, Michael screamed every time his parents told him to put on his shoes or perform other ordinary tasks, like retrieving one of his toys from the living room. “Going somewhere, staying somewhere — anything would set him off,” Miguel said. These furies lasted well beyond toddlerhood. At 8, Michael would still fly into a rage when Anne or Miguel tried to get him ready for school, punching the wall and kicking holes in the door. Left unwatched, he would cut up his trousers with scissors or methodically pull his hair out. He would also vent his anger by slamming the toilet seat down again and again until it broke.

When Anne and Miguel first took Michael to see a therapist, he was given a diagnosis of “firstborn syndrome”: acting out because he resented his new sibling. While both parents acknowledged that Michael was deeply hostile to the new baby, sibling rivalry didn’t seem sufficient to explain his consistently extreme behavior.

By the time he turned 5, Michael had developed an uncanny ability to switch from full-blown anger to moments of pure rationality or calculated charm — a facility that Anne describes as deeply unsettling. “You never know when you’re going to see a proper emotion,” she said. She recalled one argument, over a homework assignment, when Michael shrieked and wept as she tried to reason with him. “I said: ‘Michael, remember the brainstorming we did yesterday? All you have to do is take your thoughts from that and turn them into sentences, and you’re done!’ He’s still screaming bloody murder, so I say, ‘Michael, I thought we brainstormed so we could avoid all this drama today.’ He stopped dead, in the middle of the screaming, turned to me and said in this flat, adult voice, ‘Well, you didn’t think that through very clearly then, did you?’ ”

For the past 10 years, Dan Waschbusch, the researcher running Michael’s summer program, has been studying “callous-unemotional” children — those who exhibit a distinctive lack of affect, remorse or empathy — and who are considered at risk of becoming psychopaths as adults. To evaluate Michael, Waschbusch used a combination of psychological exams and teacher- and family-rating scales, including the Inventory of Callous-Unemotional Traits, the Child Psychopathy Scale and a modified version of the Antisocial Process Screening Device — all tools designed to measure the cold, predatory conduct most closely associated with adult psychopathy. (The terms “sociopath” and “psychopath” are essentially identical.) A research assistant interviewed Michael’s parents and teachers about his behavior at home and in school. When all the exams and reports were tabulated, Michael was almost two standard deviations outside the normal range for callous-unemotional behavior, which placed him on the severe end of the spectrum.
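(A note on the arithmetic behind that finding: placing a child “almost two standard deviations outside the normal range” is ordinary standardization of a raw scale total against normative data. The Python sketch below is purely illustrative; the score, mean, and standard deviation are invented for the example and are not the real norms of any instrument named above.)

    def z_score(raw, norm_mean, norm_sd):
        """Standardize a raw scale total against a normative sample."""
        return (raw - norm_mean) / norm_sd

    # All values are invented for illustration; these are not real norms.
    michael_raw = 41.0  # hypothetical raw total on a CU-trait inventory
    norm_mean = 24.0    # hypothetical normative mean for boys his age
    norm_sd = 9.0       # hypothetical normative standard deviation

    z = z_score(michael_raw, norm_mean, norm_sd)
    print(f"z = {z:+.2f}")  # z = +1.89, i.e. almost two SDs above the mean

    # A score beyond +2 sits above roughly the 97.7th percentile of the
    # normative sample, which is what places a child at the "severe" end.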

Currently, there is no standard test for psychopathy in children, but a growing number of psychologists believe that psychopathy, like autism, is a distinct neurological condition — one that can be identified in children as young as 5. Crucial to this diagnosis are callous-unemotional traits, which most researchers now believe distinguish “fledgling psychopaths” from children with ordinary conduct disorder, who are also impulsive and hard to control and exhibit hostile or violent behavior. According to some studies, roughly one-third of children with severe behavioral problems — like the aggressive disobedience that Michael displays — also test above normal on callous-unemotional traits. (Narcissism and impulsivity, which are part of the adult diagnostic criteria, are difficult to apply to children, who are narcissistic and impulsive by nature.)

In some children, C.U. traits manifest in obvious ways. Paul Frick, a psychologist at the University of New Orleans who has studied risk factors for psychopathy in children for two decades, described one boy who used a knife to cut off the tail of the family cat bit by bit, over a period of weeks. The boy was proud of the serial amputations, which his parents initially failed to notice. “When we talked about it, he was very straightforward,” Frick recalls. “He said: ‘I want to be a scientist, and I was experimenting. I wanted to see how the cat would react.’ ”

In another famous case, a 9-year-old boy named Jeffrey Bailey pushed a toddler into the deep end of a motel swimming pool in Florida. As the boy struggled and sank to the bottom, Bailey pulled up a chair to watch. Questioned by the police afterward, Bailey explained that he was curious to see someone drown. When he was taken into custody, he seemed untroubled by the prospect of jail but was pleased to be the center of attention.

In many children, though, the signs are subtler. Callous-unemotional children tend to be highly manipulative, Frick notes. They also lie frequently — not just to avoid punishment, as all children will, but for any reason, or none. “Most kids, if you catch them stealing a cookie from the jar before dinner, they’ll look guilty,” Frick says. “They want the cookie, but they also feel bad. Even kids with severe A.D.H.D.: they may have poor impulse control, but they still feel bad when they realize that their mom is mad at them.” Callous-unemotional children are unrepentant. “They don’t care if someone is mad at them,” Frick says. “They don’t care if they hurt someone’s feelings.” Like adult psychopaths, they can seem to lack humanity. “If they can get what they want without being cruel, that’s often easier,” Frick observes. “But at the end of the day, they’ll do whatever works best.”

Link: Post-Prozac Nation

The science and history of treating depression.

Few medicines, in the history of pharmaceuticals, have been greeted with as much exultation as a green-and-white pill containing 20 milligrams of fluoxetine hydrochloride — the chemical we know as Prozac. In her 1994 book “Prozac Nation,” Elizabeth Wurtzel wrote of a nearly transcendental experience on the drug. Before she began treatment with antidepressants, she was living in “a computer program of total negativity … an absence of affect, absence of feeling, absence of response, absence of interest.” She floated from one “suicidal reverie” to the next. Yet, just a few weeks after starting Prozac, her life was transformed. “One morning I woke up and really did want to live… . It was as if the miasma of depression had lifted off me, in the same way that the fog in San Francisco rises as the day wears on. Was it the Prozac? No doubt.”

Like Wurtzel, millions of Americans embraced antidepressants. In 1988, a year after the Food and Drug Administration approved Prozac, 2,469,000 prescriptions for it were dispensed in America. By 2002, that number had risen to 33,320,000. By 2008, antidepressants were the third-most-common prescription drug taken in America.
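Taken at face value, those two figures imply a roughly thirteenfold increase over 14 years, or about 20 percent compound annual growth. A minimal sketch of the arithmetic, in Python, using only the numbers quoted above:

    # Back-of-the-envelope arithmetic on the prescription figures above.
    rx_1988 = 2_469_000   # Prozac prescriptions dispensed in 1988
    rx_2002 = 33_320_000  # Prozac prescriptions dispensed in 2002
    years = 2002 - 1988

    growth_factor = rx_2002 / rx_1988        # about 13.5x overall
    cagr = growth_factor ** (1 / years) - 1  # compound annual growth rate

    print(f"overall growth: {growth_factor:.1f}x")  # 13.5x
    print(f"compound annual growth: {cagr:.1%}")    # about 20.4% per year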

Fast forward to 2012 and the same antidepressants that inspired such enthusiasm have become the new villains of modern psychopharmacology — overhyped, overprescribed chemicals, symptomatic of a pill-happy culture searching for quick fixes for complex mental problems. In “The Emperor’s New Drugs,” the psychologist Irving Kirsch asserted that antidepressants work no better than sugar pills and that the clinical effectiveness of the drugs is, largely, a myth. If the lodestone book of the 1990s was Peter Kramer’s near-ecstatic testimonial, “Listening to Prozac,” then the book of the 2000s is David Healy’s “Let Them Eat Prozac: The Unhealthy Relationship Between the Pharmaceutical Industry and Depression.”

In fact, the very theory for how these drugs work has been called into question. Nerve cells — neurons — talk to one another through chemical signals called neurotransmitters, which come in a variety of forms, like serotonin, dopamine and norepinephrine. For decades, a central theory in psychiatry was that antidepressants worked by raising serotonin levels in the brain. In depressed brains, the serotonin signal had somehow been “weakened” because of a chemical imbalance in neurotransmitters. Prozac and Paxil were thought to increase serotonin levels, thereby strengthening the signals between nerve cells — as if a megaphone had been inserted in the middle.
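The logic of that hypothesis can be made concrete with a deliberately crude model: if serotonin enters the synapse at a constant rate and is cleared in proportion to its concentration, then partially blocking clearance (reuptake) raises the steady-state level. The Python sketch below is a caricature for intuition only; the rates are invented, and nothing in it is a claim about real neurochemistry.

    # Toy model of the reuptake idea: dS/dt = release - clearance * S,
    # which settles at the steady state S* = release / clearance.
    def steady_state_serotonin(release_rate, clearance_rate):
        """Steady-state synaptic serotonin level in the toy model."""
        return release_rate / clearance_rate

    release = 1.0             # arbitrary units; invented for illustration
    clearance_baseline = 0.5  # normal reuptake (invented rate)
    clearance_on_ssri = 0.2   # reuptake partially blocked (invented rate)

    print(steady_state_serotonin(release, clearance_baseline))  # 2.0
    print(steady_state_serotonin(release, clearance_on_ssri))   # 5.0
    # In the hypothesis, the higher steady-state level "strengthens" the
    # signal between neurons, the megaphone in the analogy above.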

But this theory has been widely criticized. In The New York Review of Books, Marcia Angell, a former editor of The New England Journal of Medicine, wrote: “After decades of trying to prove [the chemical-imbalance theory], researchers have still come up empty-handed.” Jonathan Rottenberg, writing in Psychology Today, skewered the idea thus: “As a scientific venture, the theory that low serotonin causes depression appears to be on the verge of collapse. This is as it should be; the nature of science is ultimately to be self-correcting. Ideas must yield before evidence.”

Is the “serotonin hypothesis” of depression really dead? Have we spent nearly 40 years heading down one path only to find ourselves no closer to answering the question of how and why we become depressed? Must we now start from scratch and find a new theory of depression?