Sunshine Recorder

Link: The Third Wave of Therapy

What’s the best form of psychotherapy? How can you overcome sadness? Controversial psychologist Steven Hayes has an answer: embrace the pain.

Before he was an accomplished psychologist, Steven Hayes was a mental patient. His first panic attack came on suddenly, in 1978, as he sat in a psychology-department meeting at the University of North Carolina at Greensboro, where he was an assistant professor. The meeting had turned into one of those icy personal and philosophical debates common on campuses, but when Hayes tried to make a point, he couldn’t speak. As everyone turned to him, his mouth could only open and close wordlessly, as though it were a broken toy. His heart raced, and he thought he might be having a heart attack. He was 29.

Eventually the attack subsided, but a week later he endured a similar episode in another meeting. Over the next two years, the panic attacks grew more frequent. Overwhelming feelings of anxiety colonized more and more of his life’s terrain. By 1980, Hayes could lecture only with great difficulty, and he virtually never rode in an elevator, walked into a movie theater or ate in a restaurant. Because he couldn’t teach much, he would often show films in his classes, and his hands would shake so badly that he could barely get the 8-mm film into the projector. As a student, he had earned his way from modest programs at colleges in California and West Virginia to an internship at Brown Medical School with esteemed psychologist David Barlow. Hayes had hoped to be a full professor by his early 30s, but what had been a promising career stalled.

Today Hayes, who turned 57 in August, hasn’t had a panic attack in a decade, and he is at the top of his field. A past president of the distinguished Association for Behavioral and Cognitive Therapies, he has written or co-written some 300 peer-reviewed articles and 27 books. Few psychologists are so well published. His most recent book, which he wrote with the help of author Spencer Smith, carries the grating self-help title Get Out of Your Mind & Into Your Life (New Harbinger Publications; 207 pages). But the book, which has helped thrust Hayes into a bitter debate in psychology, takes two highly unusual turns for a self-help manual: it says at the outset that its advice cannot cure the reader’s pain (the first sentence is “People suffer”), and it advises sufferers not to fight negative feelings but to accept them as part of life. Happiness, the book says, is not normal.

If Hayes is correct, the way most of us think about psychology is wrong. In the years since Hayes suffered his first panic attacks, an approach called cognitive therapy has become the gold-standard treatment (with or without supplementary drugs) for a wide range of mental illnesses, from depression to post-traumatic stress disorder. And although a good cognitive therapist would never advise a panic patient merely to try to will away his anxiety, the main long-term strategy of cognitive therapy is to attack and ultimately change negative thoughts and beliefs rather than accept them. “I always screw up at work,” you might think. Or “Everyone’s looking at my fat stomach” or “I can’t go to that meeting without having a drink.” Part mentor, part coach, part scold, the cognitive therapist questions such beliefs: Do you really screw up at work all the time, or like most people, do you excel sometimes and fail sometimes? Is everyone really looking at your stomach, or are you overgeneralizing about the way people see you? The idea is that the therapist will help the patient develop new, more realistic beliefs.

But Hayes and other top researchers, especially Marsha Linehan and Robert Kohlenberg at the University of Washington in Seattle and Zindel Segal at the University of Toronto, are focusing less on how to manipulate the content of our thoughts and more on how to change their context: to modify the way we see thoughts and feelings so they can’t push us around and control our behavior. Segal calls that process disidentifying with thoughts: seeing them not as who we are but as mere reactions. You think people always look at your stomach? Maybe so. Maybe it’s huge. Maybe they don’t; many of us are just hard on ourselves. But Hayes and like-minded therapists don’t try to prove or disprove such thoughts. Whereas cognitive therapists speak of “cognitive errors” and “distorted interpretations,” Hayes and the others teach mindfulness, the meditation-inspired practice of observing thoughts without getting entangled in them, approaching them as though they were leaves floating down a stream (“… I want coffee/I should work out/I’m depressed/We need milk …”). Hayes is the most divisive and ambitious of the third-wave psychologists, so called because they are turning from the second wave of cognitive therapy, which itself largely subsumed the first wave of behavior therapy, devised in part by B.F. Skinner. (Behavior therapy, in turn, broke with the Freudian model by emphasizing observable behaviors over hidden meanings and feelings.)

Hayes and other third wavers say trying to correct negative thoughts can, paradoxically, intensify them, in the same way that a dieter who keeps telling himself “I really don’t want the pizza” ends up obsessing about … pizza. Rather, Hayes and the roughly 12,000 students and professionals who have been trained in his formal psychotherapy, which is called acceptance and commitment therapy (ACT), say we should acknowledge that negative thoughts recur throughout life. Instead of challenging them, Hayes says, we should concentrate on identifying and committing to our values. Once we become willing to feel negative emotions, he argues, we will find it easier to figure out what life should be about and get on with it. That’s easier said than done, of course, but his point is that it’s hard to think about the big things when we’re trying so hard to regulate our thinking.

The cognitive model permeates the culture so thoroughly that many of us don’t think to name it; it’s just what psychologists do. When Phillip McGraw (“Dr. Phil”) gives advice, for instance, much of it flows from a cognitive perspective. “Are you actively creating a toxic environment for yourself?” he asks on his website. “Or are the messages that you send yourself characterized by a rational and productive optimism?” Cognitive approaches were first developed in the 1950s and early ’60s by two researchers working independently, University of Pennsylvania psychiatrist Aaron Beck, now 84, and Albert Ellis, 92, a New York City psychologist. The therapy’s ascendance was rapid, particularly in the academy. Although many therapists still practice an evolved form of Freudian analysis called psychodynamic therapy, it’s difficult to find a therapist trained in the past 15 years who didn’t at least learn the cognitive model.

The debates between cognitive therapists and third-wave critics are sometimes arcane and petty, but few questions seem as elemental to psychology as whether we can accept interior torment or analyze our way out of it. Hayes was received at last year’s Association for Behavioral and Cognitive Therapies convention in Washington with reverence, and with revulsion. It wasn’t uncommon to see therapists gazing at him between presentations as though he were Yoda. (Hayes is given to numinous proclamations: “I see this acceptance conception, this mindfulness conception, as having the power to change the world.”) But skeptics dog him everywhere. “He certainly has a following and even an entourage,” says Providence College psychology professor Michael Spiegler. “But I do think some of what he does is cultlike in terms of having that kind of following, of having to agree wholeheartedly with it, or if you don’t, you don’t get it.”



Sunset.

When you just read that word, no event occurred other than that your eyes moved across the page. But your mind may have raced off in any number of directions. Perhaps you thought of a beautiful sunset. And then maybe you thought of the beautiful sunset on the day your mother died, which might have evoked sadness.

Hayes uses such exercises to make the point that our thoughts can have unexpected consequences. Get Out of Your Mind & Into Your Life illustrates that unreliability by quoting a 1998 Psychological Science study in which 84 subjects were asked to hold a pendulum steady. Some were told not only to hold it steady but also not to move the pendulum sideways. But the latter group tended to move the pendulum sideways more often than the group told merely to keep it steady. Why? “Because thinking about not having it move [sideways] activates the very muscles that move it that way,” Hayes and Smith write. To be sure, cognitive therapy doesn’t ask people to suppress negative thoughts, but it does ask us to challenge them, to fix them.

By contrast, ACT tries to defuse the power of thoughts. Instead of saying “I’m depressed,” it proposes saying “I’m having the thought that I’m depressed.” Hayes isn’t saying people don’t really feel pain (he has felt plenty of it), but he believes we turn pain into suffering when we try to push it away. ACT therapists use metaphors to explain acceptance: Is it easier to drag a heavy weight on a chain behind you or to pick it up and walk with it held close?

The commitment part of acceptance and commitment therapy (living according to your values) sounds weightless at first. Many people are so depressed or lonely or caught up in daily life that they aren’t sure what their values are. ACT therapists help you identify them with techniques like having you write your epitaph. They also ask you to verbalize your definition of being a good parent or a good worker. The therapist helps you think about what kind of things you want to learn before you die, how you want to spend your weekends, how you want to explore your faith. The point isn’t to fill your calendar with Italian lessons and fishing trips but to recognize that, for instance, you like to fish because it means you spend time with your family or in the mountains or alone—“whatever is in fishing for you,” says Hayes. One task in Get Out of Your Mind asks you to give yourself a score of 1 to 10 each week for 16 weeks to show how closely your everyday actions comport with your values. If you really enjoy skiing with friends but end up watching TV alone every weekend, you get a 1. (But if you really love holing up with reruns of The O.C., go for it; ACT is pretty nonjudgmental.)

Now seems like a good time to stipulate that all this can sound vacuous and gaggingly self-helpy. But the scientific research on ACT has shown remarkable results so far. In the January edition of the journal Behaviour Research and Therapy, Hayes and four co-authors summarize 13 trials that compared ACT’s effectiveness to that of other treatments after as long as a year. In 12 of the 13, ACT outperformed the other approaches. In two of the studies, depressed patients were randomly assigned to either cognitive therapy or ACT. After two months, the ACT patients scored an average of 59% lower on a depression scale. Those were small studies, just 39 patients total, but ACT has shown wide applicability. In a 2002 study, Hayes and a student looked at 70 hospitalized psychotics receiving the standard medication and counseling. Half were randomly assigned to four 45-min. ACT sessions; the other half formed the control. Four months later, the ACT patients had to be rehospitalized 50% less often. They actually admitted to more hallucinations than those in standard care, but ACT had reduced the believability of their hallucinations, which were now viewed more dispassionately. Hayes likes to say ACT effectively turned “I’m the Queen of Sheba” into “I’m having the thought that I’m the Queen of Sheba.” The psychotics still heard voices; they just didn’t act on them as much. They learned to hold their thoughts more lightly, increasing their psychological flexibility.

ACT has also shown promise in treating addiction. In one study, drug addicts reported less drug use with ACT than with a 12-step program. And ACT worked better than a nicotine patch for 67 smokers trying to quit. ACT encourages addicts to accept the urge to do drugs and the pain that will come when they stop, and then to work on figuring out what life means beyond getting high. ACT has also been used to help chronic-pain patients get back to their jobs faster. But perhaps the most noteworthy finding was that 27 institutionalized South African epileptics who had just nine hours of ACT in 2004 experienced significantly fewer and shorter seizures than those in a placebo treatment in which the therapist offered a supportive ear. Even Hayes, who is not usually overburdened with modesty, was startled by that finding. He could only hypothesize about why ACT might reduce seizures: “You teach people to walk right up to the moment they seize and watch it.” Somehow, he suggests, that helps reduce biochemical arousal in those critical moments before the trigger of a seizure.

Obviously, Hayes isn’t sure exactly how ACT is working in all those cases, but he believes it has something to do with learning to see our struggles, even seizures, as integral and valid parts of our lives. Recently, a San Francisco patient in ACT therapy e-mailed a plea for help to Hayes. “Just HOW I do that (live a valued, meaningful life) in the midst of disabling and oppressive private experience (anxiety, depression, lack of energy, inertia) is not clear to me. Does one just say the hell with it I will CHOOSE to live, to get into the life I value despite feeling awful 24 hours a day??”

Hayes had opened the e-mail at 3 a.m., after his newborn’s cries had awakened him. At 4:04, he sent a long response that said, in part, “You are asking, ‘Can I live a valued life, even with my pain?’ Let me ask you a different question. What if you can’t have the second without the first? What if to care the way you do care, means you will hurt. But not the heavy, stinky, evaluated, categorized, and predicted hurt that has crushed you. Rather the open, clear, knife-through-butter pain that comes from a mortal being who eventually will lose all and yet who cares.

“Imagine a universe in which your feelings, thoughts, and memories are not your enemy. They are your history brought into the current context, and your own history is not your enemy.”

Read more.

Link: Towards a Philosophy of Depression

The experience of depression has been known since classical times. Yet a deeper understanding still eludes us. Neuroscience might be leading the charge now, but philosophy is still in the race. 

To describe the indescribable: that’s how philosopher Matthew Ratcliffe of Durham University sees his task. He’s made it a career goal to shine light into bleak corners, and perhaps offer some valuable insights into the treatment of depression.

It began with Ratcliffe’s sense that the medical model falters before the enormity of the experience.

‘I had been working on philosophy of psychiatry for some time and in this context I started reading first person memoirs of depression,’ he says. ‘Some things struck me: one was that people weren’t just describing feeling less happy than usual, more miserable and more tired, but were relating a radical transformation between themselves and other people. In addition, in almost every memoir you find the complaint that depression is utterly indescribable.’

It’s not as if psychiatry has not given its best efforts to come to terms with the symptoms of depression. The latest iteration of controversial guidebook The Diagnostic and Statistical Manual of Mental Disorders does its best to survey the known boundaries of the experience. For Ratcliffe, however, something of elemental importance is missing: meaningful description of the symptoms.

It’s this lacuna of qualitative data that motivates him to look for clues beyond the clinical setting. Art is fertile terrain; Ratcliffe is ever attentive to the observations of established writers and artists who have fallen into the pit, as often they can describe the experience with excoriating accuracy.

American novelist William Styron is one such craftsman, and excerpts of his work crop up in Ratcliffe’s professional papers. In Darkness Visible, Styron penned an unflinching exploration of his despair.

‘The gray drizzle of horror induced by depression takes on the quality of physical pain,’ he writes.

‘My brain had begun to endure its familiar siege: panic and dislocation, and a sense that my thought processes were being engulfed by a toxic and unnameable tide that obliterated any enjoyable response to the living world.’

For Ratcliffe, Styron represents the literary tip of a deeply submerged iceberg. His concern is that when you look closely at the detail of the depressed experience, a remarkable range of subtle differences comes into view, differences that just aren’t captured in the traditional medical model.

As it stands, he sees our understanding of depression as ‘perilously thin’. Moreover, if depression cannot be identified without appealing to its symptoms, then a diabolical regress is complete: a serious malady remains darkly visible.

‘The inability for others to understand makes it all the more painful,’ laments Ratcliffe. ‘This lack of understanding isn’t really separable from the painfulness of depression and from a sense of isolation already so central to this condition.’

Here, Ratcliffe’s speciality comes into focus: a branch of contemporary philosophy known as phenomenology. In phenomenology, experience matters: the real life human experience of being in the world. Founder Edmund Husserl pioneered an analysis which takes emotions, perceptions and judgements seriously.

Husserl’s hunch was that attending to consciousness and its immersion in the world could unveil some truth or truths about the human condition, and possibly something more precise about the very nature of reality. It’s big stuff, and for Ratcliffe it’s extraordinarily important work when applied to maladaptive conditions such as depression.

Ratcliffe’s interests in philosophy bear the deep influences of this contemporary European movement, as demonstrated by his award-winning writing on the importance of touch, in which he follows Aristotle’s lead on a vital but poorly understood human sense. For Ratcliffe, how we inhabit the world is central to philosophical enquiry.

Husserl, Merleau-Ponty and Heidegger are three stellar names of the field. They had their disagreements, but they all homed in on this act of being in the world. Ratcliffe recognises this as thoroughly germane to the task of lassoing the ogre.

‘When we reflect on experience there is an important aspect that’s overlooked … the sense of belonging to the world, already finding oneself in the world; the pervasive experiential background that frames all of our experiences and thought. It’s so basic it’s overlooked. You don’t really glimpse it unless it wobbles.’

It’s when the world gives way under your feet that valuable insights might be released.

‘The world in phenomenology coincides remarkably well with what you might call the world of depression,’ says Ratcliffe. ‘So many people who suffer depression describe living in an utterly alone world divorced from consensus reality.’

‘It’s so utterly unfamiliar that it’s an aspect of experience they wouldn’t have been aware of even unless it had happened to them. It is part of our experience that we do not have an awareness of, and that awareness can be brought about by engaging in phenomenology.’

Related: Les Murray - living with the black dog

So how can Heidegger, that other giant of the field, help? Well, as Ratcliffe wryly quips, the controversial German thinker can’t help directly, as his work can be bedevilling; it’s no self-help manual. Nevertheless, his classic tome Being and Time does offer a way of unfurling the fabric of despair. In attempting a comprehensive portrait of existence in time, Heidegger provides a bigger cosmic backdrop against which the shreds of individual despair can be read.

Matthew Ratcliffe’s pioneering work has been picked up by Australian doctoral candidate Emily Hughes. She too became concerned with a medical model seemingly closed off to the full extent of the experience.

‘[Depression] is thought of as a dark and harmful sort of suffering, one that happens on the interior psyche of the subject, and I think that it’s restrictive to confine it to that. I started to become concerned that the focus has come down to a disorder of the brain.’

Hughes treads on that ever-so-contested ground of the physical basis of mental disturbance. She knows that the intellectual stakes are high, but feels strongly that it’s worth the fight.

‘How do you answer the question when someone says, “I feel empty inside, I feel lost, I feel like nothing means anything, that nothing matters.” Thinking in terms of the medical model with disease and illness and treatment is not actually answering people’s suffering and engaging with what their suffering actually means to them. I think that’s a real problem that needs to be addressed and I think philosophy is one way we can do that.’

Back to Heidegger: Hughes’ dissertation is steeped in Heideggerian notions of mood. She follows a similar route to Ratcliffe in attempting to widen the lens on the experience of despair.

‘For Heidegger, moods reflect the fact that we are always in the world, immersed—always thrown into the world. What things show up as meaningful and meaningless, what possibilities and projects seem enticing or frightening, that is all determined by how we are disposed through moods. For Heidegger, moods reflect our thrownness; they come before any conceptual reflection or thinking. They are our fundamental way of being.’

It’s a known, everyday kind of word, but in Heidegger’s hands ‘mood’ becomes a specific tool to prise open this ‘thrown’ existence.

‘Heidegger has a stratified view of moods,’ says Hughes. ‘His classic distinction in Being and Time is between fear and anxiety. Fear is a fallen mood. It’s a mood that is entangled with the world. It’s a response to a specific threat. The example that is often used is of a frightening dog encountered on a walk. The cause is something in the world. For Heidegger, anxiety, in contrast to fear, is a ground mood.’

Unlike the snarling dog’s obvious intentions, ground moods don’t announce themselves. They just are, like anxiety, or boredom. According to Hughes, they are essential to a fuller understanding of the human experience, especially when it appears to malfunction.

‘What happens with a ground mood like anxiety is that it turns us away from the world… we’re withdrawn from the world into what Heidegger calls the nothing. It’s in that groundlessness that Heidegger says we are confronted by the issue of our existence. We are forced to confront what it means to be human.’

Hughes’ intricate analysis weaves in Heidegger’s other main preoccupation: time and its place in existence. She sees it as one of the most important ideas to arise from phenomenological work. More importantly, she sees it as a crucial opportunity to put what appears to be inaccessible deep theory into tangible everyday practice.

‘You can open up a dialogue between philosophy and psychiatry through looking at the ways melancholic depression radically modifies time.’

Time is often reported to be warped in the experience of depression. As Hughes observes, this temporal displacement can often be exacerbated by medical interventions such as drugs and electroconvulsive therapies. It is here that the cross-disciplinary conversation can get going.

‘We could look at questions around feeling numb, being spaced out, detached or very fatigued, which are all very common experiences of people on psychoactive medicines,’ says Hughes. ‘If we can start reengaging with these questions we can start to think about what depression means and about how we experience our existence, and the breakdown of our situatedness in the world, which is what happens in anxiety, and in profound boredom, and I want to argue also happens in melancholia and despair.’

There is no doubt that neuroscience is adding valuable knowledge to our understanding of depression, but philosophy, in the hands of practitioners like Ratcliffe and Hughes, could expand our thinking on a malady so far refusing to reveal its darker secrets.

Link: Death and the Maiden

Freud’s theory of the death drive also gives us a way to think about gender.

Walter Benjamin remarked of the people who experienced the First World War:

A generation that had gone to school in horse-drawn streetcars now stood in the open air, amid a landscape in which nothing was the same except the clouds, and at its center, in a forcefield of destructive torrents and explosions, a tiny fragile human body.

What this body could mean was newly in question. Benjamin discusses economic depression, technological innovation, moral uncertainty, and violence, but the First World War also provoked a crisis of masculinity. Men died, were wounded, and later found themselves unemployed in unprecedented numbers. Meanwhile women, as Sandra M. Gilbert and Susan Gubar argue in No Man’s Land, “seemed to become, as if by some uncanny swing of history’s pendulum, even more powerful.” Tiny fragile human bodies threatened to detach themselves from their traditionally assigned gender roles. At this historical moment, death collided with gender.

Confronted with a profusion of patients shaken by traumatic dreams in the wake of World War I, Sigmund Freud had a theoretical as well as therapeutic problem. He had previously asserted that every dream is the fulfillment of a wish, but the repetition he encountered in traumatic dreams contradicted this claim. In Beyond the Pleasure Principle (1920) he asked, Why repeat something unpleasurable? Why return to the site of trauma?

To resolve this problem, Freud returned to the mist-enveloped beginnings of life itself. There is a “death instinct” that “was brought into being by the coming to life of inorganic substance,” he wrote. Death is not an event but a state; death is inorganic nature. Life arose from this inert primordial condition and its instinct is to return there. Freud is well aware how weird and implausible this sounds, admitting that even he is not convinced by his own eccentric argument.

It may have been that he was trying to resolve two problems at once. Corpses return to inorganic nature, but the mangled war dead returned in dreams. Jagged fragments of memory piercing the flesh of the present, these undead apparitions and the dreamers they haunted were overwhelmingly male. Masculinity smashed to smithereens—torn, limping, fractured, dismembered. Shrapnel embedded in living tissue. When death coexists with life it is not unity but mutilation. Freud looked further back in time than many, but the conservative impulse to restore a previous state of things in the wake of war was widespread.

Female psychoanalysts writing in the interwar years outlined a phenomenon they described as the “masculinization of women.” “We see patients rejecting their female functions,” reported Karen Horney in 1926. Horney likened woman’s resentfully subordinated relationship to men to a worker’s relationship to the boss. She claimed that her female patients often dreamed of castrating their fathers or husbands, simultaneously seeing themselves as “mutilated, injured, or wounded” men. Paradoxically, these women wanted both to destroy and to become men.

Writing on death, Freud does not directly confront the shattering experience of war which forces him to take a peculiar detour into prehistory. He is equally silent on the subject of gender. But his vision of death potentially jeopardizes conventional psychoanalytic understandings of masculinity and femininity—death recognizes no gender distinctions. Freud imagined inorganic nature as prior to life, but his understanding of the death drive is laced with the repressed anxieties about gender that animated interwar discourse. What if his theory is turned on its head? What if inorganic nature, free from gender distinction but now in coexistence with (gendered) life, lay in the present and not in the past? What if the war had killed gender itself stone dead?

Fort/da. Let’s start again.

In the beginning Freud created the heaven and the earth. And the earth was without form and void, and darkness was upon the face of the deep. And the Spirit of Freud moved upon the face of the waters. And Freud said, “Let there be life,” and there was life.

This is how Freud introduces his concept of the death drive in Beyond the Pleasure Principle: “The attributes of life were at some time evoked in inanimate matter by the action of a force whose nature we can form no conception… The tension which then arose in what had hitherto been an inanimate substance endeavored to cancel itself out. In this way the first instinct came into being: the instinct to return to the inanimate state.”

Before life there was death. Freud doesn’t go into the particulars of this lifeless universe. We might imagine the solitary earth spinning through lifeless galaxies—cliff faces, chunks of ice, mud flats, stalactites, deserted beaches, barren hillsides, boulders, unmined clusters of twinkling sapphire and ruby, perhaps a river or the occasional pool of lava. Ashes and ashes; dust and dust. Perhaps a black obelisk throbs ominously in the desert. Who knows. Freud certainly doesn’t care. This terrestrial fantasy is already too concrete, dynamic and differentiated. For Freud, the beginning of life is really the beginning of time as such.

When life finally wriggles up from the dirt to inaugurate history it is barely distinguishable from its inorganic surroundings. Freud imagines a tiny cell, “a little fragment of living substance … suspended in the middle of an external world charged with the most powerful energies.” To protect itself from the violent onslaught of the world, this lonely scrap of life forms a protective shield. It wishes to die only its proper, “natural” death and will therefore go to great lengths to avoid perishing at the hands of hostile external forces. To survive attempts on its life, the fragile organism coats itself in a layer of death—“its outermost surface ceases to have the structure proper to living matter.” The organic dons the mask of the inorganic. As more complex life-forms evolve, this surface layer is internalized but the primal deathliness remains.

In 1929, Joan Riviere, a British psychoanalyst, described the process by which women who transgressed the confines of gender expectation in the workplace often responded by donning “a mask of womanliness to avert anxiety and retribution from men.” We might align this masked woman with Freud’s tiny cell—to protect itself from the violent onslaught of the world, this lonely scrap of life forms a protective shield—“its outermost surface ceases to have the structure proper to living matter.” The organic dons the mask of the inorganic.

Riviere’s discussion of femininity as masquerade understands gender as semblance. An inorganic costume is required to simulate the supposedly organic gender differences the war had torn to shreds. As the boundaries separating the masculine from the feminine are wearing away in the social realm, they must be more rigidly upheld through the performance of ideal norms. Freud suggests that the inorganic veneer is genderless. But for Riviere, donning a mask of femininity does not eradicate the notion of sexual difference, it consolidates it.

Horney and Riviere cling stubbornly to a world carved up into gendered halves—man/woman, feminine/masculine, male/female—the words are repeated and recombined insistently, but to what do they refer? They shuttle wildly between abstract and concrete. At times gender seems to inhere in bodies and at others only adhere to them. Something spills over, refuses to be contained.

By insisting that the binary between masculinity and femininity has a genital correlate, Horney and Riviere are resigned to assisting their patients to function within the prevailing norms of society. Healthy women must come to terms with their lack of a penis, which these psychoanalysts still insist defines them psychologically. But the “masculinized woman” is more explosive than they allow her [him, it, them] to be.

Horney and Riviere still treat gender difference as a point of origin. But Freud looked further back in time. He speculates that in the beginning everything was united. Here there were no gender distinctions—there were no distinctions at all. The first thing the Oxford English Dictionary tells us about the inorganic is that it is “not characterized by having organs or members fitted for special functions.”

The emergence of life represented a violent break with this original unity. “Splintered fragments of living substance” yearned to be whole again. This is where Freud situates the origins of the sexual instincts or Eros, which strive to draw together what the rupture from the inorganic tore apart. Freud considers that the opinion Plato ascribes to Aristophanes in the Symposium might have been correct: bodies were not originally gendered male or female.

But the real insight of Beyond the Pleasure Principle is that death and life are contemporaries. Like a bullet piercing the flesh of the present, inorganic nature has a revolutionary charge – not an uncontaminated then but a hybrid, technologized now. During the interwar years, mass-produced commodities marketed to a new kind of female producer-consumer proliferated—new perfumes with chemical bases that bore no resemblance to the fragrance of flowers, sleek rayon stockings, gaudy lipstick—synthetic masks of womanliness appropriate to an emerging synthetic reality. Life coated in a layer of death. Even at her most “feminine” she [he, it, they] is inorganic.


Link: Do others judge us as harshly as we think? Overestimating the impact of our failures, shortcomings, and mishaps.

Abstract: When people suffer an embarrassing blunder, social mishap, or public failure, they often feel that their image has been severely tarnished in the eyes of others. Four studies demonstrate that these fears are commonly exaggerated. Actors who imagined committing one of several social blunders (Study 1), who experienced a public intellectual failure (Studies 2 and 3), or who were described in an embarrassing way (Study 4) anticipated being judged more harshly by others than they actually were. These exaggerated fears were produced, in part, by the actors’ tendency to be inordinately focused on their misfortunes and their resulting failure to consider the wider range of situational factors that tend to moderate onlookers’ impressions. Discussion focuses on additional mechanisms that may contribute to overly pessimistic expectations as well as the role of such expectations in producing unnecessary social anxiety.

Link: Can Classic Moral Stories Promote Honesty in Children?

The classic moral stories have been used extensively to teach children about the consequences of lying and the virtue of honesty. Despite their widespread use, there is no evidence as to whether these stories actually promote honesty in children. This study compared the effectiveness of four classic moral stories in promoting honesty in 3- to 7-year-olds. Surprisingly, the stories of “Pinocchio” and “The Boy Who Cried Wolf” failed to reduce lying in children. In contrast, the apocryphal story of “George Washington and the Cherry Tree” significantly increased truth telling. Further results suggest that the reason for the difference in honesty-promoting effectiveness between the “George Washington” story and the other stories was that the former emphasizes the positive consequences of honesty, whereas the latter focus on the negative consequences of dishonesty. When the “George Washington” story was altered to focus on the negative consequences of dishonesty, it too failed to promote honesty in children.

Link: On the Madness and Charm of Crushes

Crushes: they happen to some people often and to almost everyone sometimes. Airports, trains, streets, conferences – the dynamics of modern life are forever throwing us into fleeting contact with strangers, from amongst whom we pick out a few examples who seem to us not merely interesting, but more powerfully, the solution to our lives. This phenomenon – the crush – goes to the heart of the modern understanding of love. It could seem like a small incident, essentially comic and occasionally farcical. It may look like a minor planet in the constellation of love, but it is in fact the underlying secret central sun around which our notions of the romantic revolve.

A crush represents in pure and perfect form the dynamics of romantic philosophy: the explosive interaction of limited knowledge, outward obstacles to further discovery – and boundless hope.

The crush reveals how willing we are to allow details to suggest a whole. We allow the arch of someone’s eyebrow to suggest a personality. We take the way a person puts more weight on their right leg as they stand listening to a colleague as an indication of a witty independence of mind. Or their way of lowering their head seems proof of a complex shyness and sensitivity. From a few cues only, you anticipate years of happiness, buoyed by profound mutual sympathy. They will fully grasp that you love your mother even though you don’t get on well with her; that you are hard-working, even though you appear to be distracted; that you are hurt rather than angry. The parts of your character that confuse and puzzle others will at last find a soothing, wise, complex soulmate.

In elaborating a whole personality from a few small – but hugely evocative – details, we are doing for the inner character of a person what our eyes naturally do with the sketch of a face.

We don’t see this as a picture of someone who has no nostrils, eight strands of hair and no eyelashes. Without even noticing that we are doing it, we fill in the missing parts. Our brains are primed to take tiny visual hints and construct entire figures from them – and we do the same when it comes to character. We are – much more than we give ourselves credit for – inveterate artists of elaboration. We have evolved to be ready to make quick decisions about people (to trust or withhold, to fight or embrace, to share or deny) on the basis of very limited evidence – the way someone looks at us, how they stand, a twitch of the lips, a slight movement of the shoulder – and we bring this ingenious but fateful talent to situations of love as much as to those of danger.

The cynical voice wants to declare that these enthusiastic imaginings at the conference or on the train, in the street or in the supermarket, are just delusional; that we simply project a false, completely imaginary idea of identity onto an innocent stranger. But this is too sweeping. We may be right. The wry posture may really belong to someone with a great line in scepticism; the head tilter may be unusually generous to the foibles of others. The error of the crush is more subtle: it lies in how easily we move from spotting a range of genuinely fine traits of character to settling on a recklessly naive romantic conclusion: that the other across the train aisle or pavement constitutes a complete answer to our inner needs.

The primary error of the crush lies in overlooking a central fact about people in general, not merely this or that example, but the species as a whole: that everyone has something very substantially wrong with them once their characters are fully known, something so wrong as to make an eventual mockery of the unlimited rapture unleashed by the crush. We can’t yet know what the problems will be, but we can and should be certain that they are there, lurking somewhere behind the facade, waiting for time to unfurl them.

How can one be so sure? Because the facts of life have deformed all of our natures. No one among us has come through unscathed. There is too much to fear: mortality, loss, dependency, abandonment, ruin, humiliation, subjection. We are, all of us, desperately fragile, ill-equipped to meet with the challenges to our mental integrity: we lack courage, preparation, confidence, intelligence. We don’t have the right role models, we were (necessarily) imperfectly parented, we fight rather than explain, we nag rather than teach, we fret instead of analysing our worries, we have a precarious sense of security, we can’t understand either ourselves or others well enough, we don’t have an appetite for the truth and suffer a fatal weakness for flattering denials. The chances of a perfectly good human emerging from the perilous facts of life are non-existent. Our fears and our frailties play themselves out in a thousand ways, they can make us defensive or aggressive, grandiose or hesitant, clingy or avoidant – but we can be sure that they will make everyone much less than perfect and at moments, extremely hard to live with.

We don’t have to know someone in any way before knowing this about them. Naturally, their particular way of being flawed (and consequently very annoying) will not be visually apparent and may be concealed for quite long periods. If we only encounter another person in a fairly limited range of situations (a train journey, rather than when they are trying to get a toddler into a car seat; a conference, rather than 87 minutes into a shopping trip with their elderly father) we may, for a very long time indeed (especially if we are left alone to convert our enthusiasm into an obsession because they don’t call us back or are playing it cool), have the pleasure of believing we have landed upon an angel.

A mature person thinks, not, ‘There’s nothing good here’, but rather ‘The genuinely good things will – inevitably – come mixed up with really terrible things’

Maturity doesn’t suggest we give up on crushes. Merely that we definitively give up on the founding romantic idea upon which the Western understanding of relationships and marriage has been based for the past 250 years: that a perfect being exists who can solve all our needs and satisfy our yearnings. We need to swap the Romantic view for the Tragic Awareness of Love, which states that every human can be guaranteed to frustrate, anger, annoy, madden and disappoint us – and we will (without any malice) do the same to them. There can be no end to our sense of emptiness and incompleteness. This is a truth chiselled indelibly into the script of life. Choosing who to marry or commit ourselves to is therefore merely a case of identifying which particular variety of suffering we would most like to sacrifice ourselves for, rather than an occasion miraculously to escape from grief.

We should enjoy our crushes. A crush teaches us about qualities we admire and need to have more of in our lives. The person on the train really does have an extremely beguiling air of self-deprecation in their eyes. The person glimpsed by the fresh fruit counter really does promise to be a gentle and excellent parent. But these characters will, just as importantly, also be sure to ruin our lives in key ways, as all those we love will.

A caustic view of crushes shouldn’t depress us, merely relieve the excessive imaginative pressure that our romantic culture places upon long-term relationships. The failure of one particular partner to be the ideal Other is not – we should always understand – an argument against them; it is by no means a sign that the relationship deserves to fail or be upgraded. We have all necessarily, without being damned, ended up with that figure of our nightmares, ‘the wrong person.’

Romantic pessimism simply takes it for granted that one person should not be asked to be everything to another. With this truth accepted, we can look for ways to accommodate ourselves as gently and as kindly as we can to the awkward realities of life beside another fallen creature, for example, never feeling that we have to spend all of our time with them, being prepared for the disappointments of erotic life, not insisting on complete transparency, being ready to be maddened and to madden, making sure we are allowed to keep a vibrant independent social life and maintaining a clear-eyed refusal to act on sudden desires to run off with strangers on trains… A mature understanding of the madness of crushes turns out to be the best and perhaps the only solution to the tensions of long-term love.

Link: Spent? Capitalism’s growing problem with anxiety

In today’s turbo-charged and austerity-ravaged economy, anxiety and insecurity have become the new normal. How did this happen — and how do we fight back?

About six months ago, Moritz Erhardt, a 21-year-old intern for Bank of America Merrill Lynch in London, died after working for 72 hours straight without sleep. Journalists found a strange bravado among City workers, reflected in their tributes to a value-system of drive, resilience and regularly ‘pulling an all-nighter’ beyond all normal measures of exhaustion. That’s nothing. As one said, “On average, I get four hours’ sleep about 70% of the time … [but] there are also days with eight hours of sleep. … Work-life balance is bad. We all know this going in. I guess that’s the deal with most entry-level jobs these days.” Coupled with ambitions to succeed in careers scarcely worth the reward is a fatalism about expecting any change. That’s how it is.

This unflinching dedication to the job — indeed the job with the utmost virtue of wealth production — indicates a set of moral and social values increasingly used to describe both individual and national economies. On the one hand, productivity, growth, entrepreneurialism and drive are ‘virtues’ both of the effective individual and the expanding economy. By contrast, depression, crisis, zero-hour insecurity and burnout are used to describe both ‘failing’ economies and individuals who must work harder to perform. Whilst failing states are humiliatingly ‘bailed out’, usually under punitive conditions, individuals experience similar ‘interventions’ by more successful peers (and celebrities) on reality TV formats to increase their productive value through getting a job, looking sexier, or something similar. In each case, some internal failure (bloated public sector, childhood setback) is considered the cause of the ‘problem’ and remedied through external improvement of the individual.

Toxic Stress

A similar disengagement from reality occurred in the UK with the series of suicides and an unofficial explosion in homelessness following the coalition government’s scorched-earth retreat on social spending and welfare. One man died by self-immolation in Bolton after harassment from debt-collectors became too much to bear. Yet in each case, the media narrative of individual self-isolation and appeals to ‘speak up’ in times of hardship ignores the common societal causes of these issues. It also reinforces an effective narrative that welfare, sick leave or social support should not be given to the ‘feckless’ and ‘undeserving’ because it creates a culture of dependency (of which the reactionary ire over Channel 4’s Benefits Street is just the most recent example). This perverse rebranding of the relief of poverty is succeeding, with recorded social attitudes in the UK hardening considerably towards welfare recipients since 1997.

Reported rates of workplace stress, depression, and anxiety also correlate with worsening personal debt and public health problems like obesity and alcohol dependency. Though research remains undeveloped in this area (after all, what multinational or western government would fund such politically explosive material?), evidence from the World Health Organization (WHO), the US National Research Council and Institute of Medicine, and the Joseph Rowntree Foundation together indicates clear links between poverty and clusters of mental and physical health problems. This is not to suggest that mental health problems or suicides have only an economic cause (a recent series of suicides by high-profile ‘burnt-out’ French workers would challenge this), but the poorest have fewer forms of social and economic support in difficult times, and fewer opportunities to change their circumstances, than those with university educations, more extensive social circles or affluent relatives. Obesity, diabetes, ‘toxic stress’, and many forms of cancer have such a clear link to poverty that these ought not to be considered diseases of affluence, but conditions of poverty, in the same way that rickets, tuberculosis and infant malnutrition were to the deprived and exploited labouring classes of the 19th century.

Mental health and homelessness charities are being overwhelmed by appeals from millions abandoned for the sake of economic recovery. Study the news for long enough and stories of self-immolation, suicide and death by overworking are by no means unique to the UK (in Japan they term the latter karoshi, whereas in the case of Moritz Erhardt, our coroners call it an entirely unrelated epileptic seizure). But what is innovative is the effective management of the presented reality, so as to entirely remove any collective, public or social basis for these growing problems. Instead responsibility is attributed to the individual, who has either been unfortunate or ineffective at adapting to the world around them.

Generation F#cked

What I highlight are extreme situations, and my interest is more in the millions who continue to live in more disempowered and restricted circumstances. Behind such cases is a new normal of zero-hour contracts, working without payment (either internships at the top or ‘workfare’ at the bottom) and in states of stress and anxiety, as an increasing dependence on management thrives on sucking the remaining residues of performance from precarious workers. Living costs have sharply risen in rents and goods, while supermarkets, energy firms, landlords and financial traders have greedily increased their profits. As for institutions of popular power: scandals like the undemocratic catastrophe of the Afghanistan and Iraq Wars, the exposure of GCHQ and the NSA’s total surveillance of internet and telecommunications, mass expenses fraud by MPs, the rises in university tuition fees and the removal of EMA, regular press phone-hacking, the exposure of Murdoch’s power over successive governments’ policy, routine police racism, unlawful spying on protest groups, the unprosecuted murders of members of the public, and — as we have already forgotten — the failure to meaningfully punish anyone in the City for the bank collapses of 2008 should each have led to a crisis of political legitimacy in the UK. These fine props of the illusion of freedom and prosperity are weakened, yet remain for now stuck in place. The expense of such illusions is a grinding and unnecessary burden, felt by many occasionally and by some increasingly often, a burden that for now is explained as the individual’s to carry.

There is a generational feature to this. Those who have grown up in a society transformed by the anti-social, economic Darwinism mantras of Margaret Thatcher have experienced an intensification of productivity in the most intimate aspects of personal life. Increasing and intensified school examinations at earlier ages, combined with regular media terror-tales of abducted children and random youth violence, alongside an aggressive marketing of leisure technologies towards children, have created a more anxious, distracted, allergic, paranoid and restless generation than those prior. This comes with some mental toll, and another remarkable societal transformation is the frequency and normalisation of mental health disorders, particularly among young people.

For many, like myself, like those closest to me, anxiety and depression are not technical terms but personal experiences. It was only a year or so ago that I realised just how depressed I had been across my adult years. Continual tiredness, regular little ailments related to stress, an occasionally total mental paralysis, the silent conviction of being a fraud, and the anxieties each of these engender: I knew what my symptoms meant even then. Through a very fortunate change of circumstances, getting funding to do a PhD, I finally obtained financial stability and the chance to work towards an actual personal interest, and on my own terms. I’ve been lucky, though academia is less a lifeboat and more a ship of fools, steered by bloated Vice-Chancellors. Mental health problems and anxiety disorders are growing in academia, particularly among PhD students and postdoctoral researchers. But the problems of continual productivity, heavy teaching workloads, workplace bullying, casual sexism, poor or non-existent pay, no work-life balance, and of competing (never cooperating) as a high-impact entrepreneur of oneself, are each features of modern labour in capitalist workplaces. Stress is the cost of success. Insecurity is the new normal, as is the passive acceptance of such insecurity as some unfortunate but necessary stage to success.

In my case, a series of jobs that I had largely loved in the charity/voluntary sector had already familiarised me with these things. Freed of the stress of seeking or holding down employment, and of trying to justify myself in competitive and insecure workplaces, with time I could actually spend on things of my own choosing, I discovered that stress had acted like a perverted mental program since my early teens: work long and hard, independently, for an image of material and existential ‘success’ that no-one, in hindsight, can possibly experience. This program, one that prizes and rewards aggressively macho behaviours like competition, cunning and strength of will over cooperation, compassion and indifference, considers life as a game of winners and losers. The existential effect of such a worldview combines restless labouring for the next project, followed by the next, with a crushing and inexplicable self-loathing that inverts the neoliberal narcissism of reality gameshows like Big Brother, X Factor and The Apprentice into a nasty minor key.

Within this common self-loathing is the repressed sense that this is not right, that life should not be lived in this way. But, unable to join the dots and connect a sense of personal alienation to material circumstances, I followed the common social direction and put the blame on my own individual defects. Not any more. I suspect my own case is not unique. How is it that so many women my age whom I know well are suffering from mental health problems and accessing public or private treatments? From my work as a men’s suicide prevention campaign coordinator, I also know from conversations and research that depression and anxiety are probably experienced in equal measure by men, who are nevertheless far less likely to consider getting outside support beyond the off-licence. Is it really such a coincidence?

Anxiety Machines

These might all be conditions of modern life: rates of allergies like hayfever and eczema in the UK population had risen to 44% by 2010, whilst rates of depression have similarly soared. One could argue that rising recorded levels of these ailments signal a greater awareness and ability to self-diagnose; but this alone doesn’t explain why anxiety disorders began rising in the first place. Anxiety and fear are psychological marks of domination in all social structures, but a specific anxiety and fear emerges in financial capitalism through the accelerating demands and pressures of working and living in the neoliberal era. Greater insecurity in the workplace or school leads to an intensification of individual failure that is also manifested in the growing trend of bullying, which further reinforces the cycle of stress, depression and suicide. I think this insecurity is also expressed through the very media used to communicate and function in everyday life. By this I mean the intensification of information technologies in domestic and personal life, what Paul Virilio calls a ‘tele-present’ world. From home computing for leisure, to the internet, hand-held communication devices and social networking sites, the last two decades have seen an unprecedented intensification of technologies that continuously connect users to hyperactive news streams and a disembodied form of social interaction, whose psychosocial norms deserve deeper analysis.

Consider the panic of losing a mobile phone, of having no access at all to the internet, to one’s games, movies, photos, or common nodes of social interaction that we call our friends or followers. A power-cut, a burglary… would it be wrong to call them addictions? Yet we have neither selected this basis of social organisation, nor should we guiltily consider ourselves lucky first-worlders gorging gluttonously on the backs of the deprived billions. Whilst digitised technologies have abstracted and placed many cultural forms on a single homogeneous platform, personal technologies keep the worker connected and potentially labouring at all hours, in ways that operate, at a minute level, the exchanges and processes that neoliberal capitalism requires to function. Against such a backdrop, our politicians, the public face of neoliberal capitalism, cajole us through fear and envy to keep up our duty as citizens: spend, borrow, buy, 24/7, 365 days a year, be it Christmas, Valentine’s or whatever; one must never shirk in one’s duty to service the economy.

The medical establishment has also transformed its understanding of rising anxiety. The Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association has, since the publication of DSM III in 1980, been considered the authoritative index of mental disorders, codified within a system of diagnostic management. The new fifth edition of 2013 describes ‘Generalized Anxiety Disorder’ as ‘excessive anxiety and worry’, which the individual experiences and finds difficult to control for more days than not for at least six months. It is an uncontrollable worry that largely dominates the sufferer’s time, and usually defined by three or more symptoms, including ‘restlessness’, ‘being easily fatigued’, ‘difficulty concentrating or mind going blank’, ‘irritability’, ‘muscle tension’, and ‘sleep disturbance’. General anxiety concerns an excessive and painful ‘apprehensive expectation’ for an uncertain future event, rather than of the present, as in fear. The disorder isn’t simply a reflection of an individual struggling against unusual duress, but extends to an anxiety about even the most mundane of things, like completing household chores, being late for appointments, or of one’s inadequate performance as a worker or friend.

Depressed Britain

I wonder if the DSM-VI will propose it on a collective scale? These symptoms describe those of the precarious worker, exhausted, fed up, yet compelled to stay awake just to finish a little more work from home, screens stained by old microwave meals, spilt coffee and reminder notes about looming dates, gym sessions and so on. Depression and exhaustion are endemic and act as marks of an affective and immaterial economy where employment is now to be found in the services — retail, leisure, call-centres, cleaning, childcare, sex work — where an inflated mood, one indeed of motivation, is required, as the recent attention to ‘affect’ in critical theory is making clear. Individuality becomes another part of the service worker’s uniform. Recent reports detail increasing depression and anxiety: a 2003 survey by the American Medical Association (AMA) found that 10% of 15-54 year olds surveyed in the US had had an episode of ‘major depression’ in the last 12 months, and 17% over the course of their lifetimes; a figure echoing the 15.1% found in the UK to be suffering from ‘common mental disorders’ (stress, anxiety and depression) by the NHS’s most recent 2007 adult survey.

Further, women were twice as likely to suffer from depression in both the AMA and NHS Surveys — the 15.1% average comes from 12.5% in men, 19.7% in women (the real unrecorded numbers are probably higher, and this is still the most recent survey, based on symptoms in the last seven days). The NHS Survey also found that self-harm and suicidal behaviours in women had increased since 2000, with ‘being female’ at one point listed by the survey as a source of depression, without irony or sociological comment. Finally, one-fifth of all working days in Britain are estimated as lost due to anxiety and depression forcing workers to take time off, a very shaky estimate given the stigma and perceived weakness of openly telling managers of mental health problems; but given the current prospect of increasing working hours in Britain as labour regulations are further ‘liberalised’, this anxiety will only continue.

Beyond the obvious observations of one’s surroundings, the evidence on these common mental disorders suggests that living standards are declining and affecting men and women differently, with a high suicide rate amongst men on the one hand — suicide is the single biggest cause of death in men aged 15-34 in England and Wales — and a higher incidence of depression among women on the other. Recent employment statistics demonstrate that women have been adversely affected by the large redundancies within the public services in the UK following the neoliberal austerity cuts, with a 2011 TUC report finding female unemployment had risen 0.5 points to its highest level since 1988. Single-parent families are largely led by women, who are struggling with reduced welfare support, inflation and reduced employment opportunities, while being continually demonised by the right-wing media and Conservative governments as ‘feckless’ and irresponsible. Austerity becomes the state of exception of British neoliberalism, with the need for deficit cuts being used both by Thatcher and succeeding governments to further reduce welfare and support services whilst justifying wage freezes and unemployment, which adversely affect women.

Age of Anxiety?

Yet rather than restrict the medicalisation of social issues or universal experiences of human life, the DSM-5 instead created a number of new disorders, like ‘disruptive mood dysregulation disorder’ for temper tantrums and other wilful behaviour, and extended ‘major depressive disorder’ to include bereavement, against the advice and review of much of the medical establishment, including the producers of previous DSM manuals, who already have much to answer for. (But the influence of the DSM should not be overstated: beyond the USA, many countries, like the UK, instead use the WHO International Classification of Diseases.) No doubt major pharmaceutical companies will not fail in honourably and dispassionately servicing such individual maladies, and others besides discovered in 2013.

In the UK antidepressant usage is rising year on year, more than any other prescribed item. Prescriptions tend to be highest in areas of greater social deprivation (particularly northern towns like Blackpool, Barnsley and Redcar), but with over 50 million such prescriptions dispensed in England alone in 2012, an increase of 7.5% on the previous year, their usage has become democratically universal. The OECD have found that mental health problems now cost the UK economy £70 billion a year, or 4.5% of GDP, primarily through productivity losses and disability payments. Concerned only for economic growth, even the world’s “smartest men” — the neoliberal economists — are starting to doubt the credibility of the UK’s recovery, with more workers reporting mental health disability (just under 40% of all new disability claims) than in any other developed country. By 2020, the WHO predicts that depressive disorders will be the leading cause of disability and disease burden across the globe. Researchers have found that a poor material standard of living accounted for nearly 25% of cases of common mental disorder in 1998, a figure which, given increasing poverty, debt and social inequality, will surely have risen.

So is ours an age of anxiety? Previous generations have also claimed this thorny crown, particularly those ravaged by social and economic inequalities, like the 1930s. Yet it is in these last few years more than most that anxiety, precarity, crisis and burnout have become regular keywords, and that continuous productivity, connectivity and alertness have come to be demanded at all hours. To anyone who values the lives of other human beings over the growth of stocks, shares and tax-free profits, this situation should be appalling. It will also worsen. To continue insisting that the mass breakdown of workers into malfunctioning anxiety machines is down to some failure of the individual is either callous or blind. As a collective that prizes its own freedom and happiness, then, what is to be done? That old question we always ask and to which, like chronic depressives, we can never resolutely commit to any sure answer. Perhaps, like guerrilla fighters, we might re-purpose the underlying controls identified here into weapons of change? Not through some juvenile dream of accelerating the contradictions of capitalism, nor through the perverse belief that university-press-published critical jargon will undo the grip of neoliberal ideology on the lives and hopes of the majority. No, no.

I wonder instead if we might take a cue from the cod-psychology of Neuro-Linguistic Programming (NLP), popular in management and everyday life. Though rightly discredited by many experts, NLP holds that human experience is entirely based on the individual’s cognitive ‘map’ of the world, and that the mind can be re-programmed to visualise and experience more positive ‘frames’ of mind. It has been adapted for mass audiences, with great success, by TV hypnotist and celebrity advisor Paul McKenna. It shares a number of underlying premises with Cognitive Behavioural Therapy, now one of the most common non-pharmaceutical mental health treatments in Britain, like the onus on the individual to internally manage and remedy negative associations and behaviours. Common to NLP’s treatments of individual problems is the requirement to visualise and step into a positive future persona. By adopting and embodying more proactive states, usually by mimicking ‘successful’ figures or a more positive projected self-image, the individual gains confidence and power over themselves and their surroundings. Now what if we were to discard NLP’s neoliberal emphasis on the individual imagining a more positive future self, in favour of a collective imagining of a more positive future society? What would the coherent visualisation and supposition of such a society look like, feel like? A society where equality, liberty and justice were fully supported by institutions of democratic political organisation that meaningfully gave citizens power and effectively safeguarded against corruption or military/police abuses? What would the features of these institutions be?

Collective Desire

Our brains and backs are tense and tired, our minds shattered and nerves shot by managers’ increasing demands to do the impossible: increase our productivity, when what is produced is less necessary and of worse quality than before. The demand everywhere is the same: do it more, do it quicker, do it better! Never are we to act, think, feel or simply be for the good itself. Part of the problem is that the good itself is never presented or introduced; its possibility is unthinkable. By the good I mean a sense of future, not just for oneself, working against difficult circumstances to survive or succeed, but a quality of life that all can democratically produce and enjoy together. Where the needs of society determine economic activity, and where an ethics of public service determines political conduct. Where the qualities of a flourishing society, like unions, welfare, asylum, populism, social service, council, and public, are no longer pejorative terms. Instead, people with good intentions on the Left have become confused, like two lost travellers fighting over the interpretation of a map: either pointing blame at other activists, cynically taking the dollar of private finance for short-term gain, or simply giving in. The exhaustion and depression that some activists feel also mirrors this wider dislocation of hope, the good, and a future from our myriad cognitive maps.

As the basis of our future political activity, we should begin by thinking about what is possible and what is desired. Transforming the way we work, live together, understand ourselves, and communicate with each other will require brave new ideas that adapt the benefits of these technologies to the prior wellbeing and welfare of each of us collectively. It won’t be easy. But given that anxiety disorders, suicides and wider mental health problems are rising and becoming normalised to a fairly terrifying extent, I think it’s fair to give them a political explanation. Rising anxiety disorders are connected to the growing pressure on workers to increase their productivity. They are encouraged by the growth of working from home and of smartphone technology, which irrevocably blur the work/life distinction. They are encouraged by the growing power (and pay) conferred on managers, by the rapid decline of workers’ rights, and by the waning ability of trade unions to legally resist these. They are an effect of the collapsing infrastructure of our communities and the loss of support services that once could help. What could be done, therefore, is to reverse each of these in turn: to challenge and, as I dream it possible, overthrow governments that act only in the interests of large businesses; to fight for things like a fixed working day, a living wage, and massive increases in the resources given to support mental health problems; to discuss these things more openly, too, since exhaustion is increasingly the norm; and then to politicise these experiences, and begin to dream together and work together to produce the kind of society where mass depression and collective anxiety are banished.

Today we are spent and we are broke, fit only for a few decades of underpaid labour before being cast aside by the markets as unproductive fodder. Things will worsen unless we politicise anxiety and depression, and start the fight to prioritise the welfare of our societies. Many of us feel paralysed, buckling under the pressure to keep it all together but knowing that the way we work — and live — is damaging us and our relationships. Globalisation of neoliberal political and economic practices is now creating an equality of insecurity and misery for all people, particularly the young, who have little chance of getting even a pension or affordable care in their final years. In such a moment of over-extended transition, where the credibility and legitimacy of the 0.01% has never looked weaker, what future is ours? It will be the future that we dream of, that we refuse to abandon, and that we cannot possibly entrust to the deceptive economic motives of our undemocratic elites. Societies must express collective desire or they will not be at all.

Link: Rural > City > Cyberspace

A series of psychological studies over the past 20 years has revealed that after spending time in a quiet rural setting, close to nature, people exhibit greater attentiveness, stronger memory, and generally improved cognition. Their brains become both calmer and sharper. The reason, according to attention restoration theory, or ART, is that when people aren’t being bombarded by external stimuli, their brains can, in effect, relax. They no longer have to tax their working memories by processing a stream of bottom-up distractions. The resulting state of contemplativeness strengthens their ability to control their mind.

The results of the most recent such study were published in Psychological Science at the end of 2008. A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous and mentally fatiguing series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.

The researchers then conducted a similar experiment with another set of people. Rather than taking walks between the rounds of testing, these subjects simply looked at photographs of either calm rural scenes or busy urban ones. The results were the same. The people who looked at pictures of nature scenes were able to exert substantially stronger control over their attention, while those who looked at city scenes showed no improvement in their attentiveness. “In sum,” concluded the researchers, “simple and brief interactions with nature can produce marked increases in cognitive control.” Spending time in the natural world seems to be of “vital importance” to “effective cognitive functioning.”

There is no Sleepy Hollow on the internet, no peaceful spot where contemplativeness can work its restorative magic. There is only the endless, mesmerizing buzz of the urban street. The stimulations of the web, like those of the city, can be invigorating and inspiring. We wouldn’t want to give them up. But they are, as well, exhausting and distracting. They can easily, as Hawthorne understood, overwhelm all quieter modes of thought. One of the greatest dangers we face as we automate the work of our minds, as we cede control over the flow of our thoughts and memories to a powerful electronic system, is the one that informs the fears of both the scientist Joseph Weizenbaum and the artist Richard Foreman: a slow erosion of our humanness and our humanity.

It’s not only deep thinking that requires a calm, attentive mind. It’s also empathy and compassion. Psychologists have long studied how people experience fear and react to physical threats, but it’s only recently that they’ve begun researching the sources of our nobler instincts. What they’re finding is that, as Antonio Damasio, the director of USC’s Brain and Creativity Institute, explains, the higher emotions emerge from neural processes that “are inherently slow.” In one recent experiment, Damasio and his colleagues had subjects listen to stories describing people experiencing physical or psychological pain. The subjects were then put into a magnetic resonance imaging machine and their brains were scanned as they were asked to remember the stories. The experiment revealed that while the human brain reacts very quickly to demonstrations of physical pain – when you see someone injured, the primitive pain centers in your own brain activate almost instantaneously – the more sophisticated mental process of empathizing with psychological suffering unfolds much more slowly. It takes time, the researchers discovered, for the brain “to transcend immediate involvement of the body” and begin to understand and to feel “the psychological and moral dimensions of a situation.”

The experiment, say the scholars, indicates that the more distracted we become, the less able we are to experience the subtlest, most distinctively human forms of empathy, compassion, and other emotions. “For some kinds of thoughts, especially moral decision-making about other people’s social and psychological situations, we need to allow for adequate time and reflection,” cautions Mary Helen Immordino-Yang, a member of the research team. “If things are happening too fast, you may not ever fully experience emotions about other people’s psychological states.” It would be rash to jump to the conclusion that the internet is undermining our moral sense. It would not be rash to suggest that as the net reroutes our vital paths and diminishes our capacity for contemplation, it is altering the depth of our emotions as well as our thoughts.

There are those who are heartened by the ease with which our minds are adapting to the web’s intellectual ethic. “Technological progress does not reverse,” writes a Wall Street Journal columnist, “so the trend toward multitasking and consuming many different types of information will only continue.” We need not worry, though, because our “human software” will in time “catch up to the machine technology that made the information abundance possible.” We’ll “evolve” to become more agile consumers of data. The writer of a cover story in New York magazine says that as we become used to “the 21st-century task” of “flitting” among bits of online information, “the wiring of the brain will inevitably change to deal more efficiently with more information.” We may lose our capacity “to concentrate on a complex task from beginning to end,” but in recompense we’ll gain new skills, such as the ability to “conduct 34 conversations simultaneously across six different media.” A prominent economist writes, cheerily, that “the web allows us to borrow cognitive strengths from autism and to be better infovores.” An Atlantic author suggests that our “technology-induced ADD” may be “a short-term problem,” stemming from our reliance on “cognitive habits evolved and perfected in an era of limited information flow.” Developing new cognitive habits is “the only viable approach to navigating the age of constant connectivity.”

These writers are certainly correct in arguing that we’re being molded by our new information environment. Our mental adaptability, built into the deepest workings of our brains, is a keynote of intellectual history. But if there’s comfort in their reassurances, it’s of a very cold sort. Adaptation leaves us better suited to our circumstances, but qualitatively it’s a neutral process. What matters in the end is not our becoming but what we become. In the 1950s, Martin Heidegger observed that the looming “tide of technological revolution” could “so captivate, bewitch, dazzle, and beguile man that calculative thinking may someday come to be accepted and practiced as the only way of thinking.” Our ability to engage in “meditative thinking,” which he saw as the very essence of our humanity, might become a victim of headlong progress. The tumultuous advance of technology could, like the arrival of the locomotive at the Concord station, drown out the refined perceptions, thoughts, and emotions that arise only through contemplation and reflection. The “frenziedness of technology,” Heidegger wrote, threatens to “entrench itself everywhere.”

It may be that we are now entering the final stage of that entrenchment. We are welcoming the frenziedness into our souls.

Link: Forever Alone: Why Loneliness Matters in the Social Age

I got up and went over and looked out the window. I felt so lonesome, all of a sudden. I almost wished I was dead. Boy, did I feel rotten. I felt so damn lonesome. I just didn’t want to hang around any more. It made me too sad and lonesome.

— J.D. Salinger, The Catcher in the Rye

Loneliness was a problem I experienced most poignantly in college. In the three years I spent at Carnegie Mellon, the crippling effects of loneliness slowly pecked away at my enthusiasm for learning and for life, until I was drowning in an endless depressive haze that never completely cleared until I left Pittsburgh.

It wasn’t for lack of trying either. At the warm behest of the orientation counselors, I joined just the right number of clubs, participated in most of the dorm activities, and tried to expand my social portfolio as much as possible.

None of it worked.

To the extent that I sought out CAPS (our student psych and counseling service) for help, the platitudes they offered as advice (“Just put yourself out there!”) only served to confirm my suspicion that loneliness isn’t a very visible problem. (After all, the cure for loneliness isn’t exactly something that could be prescribed. “Have you considered transferring?” they finally suggested, after exhausting their list of thought-terminating clichés. I graduated early instead.)

As prolonged loneliness took its toll, I became very unhappy—to put it lightly—and even in retrospect I have difficulty pinpointing a specific cause. It wasn’t that I didn’t know anyone or failed to make any friends, and it wasn’t that I was alone more than I liked.

Sure, I could point my finger at the abysmally fickle weather patterns of Pittsburgh, or the pseudo-suburban bubble that envelops the campus. There might even be a correlation between my academic dissonance with computer science and my feelings of loneliness. I might also just be an extremely unlikable person.

For whatever the reason (or a confluence thereof) the reality remained that I struggled with loneliness throughout my time in college.


I recall a conversation with my friend Dev one particular evening on the patio of our dormitory. It was the beginning of my junior and last year at CMU, and I had just finished throwing an ice cream party for the residents I oversaw as an RA.

“Glad to be back?” he asked as he plopped down on a lawn chair beside me.

“No, not really.”

The sun was setting, and any good feelings about the upcoming semester with it. We made small talk about the school in general, as he had recently transferred, but eventually Dev asked me if I was happy there.

“No, not really.”

“Why do you think you’re so miserable here?”

“I don’t know. A lot of things, I guess. But mostly because I feel lonely. Like I don’t belong, like I can’t relate to or connect with anyone on an emotional level. I haven’t made any quality relationships here that I would look back on with any fond memories. Fuck… I don’t know what to do.”

College, at least for me, was a harrowing exercise in how helplessly debilitating, hopelessly soul-crushing, and at times life-threatening loneliness could be. It’s a problem nobody talks about, and it’s been a subject of much personal relevance and interest.

Loneliness as a Health Problem

A recent article published on Slate outlines the hidden dangers of social isolation. Chronic loneliness, as Jessica Olien discovered, poses serious health risks that not only impact mental health but physiological well-being as well.

The lack of quality social relationships in a person’s life has been linked to an increased mortality risk comparable to that of smoking and alcohol consumption, one that exceeds the influence of other risk factors like physical inactivity and obesity. It’s hard to brush off loneliness as a character flaw or an ephemeral feeling when you realize it kills more people than obesity.

Research also shows that loneliness diminishes sleep quality and impairs physiological function, in some cases reducing immune function and boosting inflammation, which increases risk for diabetes and heart disease.

Why hasn’t loneliness gotten much attention as a medical problem? Olien shares the following observation:

As a culture we obsess over strategies to prevent obesity. We provide resources to help people quit smoking. But I have never had a doctor ask me how much meaningful social interaction I am getting. Even if a doctor did ask, it is not as though there is a prescription for meaningful social interaction.

As a society we look down upon those who admit to being lonely; we cast them out and ostracize them with labels like “loners,” so that they hide behind shame and doubt rather than speak up. This dynamic only makes it harder to devise solutions to what is clearly a larger societal issue, and it certainly calls into question the effects of culture on our perception of loneliness as a problem.

Loneliness as a Culture Problem

Stephen Fry, in a blog post titled Only the Lonely which explains his suicide attempt last year, describes in detail his struggle with depression. His account offers a rare and candid glimpse into the reality of loneliness with which those afflicted often hide from the public:

Lonely? I get invitation cards through the post almost every day. I shall be in the Royal Box at Wimbledon and I have serious and generous offers from friends asking me to join them in the South of France, Italy, Sicily, South Africa, British Columbia and America this summer. I have two months to start a book before I go off to Broadway for a run of Twelfth Night there.

I can read back that last sentence and see that, bipolar or not, if I’m under treatment and not actually depressed, what the fuck right do I have to be lonely, unhappy or forlorn? I don’t have the right. But there again I don’t have the right not to have those feelings. Feelings are not something to which one does or does not have rights.

In the end loneliness is the most terrible and contradictory of my problems.

In the United States, approximately 60 million people, or 20% of the population, feel lonely. According to the General Social Survey, between 1985 and 2004, the number of people with whom the average American discusses important matters decreased from three to two, and the number with no one to discuss important matters with tripled.

Modernization has been cited as a reason for the intensification of loneliness in every society around the world, attributed to greater migration, smaller household sizes, and a larger degree of media consumption.

In Japan, loneliness is an even more pervasive, layered problem mired in cultural parochialisms. Gideon Lewis-Kraus pens a beautiful narrative on Harper’s in which he describes his foray into the world of Japanese co-sleeping cafés:

“Why do you think he came here, to the sleeping café?”

“He wanted five-second hug maybe because he had no one to hug. Japan is haji culture. Shame. Is shame culture. Or maybe also is shyness. I don’t know why. Tokyo people … very alone. And he does not have … ” She thought for a second, shrugged, reached for her phone. “Please hold moment.”

She held it close to her face, multitouched the screen not with thumb and forefinger but with tiny forefinger and middle finger. I could hear another customer whispering in Japanese in the silk-walled cubicle at our feet. His co-sleeper laughed loudly, then laughed softly. Yukiko tapped a button and shone the phone at my face. The screen said COURAGE.

It took an enormous effort for me to come to terms with my losing battle with loneliness and the ensuing depression at CMU, and an even greater leap of faith to reach out for help. (That it was to no avail is another story altogether.) But what is even more disconcerting to me is that the general stigma against loneliness and mental health issues, hinging on an unhealthy stress culture, makes it hard for afflicted students to seek assistance at all.

As Olien puts it, “In a society that judges you based on how expansive your social networks appear, loneliness is difficult to fess up to. It feels shameful.”

To truly combat loneliness from a cultural angle, we need to start by examining our own fears about being alone and to recognize that as humans, loneliness is often symptomatic of our unfulfilled social needs. Most importantly, we need to accept that it’s okay to feel lonely. Fry, signing off on his heartfelt post, offers this insight:

Loneliness is not much written about (my spell-check wanted me to say that loveliness is not much written about—how wrong that is) but humankind is a social species and maybe it’s something we should think about more than we do.

Loneliness as a Technology Problem

Technology, and by extension media consumption in the Internet age, adds the most perplexing (and perhaps the most interesting) dimension to the loneliness problem. As it turns out, technology isn’t necessarily helping us feel more connected; in some cases, it makes loneliness worse.

The amount of time you spend on Facebook, as a recent study found, is inversely related to how happy you feel throughout the day.

Take a moment to watch this video.

It’s a powerful, sobering reminder that our growing dependence on technology to communicate has serious social repercussions, to which Cohen presents his central thesis:

We are lonely, but we’re afraid of intimacy, while the social networks offer us three gratifying fantasies: 1) That we can put our attention wherever we want it to be. 2) That we will always be heard. 3) That we will never have to be alone.

And that third idea, that we will never have to be alone, is central to changing our psyches. It’s shaping a new way of being. The best way to describe it is:

I share, therefore I am.

Public discourse on the cultural ramifications of technology is certainly not a recent development, and the general sentiment that our perverse obsession with sharing will be humanity’s downfall continues to echo in various forms around the web: articles proclaiming that Instagram is ruining people’s lives, the existence of a section on Reddit called cringepics where people congregate to ridicule things others post on the Internet, the increasing number of self-proclaimed “social media gurus” on Twitter, to name a few.

The signs seem to suggest we have reached a tipping point for “social” media that’s not very social on a personal level, but whether it means a catastrophic implosion or a gradual return to more authentic forms of interpersonal communications remains to be seen.

While technology has been a source of social isolation for many, it has the capacity to alleviate loneliness as well. A study funded by the online dating site eHarmony shows that couples who met online are less likely to divorce and report greater marital satisfaction than those who met in real life.

The same model could potentially be applied to friendships, and it’s frustrating to see that there aren’t more startups leveraging this opportunity when the problem is so immediate and in need of solutions. It’s a matter of exposure and education on the truths of loneliness, and unfortunately we’re just not there yet.


The perils of loneliness shouldn’t be overlooked in an increasingly hyperconnected world that often tells another story through rose-tinted lenses. Rather, the gravity of loneliness should be addressed and brought to light as a multifaceted problem, one often muted and stigmatized in our society. I learned firsthand how painfully real a problem loneliness can be, and more should be done to spread awareness of it and to help those affected.

“What do you think I should do?” I looked at Dev as the last traces of sunlight teetered over the top of Morewood Gardens. It was a rhetorical question—things weren’t about to get better.

“Find better people,” he replied.

I offered him a weak smile in return, but little did I know then how prescient those words were.

In the year that followed, I started a fraternity with some of the best kids I’d come to know (Dev included), graduated college and moved to San Francisco, made some of the best friends I’ve ever had, and never looked back, if only to remember, and remember well, that it’s never easy being lonely.

Link: Reason Displaces All Love

Libidinal economizing in the early Soviet Union.

“She had suffered an acute attack of ‘love,’ the name given to a disease of ancient times when sexual energy, which should be rationally distributed over one’s entire lifetime, is suddenly concentrated into one inflammation lasting a week, leading to absurd and incredible behavior.” —Vladimir Mayakovsky, The Bedbug

In summer 1956, six tons of books were thrown by court order into the public incinerator on 25th Street in New York City. Those smouldering pages were written by Wilhelm Reich, who died in jail shortly thereafter, infamously denounced as the fraudulent peddler of “orgone,” a mystical cosmic life force. As a young communist psychoanalyst in interwar Vienna, Reich had argued that capitalism unhealthily restrains primal sexual instincts, and that a genuine political revolution would shatter the constraints of bourgeois sexual morality, unleashing sexual energies through a kind of wild orgasmic release.

In 1929, Reich visited the Soviet Union, where psychoanalysis would soon be outlawed, and was rather scathing of the psychologists he met there, including one of his hosts, Aron Zalkind, a leading figure in the psychological community in Moscow. Zalkind was the author of the influential treatise “12 Commandments for the Sexual Revolution of the Proletariat,” first published in 1925, which argued that the capitalist free market was incompatible with what he somewhat confusingly called “free love,” given that he meant something like the opposite of what it meant in the 1960s. Unlike Reich, whose prurient embrace of unrestrained lovemaking was to be enthusiastically championed during the “sexual revolution” of the 1960s, Zalkind advocated sexual abstinence as the appropriate conduct for the revolutionary proletariat.

During the period of the New Economic Policy (1921–1928), which saw the reintroduction of certain forms of private enterprise into the Soviet economy, sexual relations were being renegotiated for both ideological and practical reasons. As the heroine of Feodor Gladkov’s 1925 novel Cement observes: “Everything is broken up and changed and become confused. Somehow love will have to be arranged differently.” But how exactly love was to be arranged was unclear. Although the fledgling Soviet government had legalized divorce and abortion, secularized marriage, and decriminalized homosexuality, and although women’s roles in the home and workforce were being concretely transformed, Zalkind’s emphasis on sexual inhibition is characteristic of the ambivalence toward sex during the NEP period.

Zalkind’s commandments were as follows:

  1. Sexuality should not develop too early.
  2. Sex should not occur before marriage.
  3. Sex on the basis of pure physical attraction should be renounced.
  4. Sex should only result from “deep and complex feeling” between comrades.
  5. Sex should be infrequent.
  6. Sexual partners should not be changed too frequently.
  7. Sexual relationships should be monogamous.
  8. Every sex act should be committed with the awareness that it might lead to the birth of a child.
  9. Sexual partners should be selected on the basis of class. (“Sexual attraction to a class antagonist, to a morally disgusting, dishonest object, is as perverse as the sexual desire of a human for a crocodile or an orangutan.”)
  10. There should be no jealousy.
  11. There should be no “sexual perversions.”
  12. In the interests of the revolution, it is the duty of the proletariat to intervene in the sex lives of others.

Zalkind relies on an economic, quantitative conception of psychic sexual energy or libido borrowed from Freud. In the interest of self-preservation, the fragile organism must protect itself from both external and internal excitations, and the constant tension between pleasure and unpleasure must be regulated through sublimation, repression, and cathexis. Or in Zalkind’s inelegant phrasing, “The body is stuffed with a certain amount of energy, a certain amount of internal stress and excitement, which erupts on the outside.”

In The Future of an Illusion — the last of Freud’s works to appear in Russian translation, in 1930, with a hostile introduction by Zalkind — Freud is dismissive of those who would claim that “a reordering of human relations” might overcome the necessarily repressive character of society, stating that “every civilization must be built up on coercion and renunciation of instinct” (though he explicitly declares that his conclusions are not intended as a comment on the “great experiment in civilization” occurring in Russia). Unlike Reich, Zalkind does not contradict Freud on this point. He may imagine repression and sublimation as conscious, voluntary, and collective, but he insists that communism cannot be built without forgoing immediate gratification. The oft-repeated Soviet injunction to make sacrifices in the present to reap the eventual benefits of the bright Communist future corresponds to Freud’s reality principle, defined in Beyond the Pleasure Principle as the “temporary toleration of unpleasure as a step on the long indirect road to pleasure.”

Freud argued that giving the instincts free rein would be dangerous. Civilization is a by-product of repressed instincts rather than the result of some immanent tendency toward progress or perfectibility. Zalkind, by assuming that renouncing pleasure will ultimately lead to a superior form of society, makes a more explicitly value-laden argument: Sex too much, too soon, too often, or with too many people diverts energy that could otherwise be used for building the new Communist society. For Zalkind, sexual desire does not originate in the seething depths of the primitive unconscious. Sex is morally rather than mortally dangerous; it is wasteful and frivolous rather than primal and destructive.

In Freud’s theory, the regulation of psychic energy remains largely metaphorical. But Zalkind insists that Freudian theory has a materialist essence; his more literal conception of energy thus has a closer relation to contemporary discussions of labor efficiency and industrial production. In tune with this infamously Taylor-obsessed period, Zalkind focuses on management, rationality, organization, and discipline.

But if under capitalism, energy expenditure is primarily concerned with maximized productivity and profitability in the workplace, in communism all human activity is up for grabs, including people’s most intimate encounters. Any unnecessary exertion might divert resources that could otherwise be spent building the new classless society. Zalkind’s quantification of energy allows for the commensurability of action. As historian Anson Rabinbach puts it in The Human Motor: Energy, Fatigue, and the Origins of Modernity, “Energy is the universal equivalent of the natural world, as money is the universal equivalent of the world of exchange.”

Building barricades, constructing dams, designing factories, or fucking your comrades — all activities are reduced to the amount of energy required to perform them. Zalkind imagines a scenario in which a worker is insulted by his boss. Such an event, he claims, produces a fixed volume of anger, which will inevitably “break out”: The worker might erupt and throw a plate at his wife. But instead, the energy could be positively channelled into organizing a demonstration or distributing agitational pamphlets.

Zalkind’s vision recalls Yevgeny Zamyatin’s 1921 dystopian novel We, in which controlled copulation can be performed only during the allotted “sex hour,” when people are permitted to lower the curtains in their glass homes, and encounters must be tracked with a pink ration book of signed tokens. But these concerns were not confined to the pages of science fiction: Some married couples in the period actually attempted to organize their domestic chores and sex lives on the basis of the Scientific Organization of Labor.

Despite his likening of the libido to a flowing liquid, Freud’s conception of the unconscious knows no spatial constraints – quantity has no meaningful existence there. In bourgeois Vienna, there is no suggestion that a patient’s libidinal resources might simply run out; their sexual drives are understood in relation to their historical experiences rather than their physical well-being.

But in post-revolutionary Russia there was a genuine fear that people were literally running out of energy. Zalkind’s anxieties about squandering libidinal currency rely on a physiological understanding of energy developed amid acute privation. “Exhaustion” was rife among revolutionaries; Lenin’s death in 1924 from a brain hemorrhage was said to have been provoked by his excessive exertions on behalf of the global revolutionary proletariat. Hunger, often accompanied by energy-sapping cold, gnaws insistently in first-hand accounts of the period. Revolution and Youth, the book in which Zalkind’s proclamations were originally published, includes detailed nutritional charts to ensure revolutionaries retain optimal “brain fuel.” Victor Serge’s Memoirs of a Revolutionary constantly returns to the subject of food (or lack of it), its pages strewn with paltry, unappetizing morsels. Stoic revolutionaries survive on black bread, dried fish, coffee made from raw oats, rotten horsemeat, and the odd spoonful of sugar. This nutritional dearth had sexual implications: As a result of malnutrition, impotence was widespread.

Link: Does Money Make People Right-Wing and Inegalitarian? A Longitudinal Study of Lottery Winners

The causes of people’s political attitudes are largely unknown. We study this issue by exploiting longitudinal data on lottery winners. Comparing people before and after a lottery windfall, we show that winners tend to switch towards support for a right-wing political party and to become less egalitarian. The larger the win, the more people tilt to the right. This relationship is robust to (i) different ways of defining right-wing, (ii) a variety of estimation methods, and (iii) methods that condition on the person previously having voted left. It is strongest for males. Our findings are consistent with the view that voting is driven partly by human self-interest. Money apparently makes people more right-wing.

1. Introduction

Voting is the foundation of modern democracy. The causal roots of people’s political preferences, however, are imperfectly understood. One possibility is that individuals’ attitudes to politics and redistribution are motivated by deeply ethical views. Another possibility—perhaps the archetypal economist’s presumption—is that voting choices are made out of self-interest and then come to be embroidered in the mind with a form of moral rhetoric. Testing between these two alternative theories is important intellectually. It is also inherently difficult. That is because so many of our attitudes as humans could stem from early in life and are close to being, in the eyes of the econometrician, a person fixed-effect.

This study proposes a new empirical test. It provides longitudinal evidence consistent with the second, and some might argue more jaundiced, view of human beings. We exploit a panel data set in which people’s political attitudes are recorded annually. In the data set, some individuals serendipitously receive lottery windfalls. We find that the larger is their lottery win, the greater is that person’s subsequent tendency, after controlling for other influences, to switch their political views from left to right. We also provide evidence that lottery winners are more sympathetic to the belief that ordinary people ‘already get a fair share of society’s wealth’.

Access to longitudinal information gives us advantages denied to most previous researchers on this topic. It is possible to observe people’s political attitudes before and after events. Although panel data cannot resolve every difficulty of establishing cause-and-effect relationships, they in general allow sharper testing than do cross-section data. Our inquiry combines panel data with a randomized-income element that stems from the nature of lottery windfalls. This study is thus robust to the concern that omitted fixed-effect factors might explain the different political attitudes of large and small winners. One reason this is important is because it seems plausible that personality might determine both the number of lottery tickets bought and the political attitudes of the person, and this might thereby lead to a possible spurious association between winning and right-leaning views. We provide, among other kinds of evidence, a simple graphical demonstration that winners disproportionately lean to the right having previously not been right-wing supporters.

The study draws upon a nationally representative sample from the British population. In the later regression equations we focus particularly upon a sub-sample of people (a fairly large proportion, given the lottery’s popularity in Great Britain) who have ever had a lottery win. Within this group, we are especially interested in the observed longitudinal changes in political allegiance of the bigger winners compared to the smaller winners. Our key information stems from 541 observations on lottery wins larger than 500 pounds.

2. Background

The fact that high income and right-wing views are positively correlated in a cross-section has been repeatedly documented in quantitative social science (recently, for example, by Brooks and Brady 1999 and Gelman et al. 2007 in US data, and by Evans and Tilley 2012 in British data). A somewhat analogous result is reported, using quite different kinds of methods, in Karabarbounis (2011). The difficulty is to know how to interpret this famous correlation of political science. Is it truly cause-and-effect, and if so in what direction?

Our inquiry fits into a modern literature that tries to distinguish causal from correlational relationships in people’s voting patterns. Dunning (2008) describes the methodological ideas in the early literature. Examples of recent contributions are Erikson and Stoker (2011), who look at the influence of Vietnam lottery-draft numbers, and Oswald and Powdthavee (2010), who study the longitudinal influence of having daughters rather than sons.

A particularly relevant study for our work is the cross-sectional paper of Doherty, Gerber and Green (2006). These researchers examine the political views of approximately 340 American lottery winners. Although the authors have point-in-time rather than longitudinal data, so are unable to observe switching, they document evidence of hostility among US lottery winners—compared to a set of selected control individuals—to certain kinds of taxation, especially to estate taxes. The authors do not test for a Republican/Democrat split; but they give some evidence, of a kind that is on the margin of statistical significance, that their lottery winners do not favour government-led redistribution. The Doherty et al. (2006) paper also gives a fine account of the strengths and potential weaknesses of lottery studies. As the authors explain, a difficulty with inference from cross-sections of lottery winners is that the winners who agree to take part in a study may not be identical to the ideal cross-section of control individuals who did not win.

More broadly, this paper fits within a tradition of work on the nature of endogenous preferences in human beings (see, for instance, Bowles 1998).

The stigmatized individual is asked to act so as to imply neither that his burden is heavy nor that bearing it has made him different from us; at the same time he must keep himself at that remove from us which assures our painlessly being able to confirm this belief about him. Put differently, he is advised to reciprocate naturally with an acceptance of himself and us, an acceptance of him that we have not quite extended to him in the first place. A phantom acceptance is thus allowed to provide the base for a phantom normalcy.
— Erving Goffman, Stigma: Notes on the Management of Spoiled Identity

Link: Kill Your Martyrs

Many of us have a habit of being overly credulous to stories that flatter our biases.

When I was 19, maybe 20, I took a sociology class at Middlesex Community Technical College in my hometown of Middletown, Connecticut. In the class, we read The Mole People, by Jennifer Toth. The book is an ostensibly nonfiction account of the destitute people in New York City who, driven by homelessness or mental illness or both, live underground in the labyrinthine tunnels that run under the pavement. The book is Toth’s narrative, personal and passionate, about her trips below the surface, where she befriended the people who scratched out desperate lives there. Over the course of many visits, some accompanied by a violent but sympathetic criminal she refers to as Blade, Toth explored these spaces and found not just people but something like community. Though they lived the most precarious of lives, the people Toth wrote about helped each other where possible and cobbled together some semblance of a functioning social space in the most improbable of locales. In the end, Toth flees the tunnels and Blade, frightened for her life but still amazed at what the destitute and forgotten have built underground.

At the time, it moved me deeply, and I needed to be moved. Toth didn’t pull punches about the desperation and risk that these people lived with, and in many ways the book served as an indictment of a New York City, and an America, in which the elect could live lives of affluence while poor and mentally ill people scratched out survival literally underneath them. But the book also seemed a testament to the human desire for community and the ways people can look out for one another. It was a lesson about the drive for a society built on mutual responsibility. The conditions these people faced made me depressed, but their dedication to improving each other’s lives brought me hope.

Unfortunately, it appears that very little of it was true.

Years after its publication, Joseph Brennan, a systems engineer at Columbia University, set out to verify the details of Toth’s book. He did this in a brutally efficient way: by investigating the physical architecture of the places Toth had claimed to visit. Brennan compared the places Toth describes in her book with the physical reality, visiting them himself and checking them against blueprints, maps, and plans. Again and again, he found the areas she described to be in reality substantially different or nonexistent. After presenting his evidence, Brennan writes, “Every fact in this book that I can verify independently is wrong.” A reporter got in touch with Toth following Brennan’s allegations, and her response, such as it was, almost amounts to an admission of guilt: hedging on her past descriptions, admitting she had been below the surface only two or three times, and referring the reporter to a woman who actually refuted Toth’s version of events.

The book’s dubious claim to being nonfiction has not dimmed its popularity. It is still in print. It carries no disclaimers. Dozens of Amazon reviews praise it, including several that describe it as truth that’s stranger than fiction.

Looking back, I felt stupid for having believed the book in the first place; what Toth described should have set off alarms even without any independent vetting. (And, really … “Blade”?) But the truth is, I didn’t see any problem with it precisely because I was so invested in its vision of destitute people coming together for their mutual good.

Many of us have a habit of being overly credulous to stories that flatter our biases.

A few years before I read The Mole People, I read, in a high school class, I, Rigoberta Menchú, the personal narrative of a Guatemalan woman of indigenous descent who endured the horrors of the Guatemalan civil war. By that time, the book’s factual authenticity had been challenged, though neither we in the class nor our teacher seemed to know. Had I known at the time, in my teenaged righteousness, I would have been outraged. Now, I’m less sure. Even if the events that Menchú detailed were not supportable, the horrors in the book reflected the reality of Guatemala and what the United States had condoned and supported. The book was one of those rare vehicles for showing Americans recent crimes against humanity in which their government was complicit, but its factual inaccuracies became the instrument through which the larger, perfectly accurate story of Guatemala was dismissed.

You could turn it over in your mind again and again, and I have. What is the value of compelling and righteous political narrative if it comes at the expense of the facts?


So here are some facts, then.

On the night of October 6, 1998, in Laramie, Wyoming, Matthew Shepard, a gay 21-year-old student at the University of Wyoming, was brutally attacked. His killers, Aaron McKinney and Russell Henderson, drove him from a bar to a secluded area, where they ambushed him. He was tied to a fence, beaten, and pistol-whipped until his brain stem was crushed. The attack constituted not just murder but torture, the killers making special effort to ensure that Shepard suffered, even removing his shoes on a freezing-cold evening. Shepard lay hanging, likely brain dead but unquestionably suffering, for hours. He lingered for five days in the hospital before he succumbed to his injuries. His killers were apprehended, confessed, and were fairly tried and convicted of the murder. They will and should spend the rest of their lives in prison. Shepard left behind a grieving family, a shocked community, and a disgusted nation.

Beyond that, it seems, little is certain.

Those facts are among the few that Stephen Jimenez does not trouble in his meticulous, frequently maddening, and necessarily incomplete investigation of the Shepard murder, The Book of Matt. Jimenez spent the better part of a decade in Laramie investigating the killing, what precipitated it, and its aftermath. He interviewed, at length, members of the police who participated in the investigation, members of the legal teams involved in the trial, friends of Shepard’s and his attackers, and various community figures. His findings, the preliminary version of which was presented in an immediately notorious episode of ABC’s 20/20, are not kind to the received version of the Shepard story.

The story that has been ingrained in the public consciousness—and my own—is the perfect picture of a hate crime. A young gay man in a conservative town in a conservative state goes to a bar, where he chats with a couple of young local men. Maybe he flirts, maybe he just strikes up conversation, but in any event, they learn of his homosexuality. In a fit of gay panic, they lure him into their car, with the promise of a ride home, then betray his trust by robbing and murdering him, all because he was guilty of the sin of being gay.

This was always a leaky narrative. Laramie is not a uniquely hostile environment for a young gay man but a fairly progressive college town, as Robert Blanchard argued in a 1999 piece for Reason. But Jimenez’s argument goes much deeper. Based on the testimony he has collected, he argues that McKinney and Henderson in fact knew Shepard well, and his homosexuality was openly understood among the three of them. McKinney, Jimenez argues, had a history of homosexual encounters in his past, which certainly adds relevant context to a narrative of a gay bash, if true. With less certainty, Jimenez suggests that McKinney and Shepard had a sexual history. Most controversial of all, Jimenez argues that not only were McKinney and Henderson players in the Laramie drug scene (an uncontroversial claim) and under the influence of drugs on the night of the attack but that Shepard himself was a regular user of crystal meth and likely an occasional dealer. The galvanizing story of a cruel hate crime thus becomes instead a tangled narrative of sex and drugs and depression. No wonder the book has encountered so much resistance.

Much resistance, but shockingly little review. Despite its pedigree, publisher, and subject matter (Matthew Shepard’s murder was one of the most important political moments of the ’90s and without exaggeration can be said to have contributed tremendously to the fight for gay marriage that took place in the 2000s), the press has largely ignored The Book of Matt. To their credit, The Nation, The Advocate, and The Guardian have all run fair, appropriately critical considerations of Jimenez’s book, but bizarrely, there has been no review in the New York Times, The New York Review of Books, or The New Yorker. No consideration that I can find in The Atlantic or The New Republic. This silence suggests that many in the establishment media would simply rather not look too closely at the book or the events it describes.

When the major publications that define conventional wisdom fail to engage with a text, inevitably, partisan media rushes in to fill the vacuum. Jimenez’s book has been taken up by the right-wing press, which has prompted Luke Brinker, in an angry piece for Media Matters, to insist that Jimenez, who is himself gay, deliberately framed his book to perform a hatchet job on Shepard and undermine the gay rights movement. But the book actually attempts to reframe Shepard as a plausible human being, complex and fallible, rather than the secular saint he has been made into.

Brinker calls attention to Jimenez’s use of anonymous sources and sources whose credibility is suspect — arguably inevitable in an investigation of a crime involving drugs and sexual habits that took place 15 years ago. But if Jimenez got his reporting wrong, why has no one else attempted to do better reporting? The broad silence about this book plays into the hands of those on the right asserting some sort of gay media conspiracy.

Not that there isn’t any valid reason to criticize Jimenez’s claims. The corroboration for all of the tangled assertions in Jimenez’s book is inconsistent, as any reported history will be, and Jimenez’s readiness to accept the counternarrative frequently made me uncomfortable. Indeed, the book is a lesson in the seductiveness of opposing the common narrative, which leads Jimenez to undermine his case by eagerly overstating it. The book would be stronger if Jimenez were as willing to say “we don’t know what happened” as he is to suggest a controversial version of events, as when he suggests that McKinney and Shepard had a sexual relationship.

Fairly or not, given the incendiary nature of his charges and the constancy with which he injects himself into his reporting, the story about The Book of Matt was always going to be about Jimenez’s credibility. He does himself no favors with his departures from straight journalism. In a scathing review at ThinkProgress (which, like Brinker’s piece, largely avoids the actual factual controversies at the heart of the book), Alyssa Rosenberg points out that Jimenez announces at the beginning that he has engaged in some “slightly less stringent” methods. This makes it too easy to dismiss him in ways that are unhelpful.

The question is whether Jimenez meant to actively court this kind of criticism. Two things become clear to me as I read The Book of Matt: that Jimenez has undertaken an enormous effort to produce a sensationalistic but profoundly necessary piece of reporting, and that Jimenez deeply enjoys his position as a rabble-rouser and iconoclast. He participated in a series of video interviews about his book for Andrew Sullivan’s blog The Dish, one of the few prominent outlets to give him a forum to defend his work. At times he acquits himself well; at others, less so. While I certainly don’t begrudge Jimenez the opportunity to stand up for his work, there is a self-aggrandizing and pious quality to his public reaction to the controversy. The worst moments in his book are ones that reveal his self-seriousness. He writes about himself in a way that biographers typically reserve for a hero or crusader. In a text that proudly takes aim at the pieties and pretense of the gay rights movement, the tendency to devolve into self-mythologizing becomes not merely annoying but intolerable. If Jimenez’s critics are guilty of turning attention away from his factual claims about the Shepard case toward issues regarding his motives as a reporter, unfortunately Jimenez often is too.

Still, Jimenez cites no fewer than ten sources as evidence for a prior relationship, of whatever kind, between Shepard and his murderers. He also details the large number of mutual friends and acquaintances shared by Shepard and his killers, making the odds of the three of them never interacting extremely low, particularly in a small town like Laramie. If all his claims are inventions, why are so many people in Laramie willing to corroborate them? Why would one of the chief investigators of the murder, Ben Fritzen, claim that the murder “comes down to drugs and money”? Why would Ted Henson, a sexual partner of Shepard’s, corroborate Shepard’s relationship with McKinney? Why would so many of the people Jimenez interviewed lend credence to Jimenez? How would they profit from lying? And why is the burden of proof assumed to lie solely on Jimenez rather than on those defending a conventional wisdom unsupported by reporting as thorough and extensive as Jimenez’s? The book has given our broader media a perfect opportunity to explore these questions and perhaps to rebut Jimenez’s claims. That opportunity has been met with silence.

Link: Life as a Nonviolent Psychopath

In 2005, James Fallon’s life started to resemble the plot of a well-honed joke or big-screen thriller: A neuroscientist is working in his laboratory one day when he thinks he has stumbled upon a big mistake. He is researching Alzheimer’s and using his healthy family members’ brain scans as a control, while simultaneously reviewing the fMRIs of murderous psychopaths for a side project. It appears, though, that one of the killers’ scans has been shuffled into the wrong batch.

The scans are anonymously labeled, so the researcher has a technician break the code to identify the individual in his family, and place his or her scan in its proper place. When he sees the results, however, Fallon immediately orders the technician to double check the code. But no mistake has been made: The brain scan that mirrors those of the psychopaths is his own.

After discovering that he had the brain of a psychopath, Fallon delved into his family tree and spoke with experts, colleagues, relatives, and friends to see if his behavior matched up with the imaging in front of him. He not only learned that few people were surprised at the outcome, but that the boundary separating him from dangerous criminals was less determinate than he presumed. Fallon wrote about his research and findings in the book The Psychopath Inside: A Neuroscientist’s Personal Journey Into the Dark Side of the Brain, and we spoke about the idea of nature versus nurture, and what—if anything—can be done for people whose biology might betray their behavior.

One of the first things you talk about in your book is the often unrealistic or ridiculous ways that psychopaths are portrayed in film and television. Why did you decide to share your story and risk being lumped in with all of that?

I’m a basic neuroscientist—stem cells, growth factors, imaging genetics—that sort of thing. When I found out about my scan, I kind of let it go after I saw that the rest of my family’s scans were quite normal. I was worried about Alzheimer’s, especially along my wife’s side, and we were concerned about our kids and grandkids. Then my lab was busy doing gene discovery for schizophrenia and Alzheimer’s and launching a biotech start-up from our research on adult stem cells. We won an award and I was so involved with other things that I didn’t actually look at my results for a couple of years.

This personal experience really had me look into a field that I was only tangentially related to, and burnished into my mind the importance of genes and the environment on a molecular level. For specific genes, those interactions can really explain behavior. And what is hidden under my personal story is a discussion about the effect of bullying, abuse, and street violence on kids.

You used to believe that people were roughly 80 percent the result of genetics, and 20 percent the result of their environment. How did this discovery cause a shift in your thinking?

I went into this with the bias of a scientist who believed, for many years, that genetics were very, very dominant in who people are—that your genes would tell you who you were going to be. It’s not that I no longer think that biology, which includes genetics, is a major determinant; I just never knew how profoundly an early environment could affect somebody.

While I was writing this book, my mother started to tell me more things about myself. She said she had never told me or my father how weird I was at certain points in my youth, even though I was a happy-go-lucky kind of kid. And as I was growing up, people all throughout my life said I could be some kind of gang leader or Mafioso don because of certain behavior. Some parents forbade their children from hanging out with me. They’d wonder how I turned out so well—a family guy, successful, professional, never been to jail and all that.

I asked everybody that I knew, including psychiatrists and geneticists that have known me for a long time, and knew my bad behavior, what they thought. They went through very specific things that I had done over the years and said, “That’s psychopathic.” I asked them why they didn’t tell me and they said, “We did tell you. We’ve all been telling you.” I argued that they had called me “crazy,” and they all said, “No. We said you’re psychopathic.”

I found out that I happened to have a series of genetic alleles, “warrior genes,” that had to do with serotonin and were thought to confer risk for aggression, violence, and low emotional and interpersonal empathy—if you’re raised in an abusive environment. But if you’re raised in a very positive environment, that can have the effect of offsetting the negative effects of some of the other genes.

I had some geneticists and psychiatrists who didn’t know me examine me independently, and look at the whole series of disorders I’ve had throughout my life. None of them have been severe; I’ve had the mild form of things like anxiety disorder and OCD, but it lined up with my genetics.

The scientists said, “For one, you might never have been born.” My mother had miscarried several times and there probably were some genetic errors. They also said that if I hadn’t been treated so well, I probably wouldn’t have made it out of being a teenager. I would have committed suicide or have gotten killed, because I would have been a violent guy.

How did you react to hearing all of this?

I said, “Well, I don’t care.” And they said, “That proves that you have a fair dose of psychopathy.” Scientists don’t like to be wrong, and I’m narcissistic so I hate to be wrong, but when the answer is there before you, you have to suck it up, admit it, and move on. I couldn’t.

I started reacting with narcissism, saying, “Okay, I bet I can beat this. Watch me and I’ll be better.” Then I realized my own narcissism was driving that response. If you knew me, you’d probably say, “Oh, he’s a fun guy”–or maybe, “He’s a big-mouth and a blowhard narcissist”—but I also think you’d say, “All in all, he’s interesting, and smart, and okay.” But here’s the thing—the closer to me you are, the worse it gets. Even though I have a number of very good friends, they have all ultimately told me over the past two years when I asked them—and they were consistent even though they hadn’t talked to each other—that I do things that are quite irresponsible. It’s not like I say, Go get into trouble. I say, Jump in the water with me.

What’s an example of that, and how do you come back from hurting someone in that way?

For me, because I need these buzzes, I get into dangerous situations. Years ago, when I worked at the University of Nairobi Hospital, a few doctors had told me about AIDS in the region as well as the Marburg virus. They said a guy had come in who was bleeding out of his nose and ears, and that he had been up in the Elgon, in the Kitum Caves. I thought, “Oh, that’s where the elephants go,” and I knew I had to visit. I would have gone alone, but my brother was there. I told him it was an epic trek to where the old matriarch elephants went to retrieve minerals in the caves, but I didn’t mention anything else.

When we got there, there was a lot of rebel activity on the mountain, so there was nobody in the park except for one guard. So we just went in. There were all these rare animals and it was tremendous, but also, this guy had died from Marburg after being here, and nobody knew exactly how he’d gotten it. I knew his path and followed it to see where he camped.

That night, we wrapped ourselves around a fire because there were lions and all these other animals. We were jumping around and waving sticks on fire at the animals in the absolute dark. My brother was going crazy and I joked, “I have to put my head inside of yours because I have a family and you don’t, so if a lion comes and bites one of our necks, it’s gotta be you.”

Again, I was joking around, but it was a real danger. The next day, we walked into the Kitum Caves and you could see where rocks had been knocked over by the elephants. There was also the smell of all of this animal dung—and that’s where the guy got the Marburg; scientists didn’t know whether it was the dung or the bats.

A bit later, my brother read an article in The New Yorker about Marburg, which inspired the movie Outbreak. He asked me if I knew about it. I said, “Yeah. Wasn’t it exciting? Nobody gets to do this trip.” And he called me names and said, “Not exciting enough. We could’ve gotten Marburg; we could have gotten killed every two seconds.” All of my brothers have a lot of machismo and brio; you’ve got to be a tough guy in our family. But deep inside, I don’t think that my brother fundamentally trusts me after that. And why should he, right? To me, it was nothing.

After all of this research, I started to think of this experience as an opportunity to do something good out of being kind of a jerk my entire life. Instead of trying to fundamentally change—because it’s very difficult to change anything—I wanted to use what could be considered faults, like narcissism, to my advantage: to do something good.

What has that involved?

I started with simple things: how I interact with my wife, my sister, and my mother. Even though they’ve always been close to me, I don’t treat them all that well. I treat strangers pretty well—really well, and people tend to like me when they meet me—but I treat my family the same way, like they’re just somebody at a bar. I treat them well, but I don’t treat them in a special way. That’s the big problem.

I asked them this—it’s not something a person will tell you spontaneously—but they said, “I give you everything. I give you all this love and you really don’t give it back.” They all said it, and that sure bothered me. So I wanted to see if I could change. I don’t believe it, but I’m going to try.

In order to do that, every time I started to do something, I had to think about it, look at it, and go: No. Don’t do the selfish thing or the self-serving thing. Step-by-step, that’s what I’ve been doing for about a year and a half and they all like it. Their basic response is: We know you don’t really mean it, but we still like it.

I told them, “You’ve got to be kidding me. You accept this? It’s phony!” And they said, “No, it’s okay. If you treat people better it means you care enough to try.” It blew me away then and still blows me away now. 

But treating everyone the same isn’t necessarily a bad thing, is it? Is it just that the people close to you want more from you?

Yes. They absolutely expect and demand more. It’s a kind of cruelty, a kind of abuse, because you’re not giving them that love. My wife to this day says it’s hard to be with me at parties because I’ve got all these people around me, and I’ll leave her or other people in the cold. She is not a selfish person, but I can see how it can really work on somebody.

I gave a talk two years ago in India at the Mumbai LitFest on personality disorders and psychopathy, and we also had a historian from Oxford talk about violence against women in terms of the brain and social development. After it was over, a woman came up to me and asked if we could talk. She was a psychiatrist but also a science writer and said, “You said that you live in a flat emotional world—that is, that you treat everybody the same. That’s Buddhist.” I don’t know anything about Buddhism but she continued on and said, “It’s too bad that the people close to you are so disappointed in being close to you. Any learned Buddhist would think this was great.” I don’t know what to do with that.

Sometimes the truth is not just that it hurts, but that it’s just so disappointing. You want to believe in romance and have romance in your life—even the most hardcore, cold intellectual wants the romantic notion. It kind of makes life worth living. But with these kinds of things, you really start thinking about what a machine it means we are—what it means that some of us don’t need those feelings, while some of us need them so much. It destroys the romantic fabric of society in a way.

So what I do, in this situation, is think: How do I treat the people in my life as if I’m their son, or their brother, or their husband? It’s about going the extra mile for them so that they know I know this is the right thing to do. I know when the situation comes up, but my gut instinct is to do something selfish. Instead, I slow down and try to think about it. It’s like dumb behavioral modification; there’s no finesse to this, but I said, well, why does there have to be finesse? I’m trying to treat it as a straightaway thing, when the situation comes up, to realize there’s a chance that I might be wrong, or reacting in a poor way, or without any sort of love—like a human.

A few years ago there was an article in The New York Times called “Can You Call a 9-Year-Old a Psychopath?” The subject was a boy named Michael whose family was concerned about him—he’d been diagnosed with several disorders and eventually deemed a possible psychopath by Dan Waschbusch, a researcher at Florida International University who studies “callous-unemotional children.” Dr. Waschbusch examines these children in hopes of finding possible treatment or rehabilitation. You mentioned earlier that you don’t believe people can fundamentally change; what is your take on this research?

In the ’70s, when I was still a postdoc and a young professor, I started working with some psychiatrists and neurologists who would tell me that they could identify a probable psychopath when he or she was only 2 or 3 years old. I asked them why they didn’t tell the parents and they said, “There’s no way I’m going to tell anybody. First of all, you can’t be sure; second of all, it could destroy the kid’s life; and third of all, the media and the whole family will be at your door with sticks and knives.” So, when Dr. Waschbusch’s work came out two years ago, it was like, “My god. He actually said it.” This was something that all psychiatrists and neurologists in the field knew—especially if they were pediatric psychologists and had the full trajectory of a kid’s life. It can be recognized very, very early—certainly before age 9—but by that time the question of how to un-ring the bell is a tough one.

My bias is that even though I work in growth factors, plasticity, memory, and learning, I think the whole idea of plasticity in adults—or really after puberty—is so overblown. No one knows if the changes that have been shown are permanent, and it doesn’t count if it’s only temporary. It’s like the Mozart Effect—sure, there are studies saying there is plasticity in the brain using sound or electrical stimulation, but talk to this person in a year or two. Has anything really changed? An entire cottage industry was made from playing Mozart to pregnant women’s abdomens. That’s how the idea of plasticity gets out of hand. I think people can change if they devote their whole life to the one thing and stop all the other parts of their life, but that’s what people can’t do. You can have behavioral plasticity and maybe change behavior with parallel brain circuitry, but the number of times this happens is really rare.

So I really still doubt plasticity. I’m trying to do it by devoting myself to this one thing—to being a nice guy to the people that are close to me—but it’s a sort of game that I’m playing with myself because I don’t really believe it can be done, and it’s a challenge.

In some ways, though, the stakes are different for you because you’re not violent—and isn’t that the concern? Relative to your own life, your attempts to change may positively impact your relationships with your friends, family, and colleagues. But in the case of possibly violent people, they may harm others.

The jump from being a “prosocial” psychopath, or somebody on the edge who doesn’t act out violently, to someone who is a real criminal predator is not clear. For me, I think I was protected because I was brought up in an upper-middle-class, educated environment with very supportive men and women in my family. So there may be a mass convergence of genetics and environment over a long period of time. But what would happen if I lost my family or lost my job; what would I then become? That’s the test.

For people who have the fundamental biology—the genetics, the brain patterns, and that early existence of trauma—first of all, if they’re abused they’re going to be pissed off and have a sense of revenge: I don’t care what happens to the world because I’m getting even. But a real, primary psychopath doesn’t need that. They’re just predators who don’t need to be angry at all; they do these things because of some fundamental lack of connection with the human race, and with individuals, and so on.

Someone who has money, and sex, and rock and roll, and everything they want may still be psychopathic—but they may just manipulate people, or use people, and not kill them. They may hurt others, but not in a violent way. Most people care about violence—that’s the thing. People may say, “Oh, this very bad investment counselor was a psychopath”—but the essential difference in criminality between that and murder is something we all hate and we all fear. It just isn’t known if there is some ultimate trigger. 

Link: Chicken Soup for the Neoliberal Soul

The problems of our time will be solved by our collective capacity to change the world, not self-therapy.

Sam Polk was one of the wolves of Wall Street. In a self-lacerating memoir in Sunday’s New York Times, Polk looks back at his time as a brash young trader and is disgusted with the person that he used to be. “I was a giant fireball of greed.” “I wanted a billion dollars.” “I was lying to myself.” I, I, I.

Despite the pretty blondes, the NoHo loft, ready entrée to Manhattan’s most exclusive restaurants, and second-row Knicks tickets on speed dial, nothing could make up for the “inner wound,” the spiritual hole that couldn’t be plugged with piles of easy money. Like his fellow “wealth addicts” on the trading floor, Polk was convinced that being rich would solve all of his problems. It didn’t, and he eventually left the wolves’ den (but not before making a killing by short-selling derivatives during the financial crisis).

On first glance, Polk’s harrowing narrative seems like a scathing indictment of neoliberalism. He praises the contributions of working people like his mother, a nurse practitioner, while equating traders and financiers with junkies, always on the lookout for the short-term fix instead of the long-term interests of the system as a whole. But though he’s left Wall Street behind, Polk has not been nearly as successful in escaping the affective and ideological spaces of neoliberalism.

His confession is drenched in the therapeutic language of self-help culture: political phenomena like financialization, inequality, and class power are redefined as personal pathologies to be treated with psychotherapy and support groups for those poor souls suffering from wealth addiction. In the coup de grace, Polk ends by — you guessed it — founding a non-profit organization to make up for all the bad things he’s done.

Its purpose? Not to combat the power of finance capital, or even to attend to the spiritual needs of former Wall Street traders, but to reform the eating habits of poor people.

In her excellent new book Coming Up Short: Working-Class Adulthood in an Age of Uncertainty, the sociologist Jennifer Silva analyzes the ways in which neoliberalism has radically transformed our sense of self. As Silva argues, the assault on working-class organizations and living standards has led many young adults to adopt a profoundly individualistic and therapeutic view of the world and their personal development.

The scores of young workers that she interviewed for her study had no faith in politics or collective action to address their problems or to give their lives meaning. Instead, they deal with the traumas of everyday life by crafting “deeply personal coming of age stories, grounding their adult identities in recovering from painful pasts — whether addictions, childhood abuse, family trauma, or abandonment — and forging an emancipated, transformed, adult self.”

In the language of C. Wright Mills, they lack a sociological imagination that allows them to connect personal troubles to public issues. The social damage wrought by deunionization, financialization, and deeply embedded patterns of gender and racial discrimination are consistently transmuted into evidence of personal shortcomings that, if left uncorrected, hold individuals back from attaining stability and security.

Though Polk is a child of the middle class who made a fortune before turning thirty, the narrative he crafts to explain his personal trajectory bears all the same characteristics as those of Silva’s working-class interviewees. He traces all of his personal shortcomings — his youthful drug and alcohol abuse, low-level criminal activity, workplace fistfights, sexual infidelities, and lust for money — to childhood trauma at the hands of an abusive father. In his telling, it is individuals with propensities toward addictive behaviors, not political actors or socioeconomic structures, that are responsible for the vast gulf between the rich and the rest of us. It was not revulsion at the vast social wreckage of neoliberalism, but rather the development of a “core sense of self” honed through years of therapy, that finally spurred his decision to leave Wall Street.

Like his counterparts occupying the lower rungs of an increasingly precarious labor market, Polk has made sense of his life and the world by creating individualized solutions to confront and transcend a traumatic past disconnected from any wider social context. This affective orientation is a generalized condition of neoliberal subjectivity across classes.

At the height of her reign, Margaret Thatcher declared that in the neoliberal counter-revolution, economics was the method but the object was to change the soul. Judging from the bleak emotional landscapes that so many of us seem to inhabit, that project has succeeded beyond its protagonists’ wildest dreams.

The appeal of individualistic and therapeutic approaches to the problems of our time is not difficult to apprehend. But it is only through the creation of solidarities that rebuild confidence in our collective capacity to change the world that their grip can be broken. Until then, the only thing that the Sam Polks of the world can offer us is a solitary bowl of chicken soup for our neoliberal souls.