Sunshine Recorder

Link: Forever Alone: Why Loneliness Matters in the Social Age

I got up and went over and looked out the window. I felt so lonesome, all of a sudden. I almost wished I was dead. Boy, did I feel rotten. I felt so damn lonesome. I just didn’t want to hang around any more. It made me too sad and lonesome.

— J.D. Salinger in The Catcher in the Rye

Loneliness was a problem I experienced most poignantly in college. In the three years I spent at Carnegie Mellon, the crippling effects of loneliness slowly pecked away at my enthusiasm for learning and for life, until I was drowning in an endless depressive haze that never completely cleared until I left Pittsburgh.

It wasn’t for lack of trying either. At the warm behest of the orientation counselors, I joined just the right number of clubs, participated in most of the dorm activities, and tried to expand my social portfolio as much as possible.

None of it worked.

To the extent that I sought out CAPS (our student psych and counseling service) for help, the platitudes they offered as advice (“Just put yourself out there!”) only served to confirm my suspicion that loneliness isn’t a very visible problem. (After all, the cure for loneliness isn’t exactly something that could be prescribed. “Have you considered transferring?” they finally suggested, after exhausting their list of thought-terminating clichés. I graduated early instead.)

As prolonged loneliness took its toll, I became very unhappy—to put it lightly—and even in retrospect I have difficulty pinpointing a specific cause. It wasn’t that I didn’t know anyone or failed to make any friends, and it wasn’t that I was alone more than I liked.

Sure, I could point my finger at the abysmally fickle weather patterns of Pittsburgh, or the pseudo-suburban bubble that envelops the campus. There might even be a correlation between my academic dissonance with computer science and my feelings of loneliness. I might also just be an extremely unlikable person.

Whatever the reason (or confluence thereof), the reality remained that I struggled with loneliness throughout my time in college.

+++

I recall a conversation with my friend Dev one particular evening on the patio of our dormitory. It was the beginning of my junior and last year at CMU, and I had just finished throwing an ice cream party for the residents I oversaw as an RA.

“Glad to be back?” he asked as he plopped down on a lawn chair beside me.

“No, not really.”

The sun was setting, and any good feelings about the upcoming semester with it. We made small talk about the school in general, as he had recently transferred, but eventually Dev asked me if I was happy there.

“No, not really.”

“Why do you think you’re so miserable here?”

“I don’t know. A lot of things, I guess. But mostly because I feel lonely. Like I don’t belong, like I can’t relate to or connect with anyone on an emotional level. I haven’t made any quality relationships here that I would look back on with any fond memories. Fuck… I don’t know what to do.”

College, at least for me, was a harrowing exercise in how helplessly debilitating, hopelessly soul-crushing, and at times life-threatening loneliness could be. It’s a problem nobody talks about, and it’s been a subject of much personal relevance and interest.

Loneliness as a Health Problem

A recent article published on Slate outlines the hidden dangers of social isolation. Chronic loneliness, as Jessica Olien discovered, poses serious health risks that affect not only mental health but physiological well-being as well.

The lack of quality social relationships in a person’s life has been linked to an increased mortality risk comparable to that of smoking and alcohol consumption, one that exceeds the influence of other risk factors like physical inactivity and obesity. It’s hard to brush off loneliness as a character flaw or an ephemeral feeling when you realize it kills more people than obesity.

Research also shows that loneliness diminishes sleep quality and impairs physiological function, in some cases reducing immune function and boosting inflammation, which increases risk for diabetes and heart disease.

Why hasn’t loneliness gotten much attention as a medical problem? Olien shares the following observation:

As a culture we obsess over strategies to prevent obesity. We provide resources to help people quit smoking. But I have never had a doctor ask me how much meaningful social interaction I am getting. Even if a doctor did ask, it is not as though there is a prescription for meaningful social interaction.

As a society we look down upon those who admit to being lonely, casting them out with labels like “loners” until they would rather hide behind shame and doubt than speak up. This dynamic only makes it harder to devise solutions to what is clearly a larger societal issue, and it calls into question the effects of culture on our perception of loneliness as a problem.

Loneliness as a Culture Problem

Stephen Fry, in a blog post titled Only the Lonely explaining his suicide attempt last year, describes in detail his struggle with depression. His account offers a rare and candid glimpse into a reality of loneliness that those afflicted often hide from the public:

Lonely? I get invitation cards through the post almost every day. I shall be in the Royal Box at Wimbledon and I have serious and generous offers from friends asking me to join them in the South of France, Italy, Sicily, South Africa, British Columbia and America this summer. I have two months to start a book before I go off to Broadway for a run of Twelfth Night there.

I can read back that last sentence and see that, bipolar or not, if I’m under treatment and not actually depressed, what the fuck right do I have to be lonely, unhappy or forlorn? I don’t have the right. But there again I don’t have the right not to have those feelings. Feelings are not something to which one does or does not have rights.

In the end loneliness is the most terrible and contradictory of my problems.

In the United States, approximately 60 million people, or 20% of the population, feel lonely. According to the General Social Survey, between 1985 and 2004, the number of people with whom the average American discusses important matters decreased from three to two, and the number with no one to discuss important matters with tripled.

Modernization has been cited as a reason for the intensification of loneliness in societies around the world, through greater migration, smaller household sizes, and heavier media consumption.

In Japan, loneliness is an even more pervasive, layered problem mired in cultural parochialisms. Gideon Lewis-Kraus pens a beautiful narrative in Harper’s in which he describes his foray into the world of Japanese co-sleeping cafés:

“Why do you think he came here, to the sleeping café?”

“He wanted five-second hug maybe because he had no one to hug. Japan is haji culture. Shame. Is shame culture. Or maybe also is shyness. I don’t know why. Tokyo people … very alone. And he does not have … ” She thought for a second, shrugged, reached for her phone. “Please hold moment.”

She held it close to her face, multitouched the screen not with thumb and forefinger but with tiny forefinger and middle finger. I could hear another customer whispering in Japanese in the silk-walled cubicle at our feet. His co-sleeper laughed loudly, then laughed softly. Yukiko tapped a button and shone the phone at my face. The screen said COURAGE.

It took an enormous effort for me to come to terms with my losing battle with loneliness and the ensuing depression at CMU, and an even greater leap of faith to reach out for help. (That it was to no avail is another story altogether.) But what is even more disconcerting to me is that the general stigma against loneliness and mental health issues, hinging on an unhealthy stress culture, makes it hard for afflicted students to seek assistance at all.

As Olien puts it, “In a society that judges you based on how expansive your social networks appear, loneliness is difficult to fess up to. It feels shameful.”

To truly combat loneliness from a cultural angle, we need to start by examining our own fears about being alone and recognizing that, for humans, loneliness is often symptomatic of unfulfilled social needs. Most importantly, we need to accept that it’s okay to feel lonely. Fry, signing off on his heartfelt post, offers this insight:

Loneliness is not much written about (my spell-check wanted me to say that loveliness is not much written about—how wrong that is) but humankind is a social species and maybe it’s something we should think about more than we do.

Loneliness as a Technology Problem

Technology, and by extension media consumption in the Internet age, adds the most perplexing (and perhaps the most interesting) dimension to the loneliness problem. As it turns out, technology isn’t necessarily helping us feel more connected; in some cases, it makes loneliness worse.

The amount of time you spend on Facebook, as a recent study found, is inversely related to how happy you feel throughout the day.

Take a moment to watch this video.

It’s a powerful, sobering reminder that our growing dependence on technology to communicate has serious social repercussions, and in it Cohen presents his central thesis:

We are lonely, but we’re afraid of intimacy, while the social networks offer us three gratifying fantasies: 1) That we can put our attention wherever we want it to be. 2) That we will always be heard. 3) That we will never have to be alone.

And that third idea, that we will never have to be alone, is central to changing our psyches. It’s shaping a new way of being. The best way to describe it is:

I share, therefore I am.

Public discourse on the cultural ramifications of technology is certainly not a recent development, and the general sentiment that our perverse obsession with sharing will be humanity’s downfall continues to echo in various forms around the web: articles proclaiming that Instagram is ruining people’s lives, the existence of a section on Reddit called cringepics where people congregate to ridicule things others post on the Internet, the increasing number of self-proclaimed “social media gurus” on Twitter, to name a few.

The signs seem to suggest we have reached a tipping point for “social” media that’s not very social on a personal level, but whether it means a catastrophic implosion or a gradual return to more authentic forms of interpersonal communications remains to be seen.

While technology has been a source of social isolation for many, it has the capacity to alleviate loneliness as well. A study funded by the online dating site eHarmony shows that couples who met online are less likely to divorce and report greater marital satisfaction than those who met in real life.

The same model could potentially be applied to friendships, and it’s frustrating to see that there aren’t more startups leveraging this opportunity when the problem is so immediate and in need of solutions. It’s a matter of exposure and education on the truths of loneliness, and unfortunately we’re just not there yet.

+++

The perils of loneliness shouldn’t be overlooked in an increasingly hyperconnected world that often tells another story through rose-tinted lenses. Rather, the gravity of loneliness should be addressed and brought to light as a multifaceted problem, one often muted and stigmatized in our society. I learned firsthand how painfully real a problem loneliness can be, and more should be done to raise awareness of it and to help those affected.

“What do you think I should do?” I looked at Dev as the last traces of sunlight teetered over the top of Morewood Gardens. It was a rhetorical question—things weren’t about to get better.

“Find better people,” he replied.

I offered him a weak smile in return, but little did I know then how prescient those words were.

In the year that followed, I started a fraternity with some of the best kids I’d come to know (Dev included), graduated college and moved to San Francisco, made some of the best friends I’ve ever had, and never looked back, if only to remember, and remember well, that it’s never easy being lonely.

Link: Antibiotics, Capitalism and the Failure of the Market

In March 2013, England’s Chief Medical Officer, Dame Sally Davies, gave the stark warning that antimicrobial resistance poses “a catastrophic threat”. Unless we act now, she argued, “any one of us could go into hospital in 20 years for minor surgery and die because of an ordinary infection that can’t be treated by antibiotics. And routine operations like hip replacements or organ transplants could be deadly because of the risk of infection.”[1]

Over billions of years, bacteria have encountered a multitude of naturally occurring antibiotics and consequently developed resistance mechanisms to survive. The primary emergence of resistance is random, coming about by DNA mutation or gene exchange with other bacteria. However, the further use of antibiotics then favours the spread of those bacteria that have become resistant.

More than 70% of pathogenic bacteria that cause healthcare-acquired infections are resistant to at least one of the drugs most commonly used to treat them.[2][3] Increasing resistance in bacteria like Escherichia coli (E. coli) is a growing public health concern due to the very limited therapy options for infections caused by E. coli. This is particularly so in E. coli that is resistant to carbapenem antibiotics, the drugs of last resort.

The emergence of resistance is a complex issue involving inappropriate use and overuse of antimicrobials in humans and animals. Antibiotics may be administered by health professionals or farmers when they are not required, or patients may take only part of a full course of treatment. This gives bacteria the opportunity to encounter the otherwise life-saving drugs at ineffective levels, survive, and mutate to produce resistant strains. Once created, these resistant strains have been allowed to spread by poor infection control and regional surveillance procedures.

These two problems are easily solved by educating healthcare professionals, patients and animal keepers about the importance of antibiotic treatment regimens and keeping to them. Advocating good infection control procedures in hospitals and investment in surveillance programs monitoring patterns of resistance locally and across the country would reduce the spread of infection. However, the biggest problem is capitalism and the fact that there is no supply of new antimicrobials.

Between 1929 and the 1970s pharmaceutical companies developed more than twenty new classes of antimicrobials.[4][5] Since the 1970s only two new categories of antimicrobials have arrived.[6][7] Today the pipeline for new antibiotic classes active against highly resistant Gram-negative bacteria is dry;[8][9][10] the only novel category in early clinical development has recently been withdrawn.[9][11]

For the last seventy years the human race has kept itself ahead of resistant bacteria by going back into the laboratory and developing the next generation of antimicrobials. However, due to a failure of the market, pharmaceutical companies are no longer interested in developing antibiotics.

Despite the warnings from Dame Sally Davies, drug companies have pulled back from antimicrobial research because there is no profit to be made from it. When used appropriately, a single £100 course of antibiotics will save someone’s life. However, that clinical effectiveness and short-term use have the unfortunate consequence of making antimicrobials significantly less profitable than the pharmaceuticals used in cancer therapy, which can cost £20,000 per year.

In our current system, a drug company’s return on its financial investment in antimicrobials depends on its volume of sales. A further problem arises when we factor in the educational programs aimed at teaching healthcare professionals and animal keepers to limit their use of antimicrobials. This, combined with the relative unprofitability, has produced a failure in the market and a paradox for capitalism.

A response commonly proposed by my fellow scientists is that our government must provide incentives for pharmaceutical companies to develop new antimicrobial drugs. Suggestions are primarily focused on reducing the financial risk for drug companies and include grants, prizes, tax breaks, creating public-private partnerships and increasing intellectual property protections. Further suggestions are often related to removing “red tape” and streamlining the drug approval and clinical trial requirements.

In September 2013 the Department of Health published its UK Five Year Antimicrobial Resistance Strategy.[12] The document called for “work to reform and harmonise regulatory regimes relating to the licencing and approval of antibiotics”, better collaboration “encouraging greater public-private investment in the discovery and development of a sustainable supply of effective new antimicrobials” and states that “Industry has a corporate and social responsibility to contribute to work to tackle antimicrobial resistance.”

I think we should have three major objections to these statements. First, the managers in the pharmaceutical industry do not have any responsibility to contribute to work to tackle antimicrobial resistance. They have a responsibility to practice within the law or be fined, and to make a profit for shareholders or be replaced. It is the state that has the responsibility for the protection and wellbeing of its citizens.

Second, following this year’s horsemeat scandal we should object to companies cutting corners in an attempt to increase profits. This leads on to the final objection: that by promoting public-private collaboration, all the state is doing is subsidising shareholder profits by reducing the shareholders’ financial risk.

The market has failed and novel antimicrobials will require investment not based on a financial return from the volume of antibiotics sold but on the benefit for society of being free from disease.

John Maynard Keynes, in his 1924 Sydney Ball Foundation Lecture at Cambridge, said “the important thing for government is not to do things which individuals are doing already, and to do them a little better or a little worse; but to do those things which at present are not done at all”.[13] Mariana Mazzucato, in her 2013 book The Entrepreneurial State, discusses how the state can lead innovation and criticises the risk and reward relationships in current public-private partnerships.[14] Mazzucato argues that the state can be entrepreneurial and inventive and that we need to reinvent the state and government.

This praise of the potential of the state seems to be supported by the public: following announcements of energy price rises in October 2013, a YouGov poll found that people opposed the NHS being run by the private sector by 12 to 1, 67% were in favour of Royal Mail being run in the public sector, 66% wanted railway companies to be nationalised, and 68% were in favour of nationalised energy companies.[15]

We should support state-funded professors, post-doctoral researchers and PhD students as scientists working within the public sector. They could study the mechanisms of drug entry into bacterial cells or screen natural antibiotic compounds. This could not be done on a shoestring budget, and it would no doubt take years to build the infrastructure, but we could also make the case for where the research takes place.

Andrew Witty’s recent review of higher education and regional growth asked universities to become more involved in their local economies.[16] The state could choose to build laboratories in geographical areas neglected by private sector investment and help promote regional recovery. Even more radically, if novel antibiotics are produced for their social good rather than financial gain, they can be reserved indefinitely until a time of crisis.

With regard to democracy, patients and the general public could have a greater say in what is researched, helping to shift us away from our reliance on the market to provide what society needs. The market responds not to what society needs but to what will create the most profit. This is a recurring theme throughout science. I cannot begin to tell you how frequently I listen to case studies regarding parasites that only affect people in the developing world. Again, the people of the developing world have very little money, so drug companies neglect to develop drugs as there is no source of profit. We should make the case for innovation driven not by greed but by the service of society and even our species.

Before Friedrich Hayek, John Desmond Bernal in his 1939 book, The Social Function of Science, argued for more spending on innovation as science was not merely an abstract intellectual enquiry but of real practical value.[17] Bernal placed science and technology as one of the driving forces of history. Why should we not follow that path?

Link: The War on Drugs Is Over. Drugs Won.

The world’s most extensive study of the drug trade has just been published in the medical journal BMJ Open, providing the first “global snapshot” of four decades of the war on drugs. You can already guess the result. The war on drugs could not have been a bigger failure. To sum up their most important findings, the average purity of heroin and cocaine increased by 60 percent and 11 percent, respectively, between 1990 and 2007. Cannabis purity is up a whopping 161 percent over that same time. Not only are drugs way purer than ever, they’re also way, way cheaper. Coke is on an 80 percent discount from 1990, heroin 81 percent, cannabis 86 percent. After a trillion dollars spent on the drug war, now is the greatest time in history to get high.

The new study only confirms what has been well-established for a decade at least, that trying to attack the drug supply is more or less pointless. The real question is demand, trying to mitigate its disastrous social consequences and treating the desire for drugs as a medical condition rather than as a moral failure. 

But there’s another question about demand that the research from BMJ Open poses. Why is there so much of it? No drug dealer ever worries about demand. Ever. The hunger for illegal drugs in America is assumed to be limitless. Why? One answer is that drugs feed a human despair that is equally limitless. And there is plenty of despair, no doubt. But the question becomes more complicated when you consider how many people are drugging themselves legally. In 2010 the CDC found that 48 percent of Americans used prescription drugs, 31 percent were taking two or more, and 11 percent were taking five or more. Two of the most common prescription drugs were stimulants, for adolescents, and anti-depressants, for middle-aged Americans.

The alteration of consciousness, both legal and illegal, is at an all-time high. And it is quickly accelerating. One of the more interesting books published in the past year is Daniel Lieberman’s The Story of the Human Body: Evolution, Health, and Disease. It is a fascinating study by the chair of the department of human evolutionary biology at Harvard of how our Paleolithic natures, set in a hypermodern reality, are failing to adjust. His conclusions on the future of the species are somewhat dark:

"We didn’t evolve to be healthy, but instead we were selected to have as many offspring as possible under diverse, challenging conditions. As a consequence, we never evolved to make rational choices about what to eat or how to exercise in conditions of abundance and comfort. What’s more, interactions between the bodies we inherited, the environments we create, and the decisions we sometimes make have set in motion an insidious feedback loop. We get sick from chronic diseases by doing what we evolved to do but under conditions for which our bodies are poorly adapted, and we then pass on those same conditions to our children, who also then get sick."

Our psychological reality is equally unadjusted to the world we live in. Cortisol levels — the stress hormone — evolved to increase during moments of crisis, like when a lion attacks. If you live in a city, your cortisol levels are constantly elevated. You’re always being chased. We are not built for that reality.

Lieberman’s solution is that we “respectfully and sensibly nudge, push, and sometimes oblige ourselves” to make healthier decisions, to live more in keeping with our biology and to adapt to the modern world with sensible, rational limits. But the mass demand for drugs — the boundless need to opiate and numb ourselves — shows that the simpler solution remains, and will no doubt remain, much more popular. Just take something.

Link: How I'm Going to Commit Suicide

A shockingly honest (and beautifully elegant) confession by Britain’s most celebrated art critic, Brian Sewell.

Every night I swallow a handful of pills. In the morning and during the day I swallow others,  haphazardly, for I am not always in the right place at the right time, but at night there is a ritual.

I undress. I clean my teeth. I wipe the mirror clear of splashes and see with some distaste the reflection of my decaying body, wondering that it ever had the impertinence to indulge in the pleasures of the flesh.

And then I take the pills. Some are for a heart that too often makes me feel that I have a misfiring single-cylinder diesel engine in my rib-cage.

Others are for the ordinary afflictions of age and still others ease the aches of old bones that creak and crunch. All in their way are poisons – that they do no harm is only a matter of dosage.

I intend, one day, to take an overdose. Not yet, for the experts at that friendly and understanding hospital, the Brompton in Kensington, manage my heart condition very well.

But the bone-rot will reach a point – not beyond endurance but beyond my willingness to endure it – when drugs prescribed to numb the pain so affect the functions of my brain that all the pleasures of music, art and books are dulled, and I merely exist.

An old buffer in a chair, sleeping and waking, sleeping and waking.

The thought of suicide is a great comfort, for it is what I shall employ if mere existence is ever all that I have. The difficulty will be that I must have the wit to identify the time, the weeks, the days, even  the critical moment (for it will not be long) between my recognising the need to end my life and the loss of my physical ability to carry out the plan.

There is a plan. I know exactly what I want to do and where I want to do it – not at home, not in my own bed. I shall write a note addressed ‘To whom it may concern’ explaining that I am committing suicide, that I am in sound mind, that no one else has been involved and, if I am discovered before my heart has stopped, I do not want to be resuscitated.

With this note in my pocket, I shall leave the house and totter off to a bench – foolishly installed by the local authority on a road so heavy with traffic that no one ever sits there – make myself comfortable and down as many pills as I can with a bottle of Bombay Gin, the only spirit that I like, to send them on their way.

With luck, no one will notice me for hours – and if they do, will think me an old drunk. Some unfortunate athlete will find me, stiff with rigor, on his morning jog.

I have left my cadaver to a teaching hospital for the use and abuse of medical students – and my sole misgiving is that, having filled it with poisons, I may have rendered it useless.

There are those who damn the suicide for invading the prerogative of the Almighty. Many years, however, have passed since I abandoned the beliefs, observances and irrational prejudices of Christianity, and I have no moral or religious inhibitions against suicide.

I cherish the notion of dying easily and with my wits about me. I am 82 tomorrow and do not want to die a dribbling dotard waiting for the Queen’s congratulatory greeting in 2031.

Nor do I wish to cling to an increasingly wretched life made unconscionable misery by acute or chronic pain and the humiliations of nursing.

What virtue can there be in suffering, in impotent wretchedness, in the bedpans and pisspots, the feeding with a spoon, the baby talk, the dwindling mind and the senses slipping in and out of consciousness?

For those so affected, dying is a prolonged and degrading misadventure. ‘We can ease the pain,’ says another of this interregnum between life and death. But what of those who want to hurry on?

Then the theologian argues that a man must not play God and determine his own end and prates of the purification of the soul through suffering and pain.

But what if the dying man is atheist or agnostic or has lost his faith – must he suffer life longer because of the prejudice of a Christian theologian? And has it occurred to no theologian that God himself might inspire the thought of suicide – or is that too great a heresy?

Link: The Obesity Era

As the American people got fatter, so did marmosets, vervet monkeys and mice. The problem may be bigger than any of us. 

Years ago, after a plane trip spent reading Fyodor Dostoyevsky’s Notes from the Underground and Weight Watchers magazine, Woody Allen melded the two experiences into a single essay. ‘I am fat,’ it began. ‘I am disgustingly fat. I am the fattest human I know. I have nothing but excess poundage all over my body. My fingers are fat. My wrists are fat. My eyes are fat. (Can you imagine fat eyes?).’ It was 1968, when most of the world’s people were more or less ‘height-weight proportional’ and millions of the rest were starving. Weight Watchers was a new organisation for an exotic new problem. The notion that being fat could spur Russian-novel anguish was good for a laugh.

That, as we used to say during my Californian adolescence, was then. Now, 1968’s joke has become 2013’s truism. For the first time in human history, overweight people outnumber the underfed, and obesity is widespread in wealthy and poor nations alike. The diseases that obesity makes more likely — diabetes, heart ailments, strokes, kidney failure — are rising fast across the world, and the World Health Organisation predicts that they will be the leading causes of death in all countries, even the poorest, within a couple of years. What’s more, the long-term illnesses of the overweight are far more expensive to treat than the infections and accidents for which modern health systems were designed. Obesity threatens individuals with long twilight years of sickness, and health-care systems with bankruptcy.

And so the authorities tell us, ever more loudly, that we are fat — disgustingly, world-threateningly fat. We must take ourselves in hand and address our weakness. After all, it’s obvious who is to blame for this frightening global blanket of lipids: it’s us, choosing over and over again, billions of times a day, to eat too much and exercise too little. What else could it be? If you’re overweight, it must be because you are not saying no to sweets and fast food and fried potatoes. It’s because you take elevators and cars and golf carts where your forebears nobly strained their thighs and calves. How could you do this to yourself, and to society?

Moral panic about the depravity of the heavy has seeped into many aspects of life, confusing even the erudite. Earlier this month, for example, the American evolutionary psychologist Geoffrey Miller expressed the zeitgeist in this tweet: ‘Dear obese PhD applicants: if you don’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation. #truth.’ Businesses are moving to profit on the supposed weaknesses of their customers. Meanwhile, governments no longer presume that their citizens know what they are doing when they take up a menu or a shopping cart. Yesterday’s fringe notions are becoming today’s rules for living — such as New York City’s recent attempt to ban large-size cups for sugary soft drinks, or Denmark’s short-lived tax surcharge on foods that contain more than 2.3 per cent saturated fat, or Samoa Air’s 2013 ticket policy, in which a passenger’s fare is based on his weight because: ‘You are the master of your air ‘fair’, you decide how much (or how little) your ticket will cost.’

Several governments now sponsor jauntily named pro-exercise programmes such as Let’s Move! (US), Change4Life (UK) and actionsanté (Switzerland). Less chummy approaches are spreading, too. Since 2008, Japanese law has required companies to measure and report the waist circumference of all employees between the ages of 40 and 74 so that, among other things, anyone over the recommended girth can receive an email of admonition and advice.

Hand-in-glove with the authorities that promote self-scrutiny are the businesses that sell it, in the form of weight-loss foods, medicines, services, surgeries and new technologies. A Hong Kong company named Hapilabs offers an electronic fork that tracks how many bites you take per minute in order to prevent hasty eating: shovel food in too fast and it vibrates to alert you. A report by the consulting firm McKinsey & Co predicted in May 2012 that ‘health and wellness’ would soon become a trillion-dollar global industry. ‘Obesity is expensive in terms of health-care costs,’ it said before adding, with a consultantly chuckle, ‘dealing with it is also a big, fat market.’

And so we appear to have a public consensus that excess body weight (defined as a Body Mass Index of 25 or above) and obesity (BMI of 30 or above) are consequences of individual choice. It is undoubtedly true that societies are spending vast amounts of time and money on this idea. It is also true that the masters of the universe in business and government seem attracted to it, perhaps because stern self-discipline is how many of them attained their status. What we don’t know is whether the theory is actually correct.

Of course, that’s not the impression you will get from the admonishments of public-health agencies and wellness businesses. They are quick to assure us that ‘science says’ obesity is caused by individual choices about food and exercise. As the Mayor of New York, Michael Bloomberg, recently put it, defending his proposed ban on large cups for sugary drinks: ‘If you want to lose weight, don’t eat. This is not medicine, it’s thermodynamics. If you take in more than you use, you store it.’ (Got that? It’s not complicated medicine, it’s simple physics, the most sciencey science of all.)

Yet the scientists who study the biochemistry of fat and the epidemiologists who track weight trends are not nearly as unanimous as Bloomberg makes out. In fact, many researchers believe that personal gluttony and laziness cannot be the entire explanation for humanity’s global weight gain. Which means, of course, that they think at least some of the official focus on personal conduct is a waste of time and money. As Richard L Atkinson, Emeritus Professor of Medicine and Nutritional Sciences at the University of Wisconsin and editor of the International Journal of Obesity, put it in 2005: ‘The previous belief of many lay people and health professionals that obesity is simply the result of a lack of willpower and an inability to discipline eating habits is no longer defensible.’

Link: The Lethality of Loneliness

For the first time in history, we understand how isolation can ravage the body and brain. Now, what should we do about it?

Sometime in the late ’50s, Frieda Fromm-Reichmann sat down to write an essay about a subject that had been mostly overlooked by other psychoanalysts up to that point. Even Freud had only touched on it in passing. She was not sure, she wrote, “what inner forces” made her struggle with the problem of loneliness, though she had a notion. It might have been the young female catatonic patient who began to communicate only when Fromm-Reichmann asked her how lonely she was. “She raised her hand with her thumb lifted, the other four fingers bent toward her palm,” Fromm-Reichmann wrote. The thumb stood alone, “isolated from the four hidden fingers.” Fromm-Reichmann responded gently, “That lonely?” And at that, the woman’s “facial expression loosened up as though in great relief and gratitude, and her fingers opened.”

Fromm-Reichmann would later become world-famous as the dumpy little therapist mistaken for a housekeeper by a new patient, a severely disturbed schizophrenic girl named Joanne Greenberg. Fromm-Reichmann cured Greenberg, who had been deemed incurable. Greenberg left the hospital, went to college, became a writer, and immortalized her beloved analyst as “Dr. Fried” in the best-selling autobiographical novel I Never Promised You a Rose Garden (later also a movie and a pop song). Among analysts, Fromm-Reichmann, who had come to the United States from Germany to escape Hitler, was known for insisting that no patient was too sick to be healed through trust and intimacy. She figured that loneliness lay at the heart of nearly all mental illness and that the lonely person was just about the most terrifying spectacle in the world. She once chastised her fellow therapists for withdrawing from emotionally unreachable patients rather than risk being contaminated by them. The uncanny specter of loneliness “touches on our own possibility of loneliness,” she said. “We evade it and feel guilty.”

Her 1959 essay, “On Loneliness,” is considered a founding document in a fast-growing area of scientific research you might call loneliness studies. Over the past half-century, academic psychologists have largely abandoned psychoanalysis and made themselves over as biologists. And as they delve deeper into the workings of cells and nerves, they are confirming that loneliness is as monstrous as Fromm-Reichmann said it was. It has now been linked with a wide array of bodily ailments as well as the old mental ones.

In a way, these discoveries are as consequential as the germ theory of disease. Just as we once knew that infectious diseases killed, but didn’t know that germs spread them, we’ve known intuitively that loneliness hastens death, but haven’t been able to explain how. Psychobiologists can now show that loneliness sends misleading hormonal signals, rejiggers the molecules on genes that govern behavior, and wrenches a slew of other systems out of whack. They have proved that long-lasting loneliness not only makes you sick; it can kill you. Emotional isolation is ranked as high a risk factor for mortality as smoking. A partial list of the physical diseases thought to be caused or exacerbated by loneliness would include Alzheimer’s, obesity, diabetes, high blood pressure, heart disease, neurodegenerative diseases, and even cancer—tumors can metastasize faster in lonely people.

The psychological definition of loneliness hasn’t changed much since Fromm-Reichmann laid it out. “Real loneliness,” as she called it, is not what the philosopher Søren Kierkegaard characterized as the “shut-upness” and solitariness of the civilized. Nor is “real loneliness” the happy solitude of the productive artist or the passing irritation of being cooped up with the flu while all your friends go off on some adventure. It’s not being dissatisfied with your companion of the moment—your friend or lover or even spouse— unless you chronically find yourself in that situation, in which case you may in fact be a lonely person. Fromm-Reichmann even distinguished “real loneliness” from mourning, since the well-adjusted eventually get over that, and from depression, which may be a symptom of loneliness but is rarely the cause. Loneliness, she said—and this will surprise no one—is the want of intimacy.

Today’s psychologists accept Fromm-Reichmann’s inventory of all the things that loneliness isn’t and add a wrinkle she would surely have approved of. They insist that loneliness must be seen as an interior, subjective experience, not an external, objective condition. Loneliness “is not synonymous with being alone, nor does being with others guarantee protection from feelings of loneliness,” writes John Cacioppo, the leading psychologist on the subject. Cacioppo privileges the emotion over the social fact because—remarkably—he’s sure that it’s the feeling that wreaks havoc on the body and brain. Not everyone agrees with him, of course. Another school of thought insists that loneliness is a failure of social networks. The lonely get sicker than the non-lonely, because they don’t have people to take care of them; they don’t have social support.

To the degree that loneliness has been treated as a matter of public concern in the past, it has generally been seen as a social problem—the product of an excessively conformist culture or of a breakdown in social norms. Nowadays, though, loneliness is a public health crisis. The standard U.S. questionnaire, the UCLA Loneliness Scale, asks 20 questions that run variations on the theme of closeness—“How often do you feel close to people?” and so on. As many as 30 percent of Americans don’t feel close to people at a given time.

Loneliness varies with age and poses a particular threat to the very old, quickening the rate at which their faculties decline and cutting their lives shorter. But even among the not-so-old, loneliness is pervasive. In a survey published by the AARP in 2010, slightly more than one out of three adults 45 and over reported being chronically lonely (meaning they’ve been lonely for a long time). A decade earlier, only one out of five said that. With baby-boomers reaching retirement age at a rate of 10,000 a day, the number of lonely Americans will surely spike.

Obviously, the sicker lonely people get, the more care they’ll need. This is true, and alarming, although as we learn more about loneliness, we’ll also be better able to treat it. But to me, what’s most momentous about the new biology of loneliness is that it offers concrete proof, obtained through the best empirical means, that the poets and bluesmen and movie directors who for centuries have deplored the ravages of lonesomeness on both body and soul were right all along. As W. H. Auden put it, “We must love one another or die.”

Link: Caring on Stolen Time: A Nursing Home Diary

I work in a place of death. People come here to die, and my co-workers and I care for them as they make their journeys. Sometimes these transitions take years or months. Other times, they take weeks or some short days. I count the time in shifts, in scheduled state visits, in the sham monthly meetings I never attend, in the announcements of the “Employee of the Month” (code word for best ass-kisser of the month), in the yearly pay increment of 20 cents per hour, and in the number of times I get called into the Human Resources office.

The nursing home residents also have their own rhythms. Their time is tracked by scheduled hospital visits; by the times when loved ones drop by to share a meal, to announce the arrival of a new grandchild, or to wait anxiously at their bedsides for heart-wrenching moments to pass. Their time is measured by transitions from processed food to pureed food, textures that match their increasing susceptibility to dysphagia. Their transitions are also measured by the changes from underwear to pull-ups and then to diapers. Even more than the loss of mobility, the use of diapers is often the most dreaded adaptation. For many people, lack of control over urinary functions and timing is the definitive mark of the loss of independence.

Many of the elderly I have worked with are, at least initially, aware of the transitions and respond with a myriad of emotions from shame and anger to depression, anxiety, and fear. Theirs was the generation that survived the Great Depression and fought the last “good war.” Aging was an anti-climactic twist to the purported grandeur and tumultuousness of their mid-twentieth-century youth.

“I am afraid to die. I don’t know where I will go,” a resident named Lara says to me, fear dilating her eyes.

“Lara, you will go to heaven. You will be happy,” I reply, holding the spoonful of pureed spinach to her lips. “Tell me about your son, Tobias.”

And so Lara begins, the same story of Tobias, of his obedience and intelligence, which I have heard over and over again for the past year. The son whom she loves, whose teenage portrait stands by her bedside. The son who has never visited, but whose name and memory calm Lara.

Lara is always on the lookout, especially for Alba and Mary, the two women with severe dementia who sit on both sides of her in the dining room. To find out if Alba is enjoying her meal, she will look to my co-worker Saskia to ask, “Is she eating? If she doesn’t want to, don’t force her to eat. She will eat when she is hungry.” Alba, always cheerful, smiles. Does she understand? Or is she in her usual upbeat mood? “Lara, Alba’s fine. With you watching out for her, of course she’s OK!” We giggle. These are small moments to be cherished.

In the nursing home, such moments are precious because they are accidental moments.

The residents run on stolen time. Alind, like me, a certified nursing assistant (CNA), comments, “Some of these residents are already dead before they come here.”

By “dead,” he is not referring to the degenerative effects of dementia and Alzheimer’s disease but to the sense of hopelessness and loneliness that many of the residents feel, not just because of physical pain, not just because of old age, but as a result of the isolation, the abandonment by loved ones, the anger of being caged within the walls of this institution. This banishment is hardly the ending they toiled for during their industrious youth.

By death, Alind is also referring to the many times “I’m sorry” is uttered in embarrassment and the tearful shrieks of shame that sometimes follow when they soil their clothes. This is the dying to which we, nursing home workers, bear witness every day; the death that the home is expected, somehow, to reverse.

So management tries, through bowling, through bingo and checkers, through Frank Sinatra sing-a-longs, to resurrect what has been lost to time, migration, the exigencies of the market, and the capriciousness of life. They substitute hot tea and cookies with strangers for the warmth of family and friends. Loved ones occupied by the same patterns of migration, work, ambition, ease their worries and guilt with pictures and reports of their relatives in these settings. We, the CNAs, shuffle in and out of these staged moments, to carry the residents off for toileting. The music playing in the building’s only bright and airy room is not for us, the immigrants, the lower hands, to plan for or share with the residents. Ours is a labor confined to the bathroom, to the involuntary, lower functions of the body. Instead of people of color in uniformed scrubs, white women with pretty clothes are paid more to care for the leisure-time activities of the old white people. The monotony and stress of our tasks are ours to bear alone.

The nursing home bosses freeze the occasional, carefully selected, picture-perfect moments on the front pages of their brochures, exclaiming that their facility, one of a group of Catholic homes, is, indeed, a place where “life is appreciated,” where “we care for the dignity of the human person.” In reality, they have not tried to make that possible. Under poor conditions, we have improvised for genuine human connection to exist. How we do that the bosses do not understand.

Link: Hands Off

Why are a bunch of men quitting masturbation? So they can be better men.

Traditionally, people undergo a bit of self-examination when faced with a potentially fatal rupture in their long-term relationship. Thirty-two-year-old Henry* admits that what he did was a little more extreme. “If you’d told me that I wasn’t going to masturbate for 54 days, I would have told you to fuck off,” he says.

Masturbation had been part of Henry’s daily routine since childhood. Although he remembered a scandalized babysitter who “found me trying to have sex with a chair” at age 5, Henry says he never felt shame about his habit. While he was of the opinion that a man who has a committed sexual relationship with porn was probably not going to have as successful a relationship with a woman, he had no qualms about watching it. Which he did most days.

Then, early last year and shortly before his girlfriend of two years moved to Los Angeles, Henry happened to watch a TED talk by the psychologist Philip Zimbardo called “The Demise of Guys.” It described males who “prefer the asynchronistic Internet world to the spontaneous interactions in social relationships” and therefore fail to succeed in school, work, and with women. When his girlfriend left, Henry went on to watch a TEDx talk by Gary Wilson, an anatomist and physiologist, whose lecture series, “Your Brain on Porn,” claims, among other things, that porn conditions men to want constant variety—an endless set of images and fantasies—and requires them to experience increasingly heightened stimuli to feel aroused. A related link led Henry to a community of people engaged in attempts to quit masturbation on the social news site Reddit. After reading the enthusiastic posts claiming improved virility, Henry began frequenting the site.

“The main thing was seeing people who said, ‘I feel awesome,’ ” he says. Henry did not feel awesome. He felt burned out from work and physically exhausted, and his girlfriend had just moved across the country. He had a few sexual concerns, too, though nothing serious, he insists. In his twenties, he sometimes had difficulty ejaculating during one-night stands if he had been drinking. On two separate occasions, he had not been able to get an erection. He wasn’t sure that forswearing masturbation would solve any of this, but stopping for a while seemed like “a not-difficult experiment”—far easier than giving up other things people try to quit, like caffeine or alcohol.

He also felt some responsibility for what had happened to his relationship. “When a guy feels like he’s failed with respect to a woman, that’s one of the things that causes you to examine yourself.” If he had been a better boyfriend or even a better man, he thought, perhaps his girlfriend wouldn’t have left New York.

So a month after his girlfriend moved away, and a few weeks before taking a trip to visit her, Henry went to the gym a lot. He had meditated for years, but he began to do so with more discipline and intention. He researched strategies to relieve insomnia, to avoid procrastination, and to be more conscious of his daily habits. These changes were not only for his girlfriend. “It was about cultivating a masculine energy that I wanted to apply in other parts of my life and with her,” he says.

And to help cultivate that masculine energy, he decided to quit masturbating. He erased a corner of the white board in his home office and started a tally of days, always using Roman numerals. “That way,” he says, “it would mean more.”

For those who seek fulfillment in the renunciation of benign habits, masturbation isn’t usually high on the list. It’s variously a privilege, a right, an act of political assertion, or one of the purest and most inconsequential pleasures that exist. Doctors assert that it’s healthy. Therapists recommend it. (Henry once talked to his therapist after a bad sexual encounter; she told him to masturbate. “Love yourself,” she said.)

And despite a century passing since Freud declared autoeroticism a healthy phase of childhood sexual development and Egon Schiele drew pictures of people touching themselves, masturbation has become the latest frontier in the school of self-improvement. Today’s anti-masturbation advocates deviate from anti-onanists past—that superannuated medley of Catholic ascetics, boxers, Jean-Jacques Rousseau, and Norman Mailer. Instead, the members of the current generation tend to be young, self-aware, and secular. They bolster their convictions online by quoting studies indicating that ejaculation leads to decreased testosterone and vitamin levels (a drop in zinc, specifically). They cull evidence implying that excessive porn-viewing can reduce the number of dopamine receptors. Even the occasional woman can be found quitting (although some women partake of a culture of encouragement around masturbation, everything from a direct-sales sex-toy party at a friend’s house to classes with sex educator Betty Dodson, author of Sex for One).

Link: Why an MRI costs $1,080 in the US & $280 in France

There is a simple reason health care in the United States costs more than it does anywhere else: The prices are higher.

That may sound obvious. But it is, in fact, key to understanding one of the most pressing problems facing our economy. In 2009, Americans spent $7,960 per person on health care. Our neighbors in Canada spent $4,808. The Germans spent $4,218. The French, $3,978. If we had the per-person costs of any of those countries, America’s deficits would vanish. Workers would have much more money in their pockets. Our economy would grow more quickly, as our exports would be more competitive.

There are many possible explanations for why Americans pay so much more. It could be that we’re sicker. Or that we go to the doctor more frequently. But health researchers have largely discarded these theories. As Gerard Anderson, Uwe Reinhardt, Peter Hussey and Varduhi Petrosyan put it in the title of their influential 2003 study on international health-care costs, “it’s the prices, stupid.”

As it’s difficult to get good data on prices, that paper blamed prices largely by eliminating the other possible culprits. The authors considered, for instance, the idea that Americans were simply using more health-care services, but on close inspection, found that Americans don’t see the doctor more often or stay longer in the hospital than residents of other countries. Quite the opposite, actually. We spend less time in the hospital than Germans and see the doctor less often than the Canadians.

“The United States spends more on health care than any of the other OECD countries spend, without providing more services than the other countries do,” they concluded. “This suggests that the difference in spending is mostly attributable to higher prices of goods and services.”

On Friday, the International Federation of Health Plans — a global insurance trade association that includes more than 100 insurers in 25 countries — released more direct evidence. It surveyed its members on the prices paid for 23 medical services and products in different countries, asking after everything from a routine doctor’s visit to a dose of Lipitor to coronary bypass surgery. And in 22 of 23 cases, Americans are paying higher prices than residents of other developed countries. Usually, we’re paying quite a bit more. The exception is cataract surgery, which appears to be costlier in Switzerland, though cheaper everywhere else.

Prices don’t explain all of the difference between America and other countries. But they do explain a big chunk of it. The question, of course, is why Americans pay such high prices — and why we haven’t done anything about it.

“Other countries negotiate very aggressively with the providers and set rates that are much lower than we do,” Anderson says. They do this in one of two ways. In countries such as Canada and Britain, prices are set by the government. In others, such as Germany and Japan, they’re set by providers and insurers sitting in a room and coming to an agreement, with the government stepping in to set prices if they fail.

Health care is an unusual product in that it is difficult, and sometimes impossible, for the customer to say “no.” In certain cases, the customer is passed out, or otherwise incapable of making decisions about her care, and the decisions are made by providers whose mandate is, correctly, to save lives rather than money.

In America, Medicare and Medicaid negotiate prices on behalf of their tens of millions of members and, not coincidentally, purchase care at a substantial markdown from the commercial average. But outside that, it’s a free-for-all. Providers largely charge what they can get away with, often offering different prices to different insurers, and an even higher price to the uninsured.

In other cases, there is more time for loved ones to consider costs, but little emotional space to do so — no one wants to think there was something more they could have done to save their parent or child. It is not like buying a television, where you can easily comparison shop and walk out of the store, and even forgo the purchase if it’s too expensive. And imagine what you would pay for a television if the salesmen at Best Buy knew that you couldn’t leave without making a purchase.

“In my view, health is a business in the United States in quite a different way than it is elsewhere,” says Tom Sackville, who served in Margaret Thatcher’s government and now directs the IFHP. “It’s very much something people make money out of. There isn’t too much embarrassment about that compared to Europe and elsewhere.”

The result is that, unlike in other countries, sellers of health-care services in America have considerable power to set prices, and so they set them quite high. Two of the five most profitable industries in the United States — the pharmaceuticals industry and the medical device industry — sell health care. With margins of almost 20 percent, they beat out even the financial sector for sheer profitability.

Link: The Extraordinary Science of Addictive Junk Food

The public and the food companies have known for decades now — or at the very least since this meeting — that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.

Link: Scott and Scurvy

How the cure for scurvy, discovered in 1747, had been forgotten by the time of Scott’s expedition to the Antarctic in 1911.

Recently I have been re-reading one of my favorite books, The Worst Journey in the World, an account of Robert Falcon Scott’s 1911 expedition to the South Pole. I can’t do the book justice in a summary, other than recommend that you drop everything and read it, but there is one detail that particularly baffled me the first time through, and that I resolved to understand better once I could stand to put the book down long enough.

Writing about the first winter the men spent on the ice, Cherry-Garrard casually mentions an astonishing lecture on scurvy by one of the expedition’s doctors:

Atkinson inclined to Almroth Wright’s theory that scurvy is due to an acid intoxication of the blood caused by bacteria…
There was little scurvy in Nelson’s days; but the reason is not clear, since, according to modern research, lime-juice only helps to prevent it. We had, at Cape Evans, a salt of sodium to be used to alkalize the blood as an experiment, if necessity arose. Darkness, cold, and hard work are in Atkinson’s opinion important causes of scurvy.

Now, I had been taught in school that scurvy had been conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. From that point on, we were told, the Royal Navy had required a daily dose of lime juice to be mixed in with sailors’ grog, and scurvy ceased to be a problem on long ocean voyages.

But here was a Royal Navy surgeon in 1911 apparently ignorant of what caused the disease, or how to cure it. Somehow a highly-trained group of scientists at the start of the 20th century knew less about scurvy than the average sea captain in Napoleonic times. Scott left a base abundantly stocked with fresh meat, fruits, apples, and lime juice, and headed out on the ice for five months with no protection against scurvy, all the while confident he was not at risk. What happened?

By all accounts scurvy is a horrible disease. Scott, who has reason to know, gives a succinct description:

The symptoms of scurvy do not necessarily occur in a regular order, but generally the first sign is an inflamed, swollen condition of the gums. The whitish pink tinge next the teeth is replaced by an angry red; as the disease gains ground the gums become more spongy and turn to a purplish colour, the teeth become loose and the gums sore. Spots appear on the legs, and pain is felt in old wounds and bruises; later, from a slight oedema, the legs, and then the arms, swell to a great size and become blackened behind the joints. After this the patient is soon incapacitated, and the last horrible stages of the disease set in, from which death is a merciful release.

One of the most striking features of the disease is the disproportion between its severity and the simplicity of the cure. Today we know that scurvy is due solely to a deficiency in vitamin C, a compound essential to metabolism that the human body must obtain from food. Scurvy is rapidly and completely cured by restoring vitamin C into the diet.

Except for the nature of vitamin C, eighteenth century physicians knew this too. But in the second half of the nineteenth century, the cure for scurvy was lost. The story of how this happened is a striking demonstration of the problem of induction, and how progress in one field of study can lead to unintended steps backward in another.

An unfortunate series of accidents conspired with advances in technology to discredit the cure for scurvy. What had been a simple dietary deficiency became a subtle and unpredictable disease that could strike without warning. Over the course of fifty years, scurvy would return to torment not just Polar explorers, but thousands of infants born into wealthy European and American homes.


Link: The Sheer Terror of Syphilis (as seen in 1930s posters)

A new hypothesis from economist Andrew Francis argues that the terror of syphilis was so great among US residents that the sexual revolution of the 1960s simply wasn’t possible without getting the dreaded disease under control first. In his view, the development of effective treatments—most notably, penicillin—had a more profound effect on culture than even birth control measures.

This may be hard to grasp at first, since the fear of syphilis has fallen off so dramatically today. But there’s an easy way to transport yourself back in time 70 years or so, just before the rise of common antibiotics, to get a sense for life in a world where infectious diseases could prove so much more difficult to control. Thanks to the Work Projects Administration (WPA), a federal initiative in the late 1930s and early 1940s that put hundreds of thousands of Americans to work on public projects, we have an incredible visual archive of life at the time: 2,000 posters created by government-employed artists.

A surprising number of them relate to syphilis; indeed, it’s the largest public health issue addressed by the posters, many of which are now archived at the Library of Congress and available online. The posters are alternately terrifying, paternalistic, comforting, and informative, but they are never uninteresting.

Link: Lecture to the Oxford Farming Conference on how learning science made Mark Lynas reconsider his stance on GM foods

I want to start with some apologies. For the record, here and upfront, I apologise for having spent several years ripping up GM crops. I am also sorry that I helped to start the anti-GM movement back in the mid 1990s, and that I thereby assisted in demonising an important technological option which can be used to benefit the environment.

As an environmentalist, and someone who believes that everyone in this world has a right to a healthy and nutritious diet of their choosing, I could not have chosen a more counter-productive path. I now regret it completely.

So I guess you’ll be wondering – what happened between 1995 and now that made me not only change my mind but come here and admit it? Well, the answer is fairly simple: I discovered science, and in the process I hope I became a better environmentalist.

When I first heard about Monsanto’s GM soya I knew exactly what I thought. Here was a big American corporation with a nasty track record, putting something new and experimental into our food without telling us. Mixing genes between species seemed to be about as unnatural as you can get – here was humankind acquiring too much technological power; something was bound to go horribly wrong. These genes would spread like some kind of living pollution. It was the stuff of nightmares.

These fears spread like wildfire, and within a few years GM was essentially banned in Europe, and our worries were exported by NGOs like Greenpeace and Friends of the Earth to Africa, India and the rest of Asia, where GM is still banned today. This was the most successful campaign I have ever been involved with.

This was also explicitly an anti-science movement. We employed a lot of imagery about scientists in their labs cackling demonically as they tinkered with the very building blocks of life. Hence the Frankenstein food tag – this absolutely was about deep-seated fears of scientific powers being used secretly for unnatural ends. What we didn’t realise at the time was that the real Frankenstein’s monster was not GM technology, but our reaction against it.

For me this anti-science environmentalism became increasingly inconsistent with my pro-science environmentalism with regard to climate change. I published my first book on global warming in 2004, and I was determined to make it scientifically credible rather than just a collection of anecdotes.

So I had to back up the story of my trip to Alaska with satellite data on sea ice, and I had to justify my pictures of disappearing glaciers in the Andes with long-term records of mass balance of mountain glaciers. That meant I had to learn how to read scientific papers, understand basic statistics and become literate in very different fields from oceanography to paleoclimate, none of which my degree in politics and modern history helped me with a great deal.

I found myself arguing constantly with people who I considered to be incorrigibly anti-science, because they wouldn’t listen to the climatologists and denied the scientific reality of climate change. So I lectured them about the value of peer-review, about the importance of scientific consensus and how the only facts that mattered were the ones published in the most distinguished scholarly journals.

My second climate book, Six Degrees, was so sciency that it even won the Royal Society science books prize, and climate scientists I had become friendly with would joke that I knew more about the subject than them. And yet, incredibly, at this time in 2008 I was still penning screeds in the Guardian attacking the science of GM – even though I had done no academic research on the topic, and had a pretty limited personal understanding. I don’t think I’d ever read a peer-reviewed paper on biotechnology or plant science even at this late stage.

Obviously this contradiction was untenable. What really threw me were some of the comments underneath my final anti-GM Guardian article. In particular one critic said to me: so you’re opposed to GM on the basis that it is marketed by big corporations. Are you also opposed to the wheel because it is marketed by the big auto companies?

So I did some reading. And I discovered that one by one my cherished beliefs about GM turned out to be little more than green urban myths.

I’d assumed that GM would increase the use of chemicals. It turned out that pest-resistant cotton and maize needed less insecticide.

I’d assumed that GM benefited only the big companies. It turned out that billions of dollars of benefits were accruing to farmers needing fewer inputs.

I’d assumed that Terminator Technology was robbing farmers of the right to save seed. It turned out that hybrids did that long ago, and that Terminator never happened.

I’d assumed that no-one wanted GM. Actually what happened was that Bt cotton was pirated into India and Roundup Ready soya into Brazil because farmers were so eager to use them.

I’d assumed that GM was dangerous. It turned out that it was safer and more precise than conventional breeding using mutagenesis for example; GM just moves a couple of genes, whereas conventional breeding mucks about with the entire genome in a trial and error way.

But what about mixing genes between unrelated species? The fish and the tomato? Turns out viruses do that all the time, as do plants and insects and even us – it’s called gene flow.

The problem with genetically modified foods is not really the genetic modification; it’s the corporate ownership of those modifications and the patents on life.

Link: Death at Yosemite: The Story Behind Last Summer's Hantavirus Outbreak

On December 10, Yosemite National Park began demolishing 91 tent cabins in Curry Village, a rustic encampment of 408 canvas-sided cabins jammed into a pine-and-cedar glade near the sloping shoulders of Half Dome. It was here that an outbreak of hantavirus began last summer, infecting at least 10 people and killing three.

But on Sunday, June 10, 2012, the campground seemed idyllic. That weekend held all the promise of early summer. The Curry Village swimming pool was open. The smell of hot dogs and nachos curled out of the snack bar. The sun bounced off the face of Glacier Point. Kids in “Go Climb a Rock” T-shirts shouted and chased each other on bikes.

Sometime that day, a 49-year-old woman from the Los Angeles area arrived at Curry Village’s front desk, a plain wood-floor office that’s often cacophonous with the sound of staffers checking guests in and out. A clerk handed her a key to one of the 91 “signature tent cabins” that opened three years ago—the “new 900s” as they were collectively known. Unlike the older cabins, which are sided with single-ply vinyl-coated canvas, the signature cabins boasted double-wall plywood construction and propane heaters, making them warmer and quieter than the older units.

Off she went, this Southern California lady, to enjoy her Yosemite vacation. We’ll call her Visitor One.

About the same time, another guest checked into Curry Village. He was a 36-year-old man from Alameda County, California, which encompasses Berkeley, Oakland, and the East Bay region. He was given the key to a cabin close to Visitor One’s. He dropped off his things and went about his business. We’ll call him Visitor Two.

We don’t know exactly how Visitors One and Two spent their four days in the park. Medical confidentiality laws forbid public-health officials from releasing their names, and they and their families have chosen to keep their stories private. Maybe they hiked to the top of Half Dome or enjoyed the giant sequoias of the Mariposa Grove. By the following Wednesday, June 13, both visitors had checked out of their Curry Village tent cabins and left the park.

Around Yosemite the summer unfolded quietly. The search-and-rescue team went out on minor events: an ankle fracture on the Panorama Trail, a fallen hiker on the Half Dome cable route. Rangers kept a wary eye on the Cascade Fire, a lightning-sparked wilderness blaze that smoldered through a red fir forest.

Then, in late June, Visitor One fell ill. She might have felt like she had the flu: chills, muscle aches, fever, headache, dizziness, fatigue. The flu goes away after a few days. This didn’t. We do know that, back home, she went to see her doctor. When presented with Visitor One’s symptoms, most physicians would have dismissed them as the flu or, at worst, low-level pneumonia. Her doctor didn’t. They talked about what she might have picked up and where. She mentioned her Yosemite trip. The doctor took the unusual step of calling Charles Mosher, a public-health officer for Mariposa County, which encompasses Yosemite, and asking if there were any known hantavirus cases in the area. “Based on her history and symptoms, [hantavirus] was a definite possibility,” Mosher recalled, so he and Visitor One’s doctor agreed that starting treatment for the virus while awaiting lab confirmation was the prudent way to go.

That was, given the circumstance, about the worst thing Visitor One could hear.

Link: The Cold Hard Facts of Freezing to Death

When your Jeep spins lazily off the mountain road and slams backward into a snowbank, you don’t worry immediately about the cold. Your first thought is that you’ve just dented your bumper. Your second is that you’ve failed to bring a shovel. Your third is that you’ll be late for dinner. Friends are expecting you at their cabin around eight for a moonlight ski, a late dinner, a sauna. Nothing can keep you from that.

Driving out of town, defroster roaring, you barely noted the bank thermometer on the town square: minus 27 degrees at 6:36. The radio weather report warned of a deep mass of arctic air settling over the region. The man who took your money at the Conoco station shook his head at the register and said he wouldn’t be going anywhere tonight if he were you. You smiled. A little chill never hurt anybody with enough fleece and a good four-wheel-drive.

But now you’re stuck. Jamming the gearshift into low, you try to muscle out of the drift. The tires whine on ice-slicked snow as headlights dance on the curtain of frosted firs across the road. Shoving the lever back into park, you shoulder open the door and step from your heated capsule. Cold slaps your naked face, squeezes tears from your eyes.

You check your watch: 7:18. You consult your map: A thin, switchbacking line snakes up the mountain to the penciled square that marks the cabin.

Breath rolls from you in short frosted puffs. The Jeep lies cocked sideways in the snowbank like an empty turtle shell. You think of firelight and saunas and warm food and wine. You look again at the map. It’s maybe five or six miles more to that penciled square. You run that far every day before breakfast. You’ll just put on your skis. No problem.

There is no precise core temperature at which the human body perishes from cold. At Dachau’s cold-water immersion baths, Nazi doctors calculated death to arrive at around 77 degrees Fahrenheit. The lowest recorded core temperature in a surviving adult is 60.8 degrees. For a child it’s lower: In 1994, a two-year-old girl in Saskatchewan wandered out of her house into a minus-40 night. She was found near her doorstep the next morning, limbs frozen solid, her core temperature 57 degrees. She lived.

Others are less fortunate, even in much milder conditions. One of Europe’s worst weather disasters occurred during a 1964 competitive walk on a windy, rainy English moor; three of the racers died from hypothermia, though temperatures never fell below freezing and ranged as high as 45.

But for all scientists and statisticians now know of freezing and its physiology, no one can yet predict exactly how quickly and in whom hypothermia will strike—and whether it will kill when it does. The cold remains a mystery, more prone to fell men than women, more lethal to the thin and well muscled than to those with avoirdupois, and least forgiving to the arrogant and the unaware.

The process begins even before you leave the car, when you remove your gloves to squeeze a loose bail back into one of your ski bindings. The freezing metal bites your flesh. Your skin temperature drops.

Within a few seconds, the palms of your hands are a chilly, painful 60 degrees. Instinctively, the web of surface capillaries on your hands constricts, sending blood coursing away from your skin and deeper into your torso. Your body is allowing your fingers to chill in order to keep its vital organs warm.

You replace your gloves, noticing only that your fingers have numbed slightly. Then you kick boots into bindings and start up the road.

Were you a Norwegian fisherman or Inuit hunter, both of whom frequently work gloveless in the cold, your chilled hands would open their surface capillaries periodically to allow surges of warm blood to pass into them and maintain their flexibility. This phenomenon, known as the hunter’s response, can elevate a 35-degree skin temperature to 50 degrees within seven or eight minutes.

Other human adaptations to the cold are more mysterious. Tibetan Buddhist monks can raise the skin temperature of their hands and feet by 15 degrees through meditation. Australian aborigines, who once slept on the ground, unclothed, on near-freezing nights, would slip into a light hypothermic state, suppressing shivering until the rising sun rewarmed them.

You have no such defenses, having spent your days at a keyboard in a climate-controlled office. Only after about ten minutes of hard climbing, as your body temperature rises, does blood start seeping back into your fingers. Sweat trickles down your sternum and spine.

By now you’ve left the road and decided to shortcut up the forested mountainside to the road’s next switchback. Treading slowly through deep, soft snow as the full moon hefts over a spiny ridgetop, throwing silvery bands of moonlight and shadow, you think your friends were right: It’s a beautiful night for skiing—though you admit, feeling the minus-30 air bite at your face, it’s also cold.

After an hour, there’s still no sign of the switchback, and you’ve begun to worry. You pause to check the map. At this moment, your core temperature reaches its high: 100.8. Climbing in deep snow, you’ve generated nearly ten times as much body heat as you do when you are resting.