Sunshine Recorder

Link: From Miasma to Ebola: The History of Racist Moral Panic Over Disease

On October 1st, the New York Times published a photograph of a four-year-old girl in Sierra Leone. In the photograph, the anonymous little girl lies on a floor covered with urine and vomit, one arm tucked underneath her head, the other wrapped around her small stomach. Her eyes are glassy, returning the photographer’s gaze. The photograph is tightly focused on her figure, but in the background the viewer can make out crude vials to catch bodily fluids and an out-of-focus corpse awaiting disposal.

The photograph, by Samuel Aranda, accompanied a story headlined “A Hospital From Hell, in a City Swamped by Ebola.” Within it, the Times reporter verbally re-paints this hellish landscape, where the four-year-old lies “on the floor in urine, motionless, bleeding from her mouth, her eyes open.” Where she will probably die amidst “pools of patients’ bodily fluids,” “foul-smelling hospital wards,” “pools of infectious waste,” all overseen by an undertrained medical staff “wearing merely bluejeans” and “not wearing gloves.”

Aranda’s photograph is in stark contrast to the images of white Ebola patients that have emerged from the United States and Spain. In these images the patients and their doctors are almost completely hidden; wrapped in hazmat suits and shrouded from public view, their identities are protected. The suffering is invisible, as is any stench of bodily fluids: these photographs are meant to reassure Westerners that sanitation will protect us, that contagion is contained.

Pernicious undertones lurk in these parallel representations of Ebola, metaphors that encode histories of nationalism and narratives of disease. African illness is represented as a suffering child, debased in its own disease-ridden waste; like the continent, it is infantile, dirty and primitive. Yet when the same disease is grafted onto the bodies of Americans and Europeans, it morphs into a heroic narrative: one of bold doctors and priests struck down, of experimental serums, of hazmat suits and the mastery of modern technology over contaminating, foreign disease. These parallel representations work on a series of simple, historic dualisms: black and white, good and evil, clean and unclean.

The Western medical discourse on Africa has never been particularly subtle: the continent is often depicted as an undivided repository of degeneration. Comparing the representations of disease in Africa and in the West, you can hear the whispers of an underlying moral panic: a sense that Africa, and its bodies, are uncontainable. The discussion around Ebola has already elicited—almost entirely from Tea Party Republicans—the explicit idea that American borders are too porous and that all manner of perceived primitiveness might infect the West.

And indeed, with the history of American and European panic over regulating foreign disease comes a history of regulating the perception of filth from beyond our borders, a history of policing non-white bodies that have signified some unclean toxicity.

+++

If the history of modernity can, as Dominique Laporte suggests in his genealogical meditation History of Shit, be written as a triumph of cleanliness over bodily refuse, then so too can the history of the European colonization of Africa and India. The sanitary crusade of the nineteenth century is central to the violent project of empire. Western medicine, with its emphasis on personal hygiene, functioned (and in some arenas still functions) as colonialism’s benevolent cover—an acknowledgment that, while empire was about profit at all costs, it could also conceal this motive slightly by concerning itself with bettering the health of debased bodies.

The bureaucratic annals of colonialism are filled with reports on the unsanitary conditions of life and unhygienic practices of natives. Dr. Thomas R. Marshall, an American in the Philippines, wrote of the “promiscuous defecation” of the “Filipino people.” An 1882 British report, “Indian Habits,” observed that “The people of India seem to be very much in the condition of children. They must be made clean by compulsion until they arrive at that degree of moral education when dirt shall become hateful to them, and then they will keep themselves clean for their own sakes.” Dirtiness and defecation indicated their primitiveness and savagery; they reaffirmed the white body’s privileged position and claim to moral and medical modernity.

This intense focus on hygiene emerged from an old medical doctrine known as miasma theory. According to the theory, illness was the direct result of the polluting emanations of filth: sewer gas, garbage fumes and stenches that permeated air and water, creating disease in the process. Filth, however, had many incarnations. It could be literal, or it could serve as a catch-all metaphorical designation for anything that made people uncomfortable about race, gender and sexuality. (This idea underpins phrases still in use today, for example: a “dirty whore.”)

So the mission of hygiene was simultaneously a moral and a medical imperative. And it was this fervent belief in miasmas that made colonial administrations deeply interested in the bodily fluids of bodies of color; as Lord Wellesley, the British governor of India, briefly noted in an 1803 report, “Indians defecate everywhere.”

But if colonial governments exercised concern over what they believed to be the contaminated cultures of native populations, it was more likely the result of panic over the health of their own officials and soldiers. “The white man’s grave,” as one nineteenth-century British colonist called Sierra Leone, was a dangerous trap of foreign disease, carried by the contagious peoples who inhabited valuable land. Their culture, like their natural resources, had to be conquered. Who better to do that than scientifically advanced westerners who valued cleanliness and life?

Miasma theory proved a powerful science through which to construct “the African” or “the Indian.” Long after its late-nineteenth-century demise and subsequent replacement with an epidemiological understanding of contagion, the metaphors it produced endured. The move from miasma theories to germ theories simply added pathological depth to older social resentments. Minorities might look clean: but who knew what invisible, contagious threats lurked within?

These stereotypes showed up everywhere. Take, for example, Victorian soap advertisements: ordinary markers of domesticity that, according to feminist scholar Anne McClintock, “persuasively mediated the Victorian poetics of racial hygiene and imperial progress.” In a Pears’ Soap advertisement from around 1882, race is linked to dirtiness and ignorance: blacks could become clean (here, actually, white) if they just bathed; they barely knew how to clean themselves; they needed a white man to teach them cleanliness, civilization, culture, etc.

Such metaphors proved successful for Pears’ Soap, and the company returned to triumphant colonial imagery time and time again. In another ad from the 1890s, a naval commander (likely a stand-in for Admiral George Dewey) is shown washing his hands; he is flanked by ships that import soap and a missionary anointing his dirty, savage subject with hygiene. Underneath the images, the text reads “The first step toward lightening The White Man’s Burden is through teaching the virtues of cleanliness.”

But if disease was the result of a certain kind of self-imposed debasement—of a choice to resist modernity—then the moral meaning of dirt was flexible. It could be external, a black body in need of hygienic instruction, but it could also be internal, the naturally occurring state of the nonwestern body. Miasma theory gave way to eugenics; filth was biologically determined, inside and out. 

+++

If filth provided European imperialism with a set of legible metaphors about disease and race, then it also gave a newly-forming United States racial principles on which to build a national identity. With institutionalized slavery and a relatively open immigration policy, America, more so than Europe, needed those metaphors to preserve the cultural and moral superiority of a particular kind of whiteness (a Teutonic Northern European whiteness). In the late-nineteenth and early-twentieth centuries, contagious disease was associated with new immigrant groups who were perceived as harbingers of death.

Nativist groups warned the public of disease that would infect the nation’s growing urban areas, rationalizing their prejudice with arguments about public health. In the 1830s, poor Irish were said to bring cholera; at the turn of the century, tuberculosis was dubbed the “tailors’ disease” and associated with the Jewish population; Italians for decades were seen as bearers of polio.

To protect against immigrant germs, the United States passed the Immigration Act of 1891, an act that excluded those with criminal records, polygamists, and prostitutes, as well as those with “loathsome or contagious disease.” The Immigration Act made clear that the immigrant carried the filth of both moral degradation and disease. The definition of “loathsome or contagious disease” was flexible and ever-changing, including everything from transmissible disease to insanity, senility, varicose veins, and poor eyesight.

The truth, of course, was that immigrant groups were as healthy as acceptably white Americans. According to contemporary legal scholars, less than three percent of the total number of immigrants seeking entry were rejected for medical reasons; the vast majority of those excluded were Chinese who, unlike their white counterparts, could be rejected for ringworm and “the appearance of mongolism.” Yet despite these facts, white Americans still clamored to close the borders entirely. An 1888 federal report calling for even more immigration restriction warned of the “sewage of vice and crime and physical weakness” that washed ashore from Europe and the “nameless abominations” coming from Asia.

The language of the 1888 report is similar to the current, persistent calls to close our borders. Both rely on the intertwining metaphors of illness and filth. This, of course, is how the metaphor of disease works. Susan Sontag has written of the hierarchy of disease and death in a western context. She notes that there are brave and beautiful ways to die, diseases that afflict and kill, yet reveal a beautiful, meaningful self. And the diseased body of white Europeans, particularly the wealthy, is rarely depicted as filthy or debased. Rather, that body becomes the source of poetic meaning; think of Henry James’s The Wings of the Dove (1902) or Thomas Mann’s The Magic Mountain (1927), a novel that describes disease as “nothing but a disguised manifestation of the power of love; and all disease is only love transformed.”

Perhaps nowhere was the power of a disease metaphor more evident than in San Francisco in 1900. When a Chinese immigrant was found dead in the basement of a Chinatown hotel, rumors spread immediately that he had died of the plague. Before the diagnosis was confirmed, the mayor quarantined Chinatown, preventing anyone of Asian origin from leaving the district while allowing whites to move around freely. Once the diagnosis was confirmed, the city’s white residents panicked. The Board of Health demanded that the entire district be doused in lye and bichloride of mercury, that clothes and furniture be burned, and that every person of Asian descent be vaccinated with something called Haffkine’s serum, which had not yet been approved for human use. Newspapers called for Chinatown to be emptied of its residents and burned to the ground.

The measures the city took were in direct opposition to those recommended by health officials. Quarantining an entire population and dousing them with household cleaners made the outbreak worse. The outbreak of plague ended when state health officials stepped in, but the panic served its purpose: it reaffirmed old suspicions that Chinese immigrants were dangerous, and that their foreign lifestyle could easily soil San Francisco’s urban modernity.

The plague broke out again in San Francisco after the 1906 earthquake, but this time it wasn’t in Chinatown; it emerged from an overwhelmingly white city district. There was no quarantine, no dousing of the city with lye. Rather, the city spent two million dollars to provide free and sophisticated medical care, and the death toll was much smaller.

+++

Returning to the four-year-old Ebola patient in the New York Times: The photograph of a dying little girl is simultaneously a site of sympathy and a powerful reminder of the contagion she carries. As she dies from a vicious, seemingly unstoppable disease, the photograph is meant both to warn and to console. Look at this cesspool, these unsanitary conditions; she is foreign, Ebola is foreign. This scene is by no means the only scene of healthcare in African countries, but it is often the only scene we see.

This sense of foreignness underlies the way we see Ebola in the United States. Newsweek repeated these ugly stereotypes in an August cover story that conspiratorially suggested that illegally imported African “bushmeat” could carry Ebola across our borders (a story that has been thoroughly debunked).

And when the Centers for Disease Control and Prevention confirmed that Ebola had crossed into the United States, carried on the body of a black man from Liberia, the threat of infection was suddenly perceived as quite real. Rumors flew that Ebola had reached Idaho, then Miami, that the disease was airborne, and that a sick passenger on a flight to New Jersey was infected. Television and radio hosts quizzed CDC physicians about the likelihood of infection (“little to none,” they seemed to answer again and again), asked again about the safety of our too-porous borders, and demanded that our government outline a plan of action to ward off infectious, black bodies.

Thomas Duncan, that Ebola patient, is now dead. He sought treatment five days after returning to the US, telling the hospital that he had traveled to Liberia and was experiencing a fever and abdominal pain. With no insurance, he was turned away. A Washington Post story reports that some people close to him feel that he was “not properly treated because he was not American.”

"He is a Liberian man," Massa Lloyd, a close friend of Troh, said Wednesday. "The family feels he wasn’t getting the right treatment because he was an African man. They feel America is fighting only for the white man, not the black man."

The onset of epidemic disease has always incited prejudice, permitting the stereotyping of foreigners, of people of color, as inherently closer to disease: more deserving of death from it. The “always them, never us” conception of Ebola is a major factor in the lack of a vaccine, which the NIH has been researching, with dwindling funding, since 2001. The new presence of Ebola in the US may change the way we approach the virus, but maybe not; maybe it will always seem too foreign for us to handle as the global and communal problem that it is; maybe, disastrously, this outbreak of Ebola will be no different from those that came before.

Link: Cycling as an Eschatological Activity

I’ve been cycling a lot lately: the spandex, sunglasses, and shaved-legs kind, yes, but also the get-around-town kind. To the coffee shop, to the store, to school—if I’m going someplace by myself I do my best to get there by bike.

One particular stretch I ride regularly has newly striped bike lanes—lanes that didn’t come without protest from a handful of residents on the busy street. Essentially, the question came down to whether streets are for cars and for bikes or just for cars.  The residents of the street thought that the street and its wide shoulder should be for the driving and parking of cars.  The many bike commuters who follow that street to get to the metro and Old Town Alexandria thought that the street should be shared by both.  The city sided with the cyclists and my rides are a little less harrowing as a result.  The conflict, however, raised a theological point.

The question of whether roads are for cars or for bikes or for both reminds me of St. Augustine’s City of God.  It’s a massive book, but at its core is the idea that there are two overlapping cities—the City of God and the City of Man.  The City of God is a city founded on peace and whose end is peace.  It is oriented toward the final coming of God’s kingdom.  The City of Man is a city that was founded on violence and is animated by pride, power, and greed—what peace it has is based on violence.  The residents of both cities interact in commerce, in space, etc., but at the end of the day they are working toward different ends.  Only one of those cities really has a future.

What is at play on the streets, with bikes and cars and buses, are essentially two cities, two different realities with differing values.  Sometimes the two overlap, but at the end of the day, the cyclists and the drivers are using the roads toward different ends.  Of course many people, like myself, use the roads in both modes.  I drive and I bike, but it wouldn’t take me long to choose if I could only have one.  In fact the only reason I keep driving my car in many instances is because of other cars—if I could safely ride with my two-year-old on the main streets of the city I would do it.

When I drive a car I am participating in a fallen reality—the oil economy, the speed economy, the death economy.  It is the car that has made the suburb possible; it is the car that is responsible for over 30,000 deaths in the U.S. each year—the cost of velocity more than anything else.  Transportation—cars, buses, trucks—contributes 30% of the total carbon emissions for the U.S. each year.  I cannot imagine a place for cars in the coming Kingdom of God.

Bikes, however, are deeply sustainable.  We could go on riding them forever.  They can go fast, yes, but fast on a bike goes barely above a school zone speed limit.  They are healthy for both our bodies and the earth.  I hope to be riding bikes now and forever, even in the coming Kingdom of God.  When I ride my bike, even on the hard days of heat or cold, even on the days when I have to pull out my rain gear—I am doing so as an eschatological act.  I am living into the City of God—its values, its ends.

Theologian Stanley Hauerwas has often reminded us that the reason he is a pacifist isn’t because he thinks it will work better than war to bring the world peace or relieve the suffering of innocents.  He is a pacifist because he believes the call of Christ does not allow him to be otherwise.  To be a pacifist now is to perform an eschatological act—it is a commitment to live into the kingdom that is coming rather than the kingdom that is fading away.

When I bike I am living into something more hopeful and joyful, slower and more human than the world of cars and oil and traffic.  It is a small act of embrace of the world as it should be and will be.  With each pedal stroke I am getting my legs ready for the streets of the Kingdom that is breaking into the world.

(Source: gospelofthekingdom, via itsthom)

Link: The Cultural History of Pain

Speculation about the degree to which human beings and animals experienced pain has a long history.

On 16 April 1872, a woman signing herself “An Earnest Englishwoman” published a letter in the Times. It was entitled “Are Women Animals?”

She was clearly very angry. Her fury had been fuelled by recent court cases in which a man who had “coolly knocked out” the eye of his mistress and another man who had killed his wife were imprisoned for just a few months each. In contrast, a man who had stolen a watch was punished severely, sentenced to not only seven years’ penal servitude, but also 40 lashes of the “cat”. She noted that although some people might believe that a watch was an “object of greater value than the eye of a mistress or the life of a wife”, she was asking readers to remember that “the inanimate watch does not suffer”. It must cause acute agony for any “living creature, endowed with nerves and muscles, to be blinded or crushed to death”.

Indeed, she continued, she had “read of heavier sentences being inflicted for cruelty towards that – may I venture to say? – lower creation”. She pleaded for women to be subsumed under legislation forbidding cruelty to animals, because that would improve their position in law.

Speculation about the degree to which human beings and animals experienced pain has a long history, but “An Earnest Englishwoman” was writing at a very important time in these debates. Charles Darwin’s Descent of Man had been published the year before her letter, and his Expression of the Emotions in Man and Animals appeared in 1872. Both Darwin and “An Earnest Englishwoman” were addressing a central question that had intrigued theologians, scientists, philosophers, psychologists and other social commentators for centuries: how can we know how other people feel?

The reason this question was so important was that many people didn’t believe that all human beings (let alone non-human animals) were equally capable of suffering. Scientists and philosophers pointed to the existence of a hierarchy of sentience. Belief in a great “Chain of Being”, according to which everything in the universe was ranked from the highest to the lowest, is a fundamental tenet of western philosophy. One aspect of this Chain of Being involved the perception of sensation. There was a parallel great Chain of Feeling, which placed male Europeans at one end and slaves and animals at the other.

Of course, “An Earnest Englishwoman” was using satire to argue for greater rights for women. She was not accusing men of failing to acknowledge that women were capable of experiencing pain. Indeed, that much-maligned group of Victorian women – hysterics – was believed to be exquisitely sensitive to noxious stimuli. Rather, she was drawing attention to the way a lack of respect for the suffering of some people had a profound impact on their status in society. If the suffering of women were treated as seriously as the suffering of animals, she insisted, women’s lives would be better.

Although she does not discuss it in her short letter, the relationship between social status and perceptions of sentience was much more fraught for other groups within British and American societies. In particular, people who had been placed at the “lower” end of the Chain of Feeling paid an extremely high price for prejudices about their “inability” to feel. In many white middle-class and upper-class circles, slaves and “savages”, for instance, were routinely depicted as possessing a limited capacity to experience pain, a biological “fact” that conveniently diminished any culpability among their so-called superiors for acts of abuse inflicted on them. Although the author of Practical Rules for the Management and Medical Treatment of Negro Slaves, in the Sugar Colonies (1811) conceded that “the knife of the anatomist … has never been able to detect” anatomical differences between slaves and their white masters, he nevertheless contended that slaves were better “able to endure, with few expressions of pain, the accidents of nature”. This was providential indeed, given that they were subjected to so many “accidents of nature” while labouring on sugar-cane plantations.

Such beliefs were an important factor in imperial conquests. With voyeuristic curiosity, travellers and explorers often commented on what they regarded as exotic responses to pain by indigenous peoples. In Australia, newly arrived colonisers breathlessly maintained that Native Australians’ “endurance of pain” was “something marvellous”. Others used the theme as an excuse for mockery. For instance, the ability of New Zealand Maoris to bear pain was ascribed to their “vanity”. They were said to be so enamoured with European shoes that “when one of them was happy enough to become the possessor of a pair, and found that they were too small, he would not hesitate to chop off a toe or two, stanch the bleeding by covering the stump with a little hemp, and then force the feet [sic] into the boots”.

But what was it about the non-European body that allegedly rendered it less susceptible to painful stimuli? Racial sciences placed great emphasis on the development and complexity of the brain and nerves. As the author of Pain and Sympathy (1907) concluded, attempting to explain why the “savage” could “bear physical torture without shrinking”: the “higher the life, the keener is the sense of pain”.

There was also speculation that the civilising process itself had rendered European peoples more sensitive to pain. The celebrated American neurologist Silas Weir Mitchell stated in 1892 that in the “process of being civilised we have won … intensified capacity to suffer”. After all, “the savage does not feel pain as we do: nor as we examine the descending scale of life do animals seem to have the acuteness of pain-sense at which we have arrived”.

Some speculated whether the availability of anaesthetics and analgesics had an effect on people’s ability (as well as willingness) to cope with acute affliction. Writing in the 1930s, the distinguished pain surgeon René Leriche argued fervently that Europeans had become more sensitive to pain. Unlike earlier in the century, he claimed, modern patients “would not have allowed us to cut even a centimetre … without administering an anaesthetic”. This was not due to any decline of moral fibre, Leriche added: rather, it was a sign of a “nervous system differently developed, and more sensitive”.

Other physicians and scientists of the 19th and early 20th centuries wanted to complicate the picture by making a distinction between pain perception and pain reaction. But this distinction was used to denigrate “outsider” groups even further. Their alleged insensitivity to pain was proof of their humble status – yet when they did exhibit pain reactions, their sensitivity was called “exaggerated” or “hysterical” and therefore seen as more evidence of their inferiority. Such confused judgements surfaced even in clinical literature that purported to repudiate value judgements. For instance, John Finney was the first president of the American College of Surgeons. In his influential book The Significance and Effect of Pain (1914), he amiably claimed:

It does not always follow that because a patient bears what appears to be a great amount of pain with remarkable fortitude, that that individual is more deserving of credit or shows greater self-control than the one who does not; for it is a well-established fact that pain is not felt to the same degree by all individuals alike.

However, in the very same section, Finney made pejorative statements about people with a low pain threshold (they possessed a “yellow streak”, he said) and insisted that patients capable of bearing pain showed “wonderful fortitude”.

In other words, civilised, white, professional men might be exquisitely sensitive to pain but, through acts of willpower, they were capable of masking their reaction. In contrast, Finney said, the dark-skinned and the uneducated might bear “a great amount of pain with remarkable fortitude” but they did not necessarily deserve credit for it.

It was acknowledged that feeling pain was influenced by emotional and psychological states. The influence of “mental factors” on the perception of pain had been observed for centuries, especially in the context of religious torture. Agitation, ecstasy and ideological fervour were known to diminish (or even eliminate) suffering.

This peculiar aspect of pain had been explored most thoroughly in war. Military lore held that the “high excitement” of combat lessened the pain of being wounded. Even Lucretius described how when

the scythed chariots, reeking with indiscriminate slaughter, suddenly chop off the limbs … such is the quickness of the injury and the eagerness of the man’s mind that he cannot feel the pain; and because his mind is given over to the zest of battle, maimed though he be, he plunges afresh into the fray and the slaughter.

Time and again, military observers have noted how, in the heat of battle, wounded men might not feel even severe wounds. These anecdotal observations were confirmed by a systematic study carried out during the Second World War. The American physician Henry K Beecher served in combat zones on the Venafro and Cassino fronts in Italy. He was struck by how there was no necessary correlation between the seriousness of any specific wound and the men’s expressions of suffering: perhaps, he concluded, the strong emotions aroused in combat were responsible for the absence of acute pain – or the pain might also be alleviated by the knowledge that wartime wounding would release a soldier from an exceedingly dangerous environment.

Beecher’s findings were profoundly influential. As the pain researchers Harold Wolff and Stewart Wolf found in the 1950s, most people perceived pain at roughly similar intensities, but their threshold for reaction varied widely: it “depends in part upon what the sensation means to the individual in the light of his past experiences”.

Away from the battlefield, debates about the relative sensitivity of various people were not merely academic. The seriousness of suffering was calibrated according to such characterisations. Sympathy was rationed unevenly.

Myths about the lower susceptibility of certain patients to painful stimuli justified physicians prescribing fewer and less effective analgesics and anaesthetics. This was demonstrated by the historian Martin Pernick in his work on mid-19th-century hospitals. In A Calculus of Suffering (1985), Pernick showed that one-third of all major limb amputations at the Pennsylvania Hospital between 1853 and 1862 had been done without any anaesthetic, even though it was available. Distinguished surgeons such as Frank Hamilton carried out more than one-sixth of all non-military amputations on fully conscious patients.

This is not simply peculiar to earlier centuries. For instance, the belief that infants were not especially liable to experience pain (or that indications of suffering were merely reflexes) was prominent for much of the 20th century and had profound effects on their treatment. Painful procedures were routinely carried out with little, if any, anaesthetic or analgesic. Max Thorek, the author of Modern Surgical Technique (1938), claimed that “often no anaesthetic is required” when operating on young infants: indeed, “a sucker consisting of a sponge dipped in some sugar water will often suffice to calm the baby”.

As “An Earnest Englishwoman” recognised, beliefs about sentience were linked to ideas of who was considered fully human. Slaves, minority groups, the poor and others in society could also be dispossessed politically, economically and socially on the grounds that they did not feel as much as others. The “Earnest Englishwoman’s” appeal – which drew from a tradition of respect and consideration that lays emphasis on the capacity to suffer – is one that has been echoed by the oppressed and their supporters throughout the centuries.

Link: Imagining the Post-Antibiotics Future

After 85 years, antibiotics are growing impotent. So what will medicine, agriculture and everyday life look like if we lose these drugs entirely?

Predictions that we might sacrifice the antibiotic miracle have been around almost as long as the drugs themselves. Penicillin was first discovered in 1928 and battlefield casualties got the first non-experimental doses in 1943, quickly saving soldiers who had been close to death. But just two years later, the drug’s discoverer Sir Alexander Fleming warned that its benefit might not last. Accepting the 1945 Nobel Prize in Medicine, he said:

“It is not difficult to make microbes resistant to penicillin in the laboratory by exposing them to concentrations not sufficient to kill them… There is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant.”

As a biologist, Fleming knew that evolution was inevitable: sooner or later, bacteria would develop defenses against the compounds the nascent pharmaceutical industry was aiming at them. But what worried him was the possibility that misuse would speed the process up. Every inappropriate prescription and insufficient dose given in medicine would kill weak bacteria but let the strong survive. (As would the micro-dose “growth promoters” given in agriculture, which were invented a few years after Fleming spoke.) Bacteria can produce another generation in as little as twenty minutes; with tens of thousands of generations a year working out survival strategies, the organisms would soon overwhelm the potent new drugs.

Fleming’s prediction was correct. Penicillin-resistant staph emerged in 1940, while the drug was still being given to only a few patients. Tetracycline was introduced in 1950, and tetracycline-resistant Shigella emerged in 1959; erythromycin came on the market in 1953, and erythromycin-resistant strep appeared in 1968. As antibiotics became more affordable and their use increased, bacteria developed defenses more quickly. Methicillin arrived in 1960 and methicillin resistance in 1962; levofloxacin in 1996 and the first resistant cases the same year; linezolid in 2000 and resistance to it in 2001; daptomycin in 2003 and the first signs of resistance in 2004.

With antibiotics losing usefulness so quickly — and thus not making back the estimated $1 billion per drug it costs to create them — the pharmaceutical industry lost enthusiasm for making more. In 2004, there were only five new antibiotics in development, compared to more than 500 chronic-disease drugs for which resistance is not an issue — and which, unlike antibiotics, are taken for years, not days. Since then, resistant bugs have grown more numerous and, by sharing DNA with each other, have become even tougher to treat with the few drugs that remain. In 2009, and again this year, researchers in Europe and the United States sounded the alarm over an ominous form of resistance known as CRE, for which only one antibiotic still works.

Health authorities have struggled to convince the public that this is a crisis. In September, Dr. Thomas Frieden, the director of the U.S. Centers for Disease Control and Prevention, issued a blunt warning: “If we’re not careful, we will soon be in a post-antibiotic era. For some patients and some microbes, we are already there.” The chief medical officer of the United Kingdom, Dame Sally Davies — who calls antibiotic resistance as serious a threat as terrorism — recently published a book in which she imagines what might come next. She sketches a world where infection is so dangerous that anyone with even minor symptoms would be locked in confinement until they recover or die. It is a dark vision, meant to disturb. But it may actually underplay what the loss of antibiotics would mean.

In 2009, three New York physicians cared for a sixty-seven-year-old man who had major surgery and then picked up a hospital infection that was “pan-resistant” — that is, responsive to no antibiotics at all. He died fourteen days later. When his doctors related his case in a medical journal months afterward, they still sounded stunned. “It is a rarity for a physician in the developed world to have a patient die of an overwhelming infection for which there are no therapeutic options,” they said, calling the man’s death “the first instance in our clinical experience in which we had no effective treatment to offer.”

They are not the only doctors to endure that lack of options. Dr. Brad Spellberg of UCLA’s David Geffen School of Medicine became so enraged by the ineffectiveness of antibiotics that he wrote a book about it.

“Sitting with a family, trying to explain that you have nothing left to treat their dying relative — that leaves an indelible mark on you,” he says. “This is not cancer; it’s infectious disease, treatable for decades.”

As grim as they are, in-hospital deaths from resistant infections are easy to rationalize: perhaps these people were just old, already ill, different somehow from the rest of us. But deaths like this are changing medicine. To protect their own facilities, hospitals already flag incoming patients who might carry untreatable bacteria. Most of those patients come from nursing homes and “long-term acute care” (an intensive-care alternative where someone who needs a ventilator for weeks or months might stay). So many patients in those institutions carry highly resistant bacteria that hospital workers isolate them when they arrive, and fret about the danger they pose to others. As infections become yet more dangerous, the healthcare industry will be even less willing to take such risks.

Those calculations of risk extend far beyond admitting possibly contaminated patients from a nursing home. Without the protection offered by antibiotics, entire categories of medical practice would be rethought.

Many treatments require suppressing the immune system, to help destroy cancer or to keep a transplanted organ viable. That suppression makes people unusually vulnerable to infection. Antibiotics reduce the threat; without them, chemotherapy or radiation treatment would be as dangerous as the cancers they seek to cure. Dr. Michael Bell, who leads an infection-prevention division at the CDC, told me: “We deal with that risk now by loading people up with broad-spectrum antibiotics, sometimes for weeks at a stretch. But if you can’t do that, the decision to treat somebody takes on a different ethical tone. Similarly with transplantation. And severe burns are hugely susceptible to infection. Burn units would have a very, very difficult task keeping people alive.”

Doctors routinely perform procedures that carry an extraordinary infection risk unless antibiotics are used. Chief among them: any treatment that requires the construction of portals into the bloodstream and gives bacteria a direct route to the heart or brain. That rules out intensive-care medicine, with its ventilators, catheters, and ports—but also something as prosaic as kidney dialysis, which mechanically filters the blood.

Next to go: surgery, especially on sites that harbor large populations of bacteria such as the intestines and the urinary tract. Those bacteria are benign in their regular homes in the body, but introduce them into the blood, as surgery can, and infections are practically guaranteed. And then implantable devices, because bacteria can form sticky films of infection on the devices’ surfaces that can be broken down only by antibiotics.

Dr. Donald Fry, a member of the American College of Surgeons who finished medical school in 1972, says: “In my professional life, it has been breathtaking to watch what can be done with synthetic prosthetic materials: joints, vessels, heart valves. But in these operations, infection is a catastrophe.” British health economists with similar concerns recently calculated the costs of antibiotic resistance. To examine how it would affect surgery, they picked hip replacements, a common procedure in once-athletic Baby Boomers. They estimated that without antibiotics, one out of every six recipients of new hip joints would die.

Antibiotics are administered prophylactically before operations as major as open-heart surgery and as routine as Caesarean sections and prostate biopsies. Without the drugs, the risks posed by those operations, and the likelihood that physicians would perform them, will change.

“In our current malpractice environment, is a doctor going to want to do a bone marrow transplant, knowing there’s a very high rate of infection that you won’t be able to treat?” asks Dr. Louis Rice, chair of the department of medicine at Brown University’s medical school. “Plus, right now healthcare is a reasonably free-market, fee-for-service system; people are interested in doing procedures because they make money. But five or ten years from now, we’ll probably be in an environment where we get a flat sum of money to take care of patients. And we may decide that some of these procedures aren’t worth the risk.”

Link: The AIDS Granny In Exile

In the ’90s, a gynecologist named Gao Yaojie exposed the horrifying cause of an AIDS epidemic in rural China — and the ensuing cover-up — and became an enemy of the state. Now 85, she lives in New York without her family, without her friends, and without regrets.

The enormous brick fortress in West Harlem was built in the mid-1970s as a visionary housing project, a new model for an affordable, self-contained urban community. Today, on a balmy September afternoon, it is a low-income housing compound lined with security cameras, guards, and triple-locked doors. A few drunks shouting at nobody in particular linger outside. Pound for pound, though, the most dangerous person living here may just be a diminutive 85-year-old Chinese grandmother dressed in a stylish purple sweater set with black leopard spots sent by her daughter in Canada.

This is not a slum. Neither is it where you would expect to find an internationally known human-rights warrior living out her golden years. In her one-bedroom apartment, Dr. Gao Yaojie — known to many as “the AIDS Granny” — moves with great difficulty through her tidy clutter and stacks of belongings. In the small kitchen, she stirs a pot of rice and bean porridge, one of the few things she can digest. She lost most of her stomach in surgery after a suicide attempt four decades ago and suffered multiple beatings during the Cultural Revolution.

A large bed where Gao’s live-in caretaker sleeps overwhelms the living room. In Gao’s bedroom, two twin beds are piled with stacks of books, photos and quilts. Her desk is heaped with papers, medications, and yet more books. Gao’s computer is always on, often clutched to her chest as she lies working in bed.

“I left China with one thing in each hand,” Gao says to me in Chinese. “A blood-pressure cuff to monitor my high blood pressure and a USB stick with more than a thousand pictures of AIDS victims.”

Before she agreed to meet me at all, she set rules via email: There would be no discussion of China’s politics, the Communist Party’s future, or the myriad issues that concern other dissidents. These are inextricably tied to her own life, but Gao does not want to be known as a multipurpose Chinese dissident. A lifetime of looking over her shoulder for danger has left her wary. She never learned English.

“I seldom see anyone,” she says. “Many people from China are very complicated. I don’t know what kind of intentions they have. I see them as cheating to get food, drinks, and money. They don’t really do any meaningful work.”

Gao believes she is watched here, just as she was in China for so many years. Given China’s well-documented pattern of stifling critical voices abroad, it’s impossible to rule out that someone is monitoring or harassing her, even in Harlem.

Money is tight. She had a fellowship through Columbia University for her first year in the U.S. Now she gets by on private donations that cover roughly $35,000 a year in expenses, the largest of those being her rent at Riverside. She has a few teeth left and can’t afford dental work.

She spends her days in bed, sleeping, writing, researching online, and obsessively analyzing what she witnessed in China in a lifetime that bridged tremendous tumult. For hours, she clicks away on her keyboard, emailing contacts back home for information and putting final touches on her newest book. She learned to use a computer at age 69.

This will be Gao’s 27th book and the ninth to chronicle China’s AIDS epidemic, a public health catastrophe that decimated entire villages and put her on the government’s enemy list. “You wouldn’t understand the earlier books, they were too technical,” she says, flashing a near-toothless grin.

“Although I am by myself, appearing to be lonely, I am actually very busy,” she says. “I am turning 86 soon and will be gone, but I will leave these things to the future generations.”

Her unplanned journey from Henan province to Harlem began 17 years ago, six months after she retired as a gynecologist and professor at the Henan Chinese Medicine University hospital in Zhengzhou. She went from being a retired grandmother to China’s first and most famous AIDS activist, and became such a thorn in the side of the regime that she eventually fled to New York for safety, away from her family and everyone she knows.

She turns to her computer and pulls up a photo of a gravely ill woman with an incision up her abdomen. Gao did not set out to become a dissident.

“I didn’t do this because I wanted to become involved in politics,” she says. “I just saw that the AIDS patients were so miserable. They were so miserable.”

In April 1996, Gao, then 69, was called from retirement to consult on a difficult case. A 42-year-old woman, Ms. Ba, had had ovarian surgery and was not getting better: Her stomach was bloated, she had a high fever and strange lesions on her skin. She grew sicker and her doctors were stumped. After finding no routine infection or illness, Gao demanded an AIDS test for the young mother.

Gao knew from her work that AIDS had entered Henan, the heartland Chinese province. Yet her colleagues scoffed: How could a simple farmer have AIDS? China had only a handful of confirmed cases. The government said AIDS was a disease of foreigners, spread through illicit drugs and promiscuous sex.

Gao insisted on a test. The results came back; Ms. Ba had AIDS. Her husband and children tested negative, which puzzled the doctors further. The patient was not a drug addict nor a prostitute, so Gao began to investigate. She determined the source was a government blood bank — Ms. Ba’s post-surgical blood transfusion infected her with HIV. “I realized the seriousness of the problem,” Gao later wrote. “If the blood in the blood bank carried the AIDS virus, then these victims would not be a small number.”

With no treatment available, Ms. Ba died within two weeks. Her husband, Gao remembers, spread a cot on the ground in front of her tomb and slept there for weeks in mourning.

Witnessing his grief launched Gao on a relentless campaign. She began investigating AIDS in Zhengzhou and nearby villages, conducting blood tests, compiling data, and trying to educate farmers about the risks carried by blood donations and transfusions.

Over months and years, her research into the epidemic took her across much of rural China. What she found astounded her: villages with infection rates of 20, 30, 40% or more; whole communities of AIDS orphans, zero treatment options, and little awareness of what was sickening and killing a generation of farmers. Worse, the population did not know how the disease spread. The numbers of those infected and those who died remain secret; the officially released figures are almost universally believed to be far too low.

Gao had finally found the cause. “Even now, the government is lying, saying AIDS was transmitted because of drug use,” she says. “The government officials were very good at lying.”

The breadbasket of China, Henan is cut by the Yellow River and its seasonal, devastating floods. Through generations of extreme poverty, it developed a reputation as a place where people lie, cheat, and steal. In reality, rural Henan is not unlike Middle America, with its sweeping, open pastures, peaceful landscapes, and hardworking people. But amid the poor agrarian landscape, dark and deadly ideas for amassing wealth germinated. In the early 1990s, emerging from several decades of manmade and natural disasters, floods, and famine, the province’s best resource was its people: nearly 100 million living in a China operating under the notion that “to get rich is glorious.”

Among the cruelest of these schemes was the “plasma economy,” a government-backed campaign from 1991 to 1995 that encouraged farmers to sell their blood. Fearing the international AIDS epidemic and viewing its own citizens as disease-free, China banned imports of foreign blood products in 1985, just as disease experts began to understand that HIV and AIDS were transmitted through blood.

Modern medicine requires blood and, importantly, blood plasma, from which albumin is made, an injection vital after surgery and for trauma victims. Plasma is also used in medications for hemophilia and immune system disorders. And plasma is a big-money business — and a deeply controversial one — worldwide. Giving plasma is more time-consuming and painful than donating blood, so fewer people contribute for free, and it attracts people who need quick money: in the 1990s, inmates in United States prisons were pulled into plasma donation schemes; today, Mexican citizens cross into the United States to sell plasma at border-town collection stations.

Though the donors of Henan got a pittance for their blood, middlemen grew relatively wealthy on what was believed to be a pure, untainted plasma supply. Plasma traders worked to convince Chinese people traditionally opposed to giving blood — thought to be the essence of life — to sell it. Villages were festooned with red sloganeering banners: “Stick out an arm, show a vein, open your hand and make a fist, 50 kuai” (at the time, about $6), “If you want a comfortable standard of living, go sell your plasma,” and “To give plasma is an honor.”

Local officials in some places went on television, telling farmers that selling plasma would maintain healthy blood pressure. (It doesn’t.) Traders pressured families, especially women. Since females bleed every month, the cracked reasoning went, they could spare a few pints for extra income.

Though some villages were spared, often thanks to the foresight of skeptical local leaders, Henan’s poorest places, especially those with bad farmland, jumped into the blood trade with gusto. Henan officially had around 200 licensed blood and plasma collection stations; it had thousands of illegal ones. Collection stations were overwhelmed. Needles were reused time and again, as were medical tubes and bags. Sometimes, stations sped up the process by pooling blood, unknowingly re-injecting people with HIV-tainted red blood cells.

The system became a perfect delivery vehicle for HIV. Thousands upon thousands of the farmers who sold plasma to supplement meager earnings left with a viral bomb that developed into AIDS. In the years before education and life-extending antiretroviral drugs, it was a death sentence.

As Gao made her discoveries, another doctor, Wang Shuping, was finding the epidemic further south in Henan. Both tried to get provincial health officials to act, to warn people about the risk of AIDS via blood donations and transfusions, and to shut down the system. Both say their bosses and government officials told them to keep quiet.

For several years, Gao, Wang, and other doctors spoke out, but the scandal was hushed up. When people started getting sick and dying en masse, the epidemic became harder to hide.

As soon as she began making her discoveries, Gao started giving public lectures, printing AIDS education pamphlets for villagers, and speaking to the press. Still, local officials managed to keep the news contained for a few years.

By 1999, some brave Chinese investigative reporters started writing about the plasma economy and AIDS epidemic. In 2000, international media seized on the story, and Gao became a favorite media subject, seemingly unafraid, always willing to provide detailed statistics and talk about what she had found in the hidden epidemic.

Gao and the other doctors finally convinced China to ban plasma-for-cash programs and shut down unlicensed blood collection centers, but the damage was already done to thousands infected with HIV and hepatitis. (And despite the reforms, smaller illegal plasma operations still continued to pop up in rural villages.) This was not without pushback: Gao was threatened, blocked from speaking, had her own photos of AIDS victims confiscated, and believes her phone was tapped for years. Then there were the young men who followed her everywhere, forcing her to sneak out to do her work in rural areas under cover of night.

Gao continued to work to educate rural people about the disease and push for legal rights for victims. She inspired dozens of young volunteers, like the activist Hu Jia, to travel to Henan to donate money, food, and clothing over the years. But as the government tightened its controls and increased threats, volunteers stopped going. Gao, targeted more than most, kept sneaking in. She traveled undercover, visiting families and orphans and passing out her pamphlets.

Her charity embarrassed local officials who weren’t doing the job, and several became enraged. In one particular AIDS village, Gao learned the mayor had put a 500 yuan ($82) bounty on her head. Any villager who caught her in town and told police would get the huge sum. In all the years she visited, donated, and brought journalists in to investigate, Gao says, “they didn’t even try to catch me, they didn’t want to turn me in.”

Gao focused her attention, and her own family’s bank account, on the AIDS orphans, chastising the government to admit what had happened and make reparations. For that she became a target, as did those who accepted her gifts. Local officials wanted credit for helping AIDS victims, though according to her, most did very little.

“I gave them money,” she says, nodding toward a photo of a young woman. “She sold blood at age 16 and died at 22. I gave her 100 kuai ($16). If you gave them money and other things, they had to say it came from the government; they would have to thank the Communist Party.”

China has never provided a full accounting of the infection rate and death toll from the plasma disaster in Henan and surrounding provinces. Low estimates say 50,000 people contracted the virus through selling blood; many other sources put the number at 1 million or more. Another million may have contracted HIV through transfusions of the contaminated blood. Gao believes as many as 10 million people might have been infected, but she is alone in that high estimate.

China recently acknowledged AIDS is its leading cause of death among infectious diseases. In 2011, a joint U.N.–Chinese government report estimated 780,000 people in China are living with HIV, just 6.6% of them infected via the plasma trade, in Henan and three surrounding provinces. The real numbers are subject to debate and almost certainly higher, say global health experts. That figure also includes China’s original, larger AIDS epidemic that entered from Burma into Yunnan province along the drug trade route in 1989, about which the government has been much more open. There is no way to trace how many of China’s acknowledged AIDS cases are linked to the Henan plasma disaster. This is not an accident.

“You understand the situation?” Gao asks. “One thing is lying and the other is cheating. Fraud. From top to bottom, you cannot believe in government officials at any level. Cheating, lying, and fraud are what they do.”

Link: Forever Alone: Why Loneliness Matters in the Social Age

I got up and went over and looked out the window. I felt so lonesome, all of a sudden. I almost wished I was dead. Boy, did I feel rotten. I felt so damn lonesome. I just didn’t want to hang around any more. It made me too sad and lonesome.

— J.D. Salinger, The Catcher in the Rye

Loneliness was a problem I experienced most poignantly in college. In the three years I spent at Carnegie Mellon, the crippling effects of loneliness slowly pecked away at my enthusiasm for learning and for life, until I was drowning in an endless depressive haze that never completely cleared until I left Pittsburgh.

It wasn’t for lack of trying either. At the warm behest of the orientation counselors, I joined just the right number of clubs, participated in most of the dorm activities, and tried to expand my social portfolio as much as possible.

None of it worked.

To the extent that I sought out CAPS (our student psych and counseling service) for help, the platitudes they offered as advice (“Just put yourself out there!”) only served to confirm my suspicion that loneliness isn’t a very visible problem. (After all, the cure for loneliness isn’t exactly something that could be prescribed. “Have you considered transferring?” they finally suggested, after exhausting their list of thought-terminating clichés. I graduated early instead.)

As prolonged loneliness took its toll, I became very unhappy—to put it lightly—and even in retrospect I have difficulty pinpointing a specific cause. It wasn’t that I didn’t know anyone or failed to make any friends, and it wasn’t that I was alone more than I liked.

Sure, I could point my finger at the abysmally fickle weather patterns of Pittsburgh, or the pseudo-suburban bubble that envelops the campus. There might even be a correlation between my academic dissonance with computer science and my feelings of loneliness. I might also just be an extremely unlikable person.

For whatever reason (or confluence of reasons), the reality remained that I struggled with loneliness throughout my time in college.

+++

I recall a conversation with my friend Dev one particular evening on the patio of our dormitory. It was the beginning of my junior and last year at CMU, and I had just finished throwing an ice cream party for the residents I oversaw as an RA.

“Glad to be back?” he asked as he plopped down on a lawn chair beside me.

“No, not really.”

The sun was setting, and any good feelings about the upcoming semester with it. We made small talk about the school in general, as he had recently transferred, but eventually Dev asked me if I was happy there.

“No, not really.”

“Why do you think you’re so miserable here?”

“I don’t know. A lot of things, I guess. But mostly because I feel lonely. Like I don’t belong, like I can’t relate to or connect with anyone on an emotional level. I haven’t made any quality relationships here that I would look back on with any fond memories. Fuck… I don’t know what to do.”

College, at least for me, was a harrowing exercise in how helplessly debilitating, hopelessly soul-crushing, and at times life-threatening loneliness could be. It’s a problem nobody talks about, and it’s been a subject of much personal relevance and interest.

Loneliness as a Health Problem

A recent article published on Slate outlines the hidden dangers of social isolation. Chronic loneliness, as Jessica Olien discovered, poses serious health risks that affect not only mental health but physiological well-being as well.

The lack of quality social relationships in a person’s life has been linked to an increased mortality risk comparable to that of smoking and alcohol consumption, one that exceeds the influence of other risk factors like physical inactivity and obesity. It’s hard to brush off loneliness as a character flaw or an ephemeral feeling when you realize it kills more people than obesity does.

Research also shows that loneliness diminishes sleep quality and impairs physiological function, in some cases reducing immune function and boosting inflammation, which increases risk for diabetes and heart disease.

Why hasn’t loneliness gotten much attention as a medical problem? Olien shares the following observation:

As a culture we obsess over strategies to prevent obesity. We provide resources to help people quit smoking. But I have never had a doctor ask me how much meaningful social interaction I am getting. Even if a doctor did ask, it is not as though there is a prescription for meaningful social interaction.

As a society we look down on those who admit to being lonely; we brand and ostracize them with labels like “loners,” to the point that they prefer to hide behind shame and doubt rather than speak up. This dynamic only makes it harder to devise solutions to what is clearly a larger societal issue, and it certainly calls into question the effects of culture on our perception of loneliness as a problem.

Loneliness as a Culture Problem

Stephen Fry, in a blog post titled Only the Lonely, which explains his suicide attempt last year, describes in detail his struggle with depression. His account offers a rare and candid glimpse into a reality of loneliness that those afflicted often hide from the public:

Lonely? I get invitation cards through the post almost every day. I shall be in the Royal Box at Wimbledon and I have serious and generous offers from friends asking me to join them in the South of France, Italy, Sicily, South Africa, British Columbia and America this summer. I have two months to start a book before I go off to Broadway for a run of Twelfth Night there.

I can read back that last sentence and see that, bipolar or not, if I’m under treatment and not actually depressed, what the fuck right do I have to be lonely, unhappy or forlorn? I don’t have the right. But there again I don’t have the right not to have those feelings. Feelings are not something to which one does or does not have rights.

In the end loneliness is the most terrible and contradictory of my problems.

In the United States, approximately 60 million people, or 20% of the population, feel lonely. According to the General Social Survey, between 1985 and 2004 the number of people with whom the average American discusses important matters decreased from three to two, and the number with no one to discuss important matters with tripled.

Modernization has been cited as a driver of intensifying loneliness in societies around the world, attributed to greater migration, smaller household sizes, and heavier media consumption.

In Japan, loneliness is an even more pervasive, layered problem, mired in cultural parochialisms. Gideon Lewis-Kraus pens a beautiful narrative in Harper’s in which he describes his foray into the world of Japanese co-sleeping cafés:

“Why do you think he came here, to the sleeping café?”

“He wanted five-second hug maybe because he had no one to hug. Japan is haji culture. Shame. Is shame culture. Or maybe also is shyness. I don’t know why. Tokyo people … very alone. And he does not have … ” She thought for a second, shrugged, reached for her phone. “Please hold moment.”

She held it close to her face, multitouched the screen not with thumb and forefinger but with tiny forefinger and middle finger. I could hear another customer whispering in Japanese in the silk-walled cubicle at our feet. His co-sleeper laughed loudly, then laughed softly. Yukiko tapped a button and shone the phone at my face. The screen said COURAGE.

It took an enormous effort for me to come to terms with my losing battle with loneliness and the ensuing depression at CMU, and an even greater leap of faith to reach out for help. (That it was to no avail is another story altogether.) But what is even more disconcerting to me is that the general stigma against loneliness and mental health issues, hinging on an unhealthy stress culture, makes it hard for afflicted students to seek assistance at all.

As Olien puts it, “In a society that judges you based on how expansive your social networks appear, loneliness is difficult to fess up to. It feels shameful.”

To truly combat loneliness from a cultural angle, we need to start by examining our own fears about being alone and by recognizing that loneliness is often a symptom of our unfulfilled social needs as humans. Most importantly, we need to accept that it’s okay to feel lonely. Fry, signing off on his heartfelt post, offers this insight:

Loneliness is not much written about (my spell-check wanted me to say that loveliness is not much written about—how wrong that is) but humankind is a social species and maybe it’s something we should think about more than we do.

Loneliness as a Technology Problem

Technology, and by extension media consumption in the Internet age, adds the most perplexing (and perhaps the most interesting) dimension to the loneliness problem. As it turns out, technology isn’t necessarily helping us feel more connected; in some cases, it makes loneliness worse.

The amount of time you spend on Facebook, as a recent study found, is inversely related to how happy you feel throughout the day.

Take a moment to watch this video.

It’s a powerful, sobering reminder that our growing dependence on technology to communicate has serious social repercussions, and in it Cohen presents his central thesis:

We are lonely, but we’re afraid of intimacy, while the social networks offer us three gratifying fantasies: 1) That we can put our attention wherever we want it to be. 2) That we will always be heard. 3) That we will never have to be alone.

And that third idea, that we will never have to be alone, is central to changing our psyches. It’s shaping a new way of being. The best way to describe it is:

I share, therefore I am.

Public discourse on the cultural ramifications of technology is certainly not a recent development, and the general sentiment that our perverse obsession with sharing will be humanity’s downfall continues to echo in various forms around the web: articles proclaiming that Instagram is ruining people’s lives, the existence of a section on Reddit called cringepics where people congregate to ridicule things others post on the Internet, the increasing number of self-proclaimed “social media gurus” on Twitter, to name a few.

The signs seem to suggest we have reached a tipping point for “social” media that isn’t very social on a personal level, but whether that means a catastrophic implosion or a gradual return to more authentic forms of interpersonal communication remains to be seen.

While technology has been a source of social isolation for many, it also has the capacity to alleviate loneliness. A study funded by the online dating site eHarmony found that couples who met online are less likely to divorce and report greater marital satisfaction than those who met in real life.

The same model could potentially be applied to friendships, and it’s frustrating to see that there aren’t more startups leveraging this opportunity when the problem is so immediate and in need of solutions. It’s a matter of exposure and education on the truths of loneliness, and unfortunately we’re just not there yet.

+++

The perils of loneliness shouldn’t be overlooked in an increasingly hyperconnected world that often tells another story through rose-tinted lenses. Rather, the gravity of loneliness should be addressed and brought to light as a multifaceted problem, one often muted and stigmatized in our society. I learned firsthand how painfully real a problem loneliness can be, and more should be done to raise awareness of it and to help those affected.

“What do you think I should do?” I looked at Dev as the last traces of sunlight teetered over the top of Morewood Gardens. It was a rhetorical question—things weren’t about to get better.

“Find better people,” he replied.

I offered him a weak smile in return, but little did I know then how prescient those words were.

In the year that followed, I started a fraternity with some of the best kids I’d come to know (Dev included), graduated college and moved to San Francisco, made some of the best friends I’ve ever had, and never looked back, if only to remember, and remember well, that it’s never easy being lonely.

Link: Antibiotics, Capitalism and the Failure of the Market

In March 2013, England’s Chief Medical Officer, Dame Sally Davies, gave the stark warning that antimicrobial resistance poses “a catastrophic threat.” Unless we act now, she argued, “any one of us could go into hospital in 20 years for minor surgery and die because of an ordinary infection that can’t be treated by antibiotics. And routine operations like hip replacements or organ transplants could be deadly because of the risk of infection.”[1]

Over billions of years, bacteria have encountered a multitude of naturally occurring antibiotics and have consequently developed resistance mechanisms to survive. The initial emergence of resistance is random, coming about by DNA mutation or gene exchange with other bacteria. The subsequent use of antibiotics then favours the spread of those bacteria that have become resistant.
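The selection dynamic described here, random emergence followed by amplification under drug pressure, can be illustrated with a toy calculation. The sketch below is mine, not the author’s, and every rate in it (mutation rate, kill rates, the generation at which the drug is introduced) is an arbitrary assumption chosen only to show the shape of the effect:

```python
# Toy model (illustrative only, not from the article): resistance arises by
# rare mutation during reproduction and is then favoured whenever the
# population is exposed to an antibiotic. All rates are arbitrary assumptions.

MUTATION_RATE = 1e-4      # fraction of new cells that mutate to resistance
KILL_SUSCEPTIBLE = 0.95   # the drug kills most susceptible cells
KILL_RESISTANT = 0.05     # resistant cells largely survive

def generation(susceptible, resistant, antibiotic_present):
    # Reproduction: the population doubles; a small (expected) fraction of
    # the new susceptible cells acquire resistance by mutation.
    mutants = int(susceptible * MUTATION_RATE)
    susceptible = susceptible * 2 - mutants
    resistant = resistant * 2 + mutants

    # Selection: the drug removes far more susceptible cells than resistant ones.
    if antibiotic_present:
        susceptible = int(susceptible * (1 - KILL_SUSCEPTIBLE))
        resistant = int(resistant * (1 - KILL_RESISTANT))
    return susceptible, resistant

s, r = 1_000_000, 0
for gen in range(10):
    s, r = generation(s, r, antibiotic_present=(gen >= 5))  # drug introduced at generation 5
    print(f"gen {gen}: susceptible={s:,} resistant={r:,} ({r / (s + r):.3%} resistant)")
```

Run it and the resistant share stays negligible until the antibiotic appears, after which it dominates within a few generations; that is the sense in which antibiotic use “favours the spread” of resistant strains.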

More than 70% of the pathogenic bacteria that cause healthcare-acquired infections are resistant to at least one of the drugs most commonly used to treat them.[2][3] Increasing resistance in bacteria like Escherichia coli (E. coli) is a growing public health concern because of the very limited therapy options for infections caused by E. coli. This is particularly so for E. coli that is resistant to carbapenem antibiotics, the drugs of last resort.

The emergence of resistance is a complex issue involving the inappropriate use and overuse of antimicrobials in humans and animals. Antibiotics may be administered by health professionals or farmers when they are not required, or patients may take only part of a full course of treatment. This gives bacteria the opportunity to encounter these otherwise life-saving drugs at ineffective levels, survive, and mutate to produce resistant strains. Once created, resistant strains are allowed to spread by poor infection control and regional surveillance procedures.

These two problems are easily solved by educating healthcare professionals, patients and animal keepers about the importance of antibiotic treatment regimens and keeping to them. Advocating good infection control procedures in hospitals, and investing in surveillance programs that monitor patterns of resistance locally and across the country, would reduce the spread of infection. The biggest problem, however, is capitalism and the fact that there is no supply of new antimicrobials.

Between 1929 and the 1970s, pharmaceutical companies developed more than twenty new classes of antimicrobials.[4][5] Since the 1970s, only two new categories of antimicrobials have arrived.[6][7] Today the pipeline for new antibiotic classes active against highly resistant Gram-negative bacteria is dry;[8][9][10] the only novel category in early clinical development has recently been withdrawn.[9][11]

For the last seventy years the human race has kept itself ahead of resistant bacteria by going back into the laboratory and developing the next generation of antimicrobials. However, due to a failure of the market, pharmaceutical companies are no longer interested in developing antibiotics.

Despite the warnings from Dame Sally Davies, drug companies have pulled back from antimicrobial research because there is no profit to be made from it. When used appropriately, a single £100 course of antibiotics will save someone’s life. But that clinical effectiveness and short-term use has the unfortunate consequence of making antimicrobials significantly less profitable than the pharmaceuticals used in cancer therapy, which can cost £20,000 per year.

In our current system, a drug company’s return on its investment in antimicrobials depends on its volume of sales. A further problem arises when we factor in the educational programs aimed at teaching healthcare professionals and animal keepers to limit their use of antimicrobials. This, combined with their relative unprofitability, has produced a failure in the market and a paradox for capitalism.

A response commonly proposed by my fellow scientists is that our government must provide incentives for pharmaceutical companies to develop new antimicrobial drugs. Suggestions are primarily focused on reducing the financial risk for drug companies and include grants, prizes, tax breaks, creating public-private partnerships and increasing intellectual property protections. Further suggestions often relate to removing “red tape” and streamlining the drug approval and clinical trial requirements.

In September 2013 the Department of Health published its UK Five Year Antimicrobial Resistance Strategy.[12] The document called for “work to reform and harmonise regulatory regimes relating to the licencing and approval of antibiotics”, better collaboration “encouraging greater public-private investment in the discovery and development of a sustainable supply of effective new antimicrobials” and states that “Industry has a corporate and social responsibility to contribute to work to tackle antimicrobial resistance.”

I think we should have three major objections to these statements. First, the managers in the pharmaceutical industry do not have any responsibility to contribute to work to tackle antimicrobial resistance. They have a responsibility to operate within the law or be fined, and to make a profit for shareholders or be replaced. It is the state that has the responsibility for the protection and wellbeing of its citizens.

Secondly, following this year’s horsemeat scandal, we should object to companies cutting corners in an attempt to increase profits. This leads on to the final objection: that by promoting public-private collaboration, all the state is doing is subsidising shareholder profits by reducing shareholders’ financial risk.

The market has failed and novel antimicrobials will require investment not based on a financial return from the volume of antibiotics sold but on the benefit for society of being free from disease.

John Maynard Keynes, in his 1924 Sydney Ball Foundation Lecture at Cambridge, said “the important thing for government is not to do things which individuals are doing already, and to do them a little better or a little worse; but to do those things which at present are not done at all”.[13] Mariana Mazzucato, in her 2013 book The Entrepreneurial State, discusses how the state can lead innovation and criticises the risk and reward relationships in current public-private partnerships.[14] Mazzucato argues that the state can be entrepreneurial and inventive and that we need to reinvent the state and government.

This praise for the potential of the state seems to be shared by the public. Following the announcements of energy price rises in October 2013, a YouGov poll found that people opposed the NHS being run by the private sector by 12 to 1, that 67% were in favour of Royal Mail being run in the public sector, that 66% wanted the railway companies nationalised, and that 68% were in favour of nationalised energy companies.[15]

We should support state-funded professors, post-doctoral researchers and PhD students as scientists working within the public sector. They could study the mechanisms of drug entry into bacterial cells or screen natural antibiotic compounds. This could not be done on a shoestring budget, and it would no doubt take years to build the infrastructure, but we could, for example, make the case for where the research takes place.

Andrew Witty’s recent review of higher education and regional growth asked universities to become more involved in their local economies.[16] The state could choose to build laboratories in geographical areas neglected by private sector investment and help promote regional recovery. Even more radically, if novel antibiotics are produced for their social good rather than financial gain, they can be reserved indefinitely until a time of crisis.

With regard to democracy, patients and the general public could have a greater say in what is researched, helping to shift us away from our reliance on the market to provide what society needs. The market responds not to what society needs but to what will create the most profit. This is a recurring theme throughout science. I cannot begin to tell you how frequently I listen to case studies about parasites that affect only people in the developing world. Again, the people of the developing world have very little money, so drug companies neglect to develop drugs for them, as there is no source of profit. We should make the case for innovation driven not by greed but by the service of society and even our species.

Before Friedrich Hayek, John Desmond Bernal, in his 1939 book The Social Function of Science, argued for more spending on innovation because science was not merely an abstract intellectual enquiry but of real practical value.[17] Bernal placed science and technology among the driving forces of history. Why should we not follow that path?

Link: The War on Drugs Is Over. Drugs Won.

The world’s most extensive study of the drug trade has just been published in the medical journal BMJ Open, providing the first “global snapshot” of four decades of the war on drugs. You can already guess the result. The war on drugs could not have been a bigger failure. To sum up the most important findings: between 1990 and 2007 the average purity of heroin increased 60 percent and that of cocaine 11 percent. Cannabis purity is up a whopping 161 percent over that same period. Not only are drugs way purer than ever, they’re also way, way cheaper. Coke is on an 80 percent discount from 1990, heroin 81 percent, cannabis 86 percent. After a trillion dollars spent on the drug war, now is the greatest time in history to get high.

The new study only confirms what has been well-established for a decade at least, that trying to attack the drug supply is more or less pointless. The real question is demand, trying to mitigate its disastrous social consequences and treating the desire for drugs as a medical condition rather than as a moral failure. 

But there’s another question about demand that the research from BMJ Open poses. Why is there so much of it? No drug dealer ever worries about demand. Ever. The hunger for illegal drugs in America is assumed to be limitless. Why? One answer is that drugs feed a human despair that is equally limitless. And there is plenty of despair, no doubt. But the question becomes more complicated when you consider how many people are drugging themselves legally. In 2010 the CDC found that 48 percent of Americans used prescription drugs, 31 percent were taking two or more, and 11 percent were taking five or more. Two of the most common prescription drugs were stimulants, for adolescents, and anti-depressants, for middle-aged Americans.

The alteration of consciousness, both legal and illegal, is at an all-time high. And it is quickly accelerating. One of the more interesting books published in the past year is Daniel Lieberman’s The Story of the Human Body: Evolution, Health, and Disease. It is a fascinating study, by the chair of Harvard’s department of human evolutionary biology, of how our Paleolithic natures, set in a hypermodern reality, are failing to adjust. His conclusions on the future of the species are somewhat dark:

"We didn’t evolve to be healthy, but instead we were selected to have as many offspring as possible under diverse, challenging conditions. As a consequence, we never evolved to make rational choices about what to eat or how to exercise in conditions of abundance and comfort. What’s more, interactions between the bodies we inherited, the environments we create, and the decisions we sometimes make have set in motion an insidious feedback loop. We get sick from chronic diseases by doing what we evolved to do but under conditions for which our bodies are poorly adapted, and we then pass on those same conditions to our children, who also then get sick."

Our psychological reality is equally unadjusted to the world we live in. Levels of cortisol — the stress hormone — evolved to rise during moments of crisis, like when a lion attacks. If you live in a city, your cortisol levels are constantly elevated. You’re always being chased. We are not built for that reality.

Lieberman’s solution is that we “respectfully and sensibly nudge, push, and sometimes oblige ourselves” to make healthier decisions, to live more in keeping with our biology and to adapt to the modern world with sensible, rational limits. But the mass demand for drugs — the boundless need to opiate and numb ourselves — shows that the simpler solution remains, and will no doubt remain, much more popular. Just take something.

Link: How I'm Going to Commit Suicide

A shockingly honest (and beautifully elegant) confession by Britain’s most celebrated art critic, Brian Sewell.

Every night I swallow a handful of pills. In the morning and during the day I swallow others, haphazardly, for I am not always in the right place at the right time, but at night there is a ritual.

I undress. I clean my teeth. I wipe the mirror clear of splashes and see with some distaste the reflection of my decaying body, wondering that it ever had the impertinence to indulge in the pleasures of the flesh.

And then I take the pills. Some are for a heart that too often makes me feel that I have a misfiring single-cylinder diesel engine in my rib-cage.

Others are for the ordinary afflictions of age and still others ease the aches of old bones that creak and crunch. All in their way are poisons – that they do no harm is only a matter of dosage.

I intend, one day, to take an overdose. Not yet, for the experts at that friendly and understanding hospital, the Brompton in Kensington, manage my heart condition very well.

But the bone-rot will reach a point – not beyond endurance but beyond my willingness to endure it – when drugs prescribed to numb the pain so affect the functions of my brain that all the pleasures of music, art and books are dulled, and I merely exist.

An old buffer in a chair, sleeping and waking, sleeping and waking.

The thought of suicide is a great comfort, for it is what I shall employ if mere existence is ever all that I have. The difficulty will be that I must have the wit to identify the time, the weeks, the days, even the critical moment (for it will not be long) between my recognising the need to end my life and the loss of my physical ability to carry out the plan.

There is a plan. I know exactly what I want to do and where I want to do it – not at home, not in my own bed. I shall write a note addressed ‘To whom it may concern’ explaining that I am committing suicide, that I am in sound mind, that no one else has been involved and, if I am discovered before my heart has stopped, I do not want to be resuscitated.

With this note in my pocket, I shall leave the house and totter off to a bench – foolishly installed by the local authority on a road so heavy with traffic that no one ever sits there – make myself comfortable and down as many pills as I can with a bottle of Bombay Gin, the only spirit that I like, to send them on their way.

With luck, no one will notice me for hours – and if they do, will think me an old drunk. Some unfortunate athlete will find me, stiff with rigor, on his morning jog.

I have left my cadaver to a teaching hospital for the use and abuse of medical students – and my sole misgiving is that, having filled it with poisons, I may have rendered it useless.

There are those who damn the suicide for invading the prerogative of the Almighty. Many years, however, have passed since I abandoned the beliefs, observances and irrational prejudices of Christianity, and I have no moral or religious inhibitions against suicide.

I cherish the notion of dying easily and with my wits about me. I am 82 tomorrow and do not want to die a dribbling dotard waiting for the Queen’s congratulatory greeting in 2031.

Nor do I wish to cling to an increasingly wretched life made unconscionable misery by acute or chronic pain and the humiliations of nursing.

What virtue can there be in suffering, in impotent wretchedness, in the bedpans and pisspots, the feeding with a spoon, the baby talk, the dwindling mind and the senses slipping in and out of consciousness?

For those so affected, dying is a prolonged and degrading misadventure. ‘We can ease the pain,’ says another of this interregnum between life and death. But what of those who want to hurry on?

Then the theologian argues that a man must not play God and determine his own end and prates of the purification of the soul through suffering and pain.

But what if the dying man is atheist or agnostic or has lost his faith – must he suffer life longer because of the prejudice of a Christian theologian? And has it occurred to no theologian that God himself might inspire the thought of suicide – or is that too great a heresy?

Link: The Obesity Era

As the American people got fatter, so did marmosets, vervet monkeys and mice. The problem may be bigger than any of us. 

Years ago, after a plane trip spent reading Fyodor Dostoyevsky’s Notes from the Underground and Weight Watchers magazine, Woody Allen melded the two experiences into a single essay. ‘I am fat,’ it began. ‘I am disgustingly fat. I am the fattest human I know. I have nothing but excess poundage all over my body. My fingers are fat. My wrists are fat. My eyes are fat. (Can you imagine fat eyes?).’ It was 1968, when most of the world’s people were more or less ‘height-weight proportional’ and millions of the rest were starving. Weight Watchers was a new organisation for an exotic new problem. The notion that being fat could spur Russian-novel anguish was good for a laugh.

That, as we used to say during my Californian adolescence, was then. Now, 1968’s joke has become 2013’s truism. For the first time in human history, overweight people outnumber the underfed, and obesity is widespread in wealthy and poor nations alike. The diseases that obesity makes more likely — diabetes, heart ailments, strokes, kidney failure — are rising fast across the world, and the World Health Organisation predicts that they will be the leading causes of death in all countries, even the poorest, within a couple of years. What’s more, the long-term illnesses of the overweight are far more expensive to treat than the infections and accidents for which modern health systems were designed. Obesity threatens individuals with long twilight years of sickness, and health-care systems with bankruptcy.

And so the authorities tell us, ever more loudly, that we are fat — disgustingly, world-threateningly fat. We must take ourselves in hand and address our weakness. After all, it’s obvious who is to blame for this frightening global blanket of lipids: it’s us, choosing over and over again, billions of times a day, to eat too much and exercise too little. What else could it be? If you’re overweight, it must be because you are not saying no to sweets and fast food and fried potatoes. It’s because you take elevators and cars and golf carts where your forebears nobly strained their thighs and calves. How could you do this to yourself, and to society?

Moral panic about the depravity of the heavy has seeped into many aspects of life, confusing even the erudite. Earlier this month, for example, the American evolutionary psychologist Geoffrey Miller expressed the zeitgeist in this tweet: ‘Dear obese PhD applicants: if you don’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation. #truth.’ Businesses are moving to profit on the supposed weaknesses of their customers. Meanwhile, governments no longer presume that their citizens know what they are doing when they take up a menu or a shopping cart. Yesterday’s fringe notions are becoming today’s rules for living — such as New York City’s recent attempt to ban large-size cups for sugary soft drinks, or Denmark’s short-lived tax surcharge on foods that contain more than 2.3 per cent saturated fat, or Samoa Air’s 2013 ticket policy, in which a passenger’s fare is based on his weight because: ‘You are the master of your air ‘fair’, you decide how much (or how little) your ticket will cost.’

Several governments now sponsor jauntily named pro-exercise programmes such as Let’s Move! (US), Change4Life (UK) and actionsanté (Switzerland). Less chummy approaches are spreading, too. Since 2008, Japanese law has required companies to measure and report the waist circumference of all employees between the ages of 40 and 74 so that, among other things, anyone over the recommended girth can receive an email of admonition and advice.

Hand-in-glove with the authorities that promote self-scrutiny are the businesses that sell it, in the form of weight-loss foods, medicines, services, surgeries and new technologies. A Hong Kong company named Hapilabs offers an electronic fork that tracks how many bites you take per minute in order to prevent hasty eating: shovel food in too fast and it vibrates to alert you. A report by the consulting firm McKinsey & Co predicted in May 2012 that ‘health and wellness’ would soon become a trillion-dollar global industry. ‘Obesity is expensive in terms of health-care costs,’ it said before adding, with a consultantly chuckle, ‘dealing with it is also a big, fat market.’

And so we appear to have a public consensus that excess body weight (defined as a Body Mass Index of 25 or above) and obesity (BMI of 30 or above) are consequences of individual choice. It is undoubtedly true that societies are spending vast amounts of time and money on this idea. It is also true that the masters of the universe in business and government seem attracted to it, perhaps because stern self-discipline is how many of them attained their status. What we don’t know is whether the theory is actually correct.
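For readers who want those thresholds made concrete: BMI is simply weight in kilograms divided by the square of height in metres. A minimal sketch of that arithmetic, using the cutoffs quoted above (the example weight and height are arbitrary):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)

def category(bmi_value: float) -> str:
    # Cutoffs as quoted in the text: 25 or above is "excess body weight",
    # 30 or above is "obese".
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight by this definition"

# Example with arbitrary figures: 85 kg at 1.75 m.
value = bmi(85, 1.75)
print(f"BMI = {value:.1f} -> {category(value)}")  # BMI = 27.8 -> overweight
```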

Of course, that’s not the impression you will get from the admonishments of public-health agencies and wellness businesses. They are quick to assure us that ‘science says’ obesity is caused by individual choices about food and exercise. As the Mayor of New York, Michael Bloomberg, recently put it, defending his proposed ban on large cups for sugary drinks: ‘If you want to lose weight, don’t eat. This is not medicine, it’s thermodynamics. If you take in more than you use, you store it.’ (Got that? It’s not complicated medicine, it’s simple physics, the most sciencey science of all.)

Yet the scientists who study the biochemistry of fat and the epidemiologists who track weight trends are not nearly as unanimous as Bloomberg makes out. In fact, many researchers believe that personal gluttony and laziness cannot be the entire explanation for humanity’s global weight gain. Which means, of course, that they think at least some of the official focus on personal conduct is a waste of time and money. As Richard L Atkinson, Emeritus Professor of Medicine and Nutritional Sciences at the University of Wisconsin and editor of the International Journal of Obesity, put it in 2005: ‘The previous belief of many lay people and health professionals that obesity is simply the result of a lack of willpower and an inability to discipline eating habits is no longer defensible.’

Link: The Lethality of Loneliness

For the first time in history, we understand how isolation can ravage the body and brain. Now, what should we do about it?

Sometime in the late ’50s, Frieda Fromm-Reichmann sat down to write an essay about a subject that had been mostly overlooked by other psychoanalysts up to that point. Even Freud had only touched on it in passing. She was not sure, she wrote, “what inner forces” made her struggle with the problem of loneliness, though she had a notion. It might have been the young female catatonic patient who began to communicate only when Fromm-Reichmann asked her how lonely she was. “She raised her hand with her thumb lifted, the other four fingers bent toward her palm,” Fromm-Reichmann wrote. The thumb stood alone, “isolated from the four hidden fingers.” Fromm-Reichmann responded gently, “That lonely?” And at that, the woman’s “facial expression loosened up as though in great relief and gratitude, and her fingers opened.”

Fromm-Reichmann would later become world-famous as the dumpy little therapist mistaken for a housekeeper by a new patient, a severely disturbed schizophrenic girl named Joanne Greenberg. Fromm-Reichmann cured Greenberg, who had been deemed incurable. Greenberg left the hospital, went to college, became a writer, and immortalized her beloved analyst as “Dr. Fried” in the best-selling autobiographical novel I Never Promised You a Rose Garden (later also a movie and a pop song). Among analysts, Fromm-Reichmann, who had come to the United States from Germany to escape Hitler, was known for insisting that no patient was too sick to be healed through trust and intimacy. She figured that loneliness lay at the heart of nearly all mental illness and that the lonely person was just about the most terrifying spectacle in the world. She once chastised her fellow therapists for withdrawing from emotionally unreachable patients rather than risk being contaminated by them. The uncanny specter of loneliness “touches on our own possibility of loneliness,” she said. “We evade it and feel guilty.”

Her 1959 essay, “On Loneliness,” is considered a founding document in a fast-growing area of scientific research you might call loneliness studies. Over the past half-century, academic psychologists have largely abandoned psychoanalysis and made themselves over as biologists. And as they delve deeper into the workings of cells and nerves, they are confirming that loneliness is as monstrous as Fromm-Reichmann said it was. It has now been linked with a wide array of bodily ailments as well as the old mental ones.

In a way, these discoveries are as consequential as the germ theory of disease. Just as we once knew that infectious diseases killed, but didn’t know that germs spread them, we’ve known intuitively that loneliness hastens death, but haven’t been able to explain how. Psychobiologists can now show that loneliness sends misleading hormonal signals, rejiggers the molecules on genes that govern behavior, and wrenches a slew of other systems out of whack. They have proved that long-lasting loneliness not only makes you sick; it can kill you. Emotional isolation is ranked as high a risk factor for mortality as smoking. A partial list of the physical diseases thought to be caused or exacerbated by loneliness would include Alzheimer’s, obesity, diabetes, high blood pressure, heart disease, neurodegenerative diseases, and even cancer—tumors can metastasize faster in lonely people.

The psychological definition of loneliness hasn’t changed much since Fromm-Reichmann laid it out. “Real loneliness,” as she called it, is not what the philosopher Søren Kierkegaard characterized as the “shut-upness” and solitariness of the civilized. Nor is “real loneliness” the happy solitude of the productive artist or the passing irritation of being cooped up with the flu while all your friends go off on some adventure. It’s not being dissatisfied with your companion of the moment—your friend or lover or even spouse—unless you chronically find yourself in that situation, in which case you may in fact be a lonely person. Fromm-Reichmann even distinguished “real loneliness” from mourning, since the well-adjusted eventually get over that, and from depression, which may be a symptom of loneliness but is rarely the cause. Loneliness, she said—and this will surprise no one—is the want of intimacy.

Today’s psychologists accept Fromm-Reichmann’s inventory of all the things that loneliness isn’t and add a wrinkle she would surely have approved of. They insist that loneliness must be seen as an interior, subjective experience, not an external, objective condition. Loneliness “is not synonymous with being alone, nor does being with others guarantee protection from feelings of loneliness,” writes John Cacioppo, the leading psychologist on the subject. Cacioppo privileges the emotion over the social fact because—remarkably—he’s sure that it’s the feeling that wreaks havoc on the body and brain. Not everyone agrees with him, of course. Another school of thought insists that loneliness is a failure of social networks. The lonely get sicker than the non-lonely, because they don’t have people to take care of them; they don’t have social support.

To the degree that loneliness has been treated as a matter of public concern in the past, it has generally been seen as a social problem—the product of an excessively conformist culture or of a breakdown in social norms. Nowadays, though, loneliness is a public health crisis. The standard U.S. questionnaire, the UCLA Loneliness Scale, asks 20 questions that run variations on the theme of closeness—“How often do you feel close to people?” and so on. As many as 30 percent of Americans don’t feel close to people at a given time.

Loneliness varies with age and poses a particular threat to the very old, quickening the rate at which their faculties decline and cutting their lives shorter. But even among the not-so-old, loneliness is pervasive. In a survey published by the AARP in 2010, slightly more than one out of three adults 45 and over reported being chronically lonely (meaning they’ve been lonely for a long time). A decade earlier, only one out of five said that. With baby-boomers reaching retirement age at a rate of 10,000 a day, the number of lonely Americans will surely spike.

Obviously, the sicker lonely people get, the more care they’ll need. This is true, and alarming, although as we learn more about loneliness, we’ll also be better able to treat it. But to me, what’s most momentous about the new biology of loneliness is that it offers concrete proof, obtained through the best empirical means, that the poets and bluesmen and movie directors who for centuries have deplored the ravages of lonesomeness on both body and soul were right all along. As W. H. Auden put it, “We must love one another or die.”

Link: Caring on Stolen Time: A Nursing Home Diary

I work in a place of death. People come here to die, and my co-workers and I care for them as they make their journeys. Sometimes these transitions take years or months. Other times, they take weeks or some short days. I count the time in shifts, in scheduled state visits, in the sham monthly meetings I never attend, in the announcements of the “Employee of the Month” (code word for best ass-kisser of the month), in the yearly pay increment of 20 cents per hour, and in the number of times I get called into the Human Resources office.

The nursing home residents also have their own rhythms. Their time is tracked by scheduled hospital visits; by the times when loved ones drop by to share a meal, to announce the arrival of a new grandchild, or to wait anxiously at their bedsides for heart-wrenching moments to pass. Their time is measured by transitions from processed food to pureed food, textures that match their increasing susceptibility to dysphagia. Their transitions are also measured by the changes from underwear to pull-ups and then to diapers. Even more than the loss of mobility, the use of diapers is often the most dreaded adaptation. For many people, lack of control over urinary functions and timing is the definitive mark of the loss of independence.

Many of the elderly I have worked with are, at least initially, aware of the transitions and respond with a myriad of emotions from shame and anger to depression, anxiety, and fear. Theirs was the generation that survived the Great Depression and fought the last “good war.” Aging was an anti-climactic twist to the purported grandeur and tumultuousness of their mid-twentieth-century youth.

“I am afraid to die. I don’t know where I will go,” a resident named Lara says to me, fear dilating her eyes.

“Lara, you will go to heaven. You will be happy,” I reply, holding the spoonful of pureed spinach to her lips. “Tell me about your son, Tobias.”

And so Lara begins, the same story of Tobias, of his obedience and intelligence, which I have heard over and over again for the past year. The son whom she loves, whose teenage portrait stands by her bedside. The son who has never visited, but whose name and memory calm Lara.

Lara is always on the lookout, especially for Alba and Mary, the two women with severe dementia who sit on both sides of her in the dining room. To find out if Alba is enjoying her meal, she will look to my co-worker Saskia to ask, “Is she eating? If she doesn’t want to, don’t force her to eat. She will eat when she is hungry.” Alba, always cheerful, smiles. Does she understand? Or is she in her usual upbeat mood? “Lara, Alba’s fine. With you watching out for her, of course she’s OK!” We giggle. These are small moments to be cherished.

In the nursing home, such moments are precious because they are accidental moments.

The residents run on stolen time. Alind, a certified nursing assistant (CNA) like me, comments, “Some of these residents are already dead before they come here.”

By “dead,” he is not referring to the degenerative effects of dementia and Alzheimer’s disease but to the sense of hopelessness and loneliness that many of the residents feel, not just because of physical pain, not just because of old age, but as a result of the isolation, the abandonment by loved ones, the anger of being caged within the walls of this institution. This banishment is hardly the ending they toiled for during their industrious youth.

By death, Alind was also referring to the many times “I’m sorry” is uttered in embarrassment, and to the tearful shrieks of shame that sometimes follow when residents soil their clothes. This is the dying to which we, nursing home workers, bear witness every day; the death that the home is expected, somehow, to reverse.

So management tries, through bowling, through bingo and checkers, through Frank Sinatra sing-a-longs, to resurrect what has been lost to time, migration, the exigencies of the market, and the capriciousness of life. They substitute hot tea and cookies with strangers for the warmth of family and friends. Loved ones occupied by the same patterns of migration, work, ambition, ease their worries and guilt with pictures and reports of their relatives in these settings. We, the CNAs, shuffle in and out of these staged moments, to carry the residents off for toileting. The music playing in the building’s only bright and airy room is not for us, the immigrants, the lower hands, to plan for or share with the residents. Ours is a labor confined to the bathroom, to the involuntary, lower functions of the body. Instead of people of color in uniformed scrubs, white women with pretty clothes are paid more to care for the leisure-time activities of the old white people. The monotony and stress of our tasks are ours to bear alone.

The nursing home bosses freeze the occasional, carefully selected, picture-perfect moments on the front pages of their brochures, exclaiming that their facility, one of a group of Catholic homes, is, indeed, a place where “life is appreciated,” where “we care for the dignity of the human person.” In reality, they have not tried to make that possible. Under poor conditions, we have improvised for genuine human connection to exist. How we do that the bosses do not understand.

Link: Hands Off

Why are a bunch of men quitting masturbation? So they can be better men.

Traditionally, people undergo a bit of self-examination when faced with a potentially fatal rupture in their long-term relationship. Thirty-two-year-old Henry* admits that what he did was a little more extreme. “If you’d told me that I wasn’t going to masturbate for 54 days, I would have told you to fuck off,” he says.

Masturbation had been part of Henry’s daily routine since childhood. Although he remembered a scandalized babysitter who “found me trying to have sex with a chair” at age 5, Henry says he never felt shame about his habit. While he was of the opinion that a man who has a committed sexual relationship with porn was probably not going to have as successful a relationship with a woman, he had no qualms about watching it. Which he did most days.

Then, early last year and shortly before his girlfriend of two years moved to Los Angeles, Henry happened to watch a TED talk by the psychologist Philip Zimbardo called “The Demise of Guys.” It described males who “prefer the asynchronistic Internet world to the spontaneous interactions in social relationships” and therefore fail to succeed in school, work, and with women. When his girlfriend left, Henry went on to watch a TEDX talk by Gary Wilson, an anatomist and physiologist, whose lecture series, “Your Brain on Porn,” claims, among other things, that porn conditions men to want constant variety—an endless set of images and fantasies—and requires them to experience increasingly heightened stimuli to feel aroused. A related link led Henry to a community of people engaged in attempts to quit masturbation on the social news site Reddit. After reading the enthusiastic posts claiming improved virility, Henry began frequenting the site.

“The main thing was seeing people who said, ‘I feel awesome,’ ” he says. Henry did not feel awesome. He felt burned out from work and physically exhausted, and his girlfriend had just moved across the country. He had a few sexual concerns, too, though nothing serious, he insists. In his twenties, he sometimes had difficulty ejaculating during one-night stands if he had been drinking. On two separate occasions, he had not been able to get an erection. He wasn’t sure that forswearing masturbation would solve any of this, but stopping for a while seemed like “a not-difficult experiment”—far easier than giving up other things people try to quit, like caffeine or alcohol.

He also felt some responsibility for what had happened to his relationship. “When a guy feels like he’s failed with respect to a woman, that’s one of the things that causes you to examine yourself.” If he had been a better boyfriend or even a better man, he thought, perhaps his girlfriend wouldn’t have left New York.

So a month after his girlfriend moved away, and a few weeks before taking a trip to visit her, Henry went to the gym a lot. He had meditated for years, but he began to do so with more discipline and intention. He researched strategies to relieve insomnia, to avoid procrastination, and to be more conscious of his daily habits. These changes were not only for his girlfriend. “It was about cultivating a masculine energy that I wanted to apply in other parts of my life and with her,” he says.

And to help cultivate that masculine energy, he decided to quit masturbating. He erased a corner of the white board in his home office and started a tally of days, always using Roman numerals. “That way,” he says, “it would mean more.”

For those who seek fulfillment in the renunciation of benign habits, masturbation isn’t usually high on the list. It’s variously a privilege, a right, an act of political assertion, or one of the purest and most inconsequential pleasures that exist. Doctors assert that it’s healthy. Therapists recommend it. (Henry once talked to his therapist after a bad sexual encounter; she told him to masturbate. “Love yourself,” she said.)

And despite a century passing since Freud declared autoeroticism a healthy phase of childhood sexual development and Egon Schiele drew pictures of people touching themselves, masturbation has become the latest frontier in the school of self-improvement. Today’s anti-masturbation advocates deviate from anti-onanists past—that superannuated medley of Catholic ascetics, boxers, Jean-Jacques Rousseau, and Norman Mailer. Instead, the members of the current generation tend to be young, self-aware, and secular. They bolster their convictions online by quoting studies indicating that ejaculation leads to decreased testosterone and vitamin levels (a drop in zinc, specifically). They cull evidence implying that excessive porn-viewing can reduce the number of dopamine receptors. Even the occasional woman can be found quitting (although some women partake of a culture of encouragement around masturbation, everything from a direct-sales sex-toy party at a friend’s house to classes with sex educator Betty Dodson, author of Sex for One).

Link: Why an MRI costs $1,080 in the US & $280 in France

There is a simple reason health care in the United States costs more than it does anywhere else: The prices are higher.

That may sound obvious. But it is, in fact, key to understanding one of the most pressing problems facing our economy. In 2009, Americans spent $7,960 per person on health care. Our neighbors in Canada spent $4,808. The Germans spent $4,218. The French, $3,978. If we had the per-person costs of any of those countries, America’s deficits would vanish. Workers would have much more money in their pockets. Our economy would grow more quickly, as our exports would be more competitive.
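To get a sense of the scale implied by those per-person figures, it helps to multiply the gap out to a national total. The rough sketch below uses the 2009 numbers quoted above; the US population figure is my own assumption for illustration, not a number from the article:

```python
# Back-of-the-envelope scale of the spending gap, using the 2009 per-person
# figures quoted in the text. The 2009 US population (~307 million) is an
# assumption added here for illustration only.

US_POPULATION_2009 = 307_000_000  # assumed, not from the article

per_person = {          # 2009 health spending per person, from the text
    "United States": 7960,
    "Canada": 4808,
    "Germany": 4218,
    "France": 3978,
}

us_cost = per_person["United States"]
for country, cost in per_person.items():
    if country == "United States":
        continue
    savings = (us_cost - cost) * US_POPULATION_2009
    print(f"At {country}'s per-person cost, the US would spend roughly "
          f"${savings / 1e12:.2f} trillion less per year.")
```

Matching Canada’s per-person spending alone would imply savings on the order of a trillion dollars a year, which gives a sense of why the author says the deficits “would vanish.”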

There are many possible explanations for why Americans pay so much more. It could be that we’re sicker. Or that we go to the doctor more frequently. But health researchers have largely discarded these theories. As Gerard Anderson, Uwe Reinhardt, Peter Hussey and Varduhi Petrosyan put it in the title of their influential 2003 study on international health-care costs, “it’s the prices, stupid.”

As it’s difficult to get good data on prices, that paper pinned the blame on prices largely by eliminating the other possible culprits. The authors considered, for instance, the idea that Americans were simply using more health-care services, but on close inspection found that Americans don’t see the doctor more often or stay longer in the hospital than residents of other countries. Quite the opposite, actually. We spend less time in the hospital than Germans and see the doctor less often than the Canadians.

“The United States spends more on health care than any of the other OECD countries spend, without providing more services than the other countries do,” they concluded. “This suggests that the difference in spending is mostly attributable to higher prices of goods and services.”

On Friday, the International Federation of Health Plans — a global insurance trade association that includes more than 100 insurers in 25 countries — released more direct evidence. It surveyed its members on the prices paid for 23 medical services and products in different countries, asking after everything from a routine doctor’s visit to a dose of Lipitor to coronary bypass surgery. And in 22 of 23 cases, Americans are paying higher prices than residents of other developed countries. Usually, we’re paying quite a bit more. The exception is cataract surgery, which appears to be costlier in Switzerland, though cheaper everywhere else.

Prices don’t explain all of the difference between America and other countries. But they do explain a big chunk of it. The question, of course, is why Americans pay such high prices — and why we haven’t done anything about it.

“Other countries negotiate very aggressively with the providers and set rates that are much lower than we do,” Anderson says. They do this in one of two ways. In countries such as Canada and Britain, prices are set by the government. In others, such as Germany and Japan, they’re set by providers and insurers sitting in a room and coming to an agreement, with the government stepping in to set prices if they fail.

Health care is an unusual product in that it is difficult, and sometimes impossible, for the customer to say “no.” In certain cases, the customer is passed out, or otherwise incapable of making decisions about her care, and the decisions are made by providers whose mandate is, correctly, to save lives rather than money.

In other cases, there is more time for loved ones to consider costs, but little emotional space to do so — no one wants to think there was something more they could have done to save their parent or child. It is not like buying a television, where you can easily comparison shop and walk out of the store, and even forgo the purchase if it’s too expensive. And imagine what you would pay for a television if the salesmen at Best Buy knew that you couldn’t leave without making a purchase.

In America, Medicare and Medicaid negotiate prices on behalf of their tens of millions of members and, not coincidentally, purchase care at a substantial markdown from the commercial average. But outside that, it’s a free-for-all. Providers largely charge what they can get away with, often offering different prices to different insurers, and an even higher price to the uninsured.

“In my view, health is a business in the United States in quite a different way than it is elsewhere,” says Tom Sackville, who served in Margaret Thatcher’s government and now directs the IFHP. “It’s very much something people make money out of. There isn’t too much embarrassment about that compared to Europe and elsewhere.”

The result is that, unlike in other countries, sellers of health-care services in America have considerable power to set prices, and so they set them quite high. Two of the five most profitable industries in the United States — the pharmaceuticals industry and the medical device industry — sell health care. With margins of almost 20 percent, they beat out even the financial sector for sheer profitability.

Link: The Extraordinary Science of Addictive Junk Food

The public and the food companies have known for decades now — or at the very least since this meeting — that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.