Sunshine Recorder

Link: The Cultural History of Pain

Speculation about the degree to which human beings and animals experienced pain has a long history.

On 16 April 1872, a woman signing herself “An Earnest Englishwoman” published a letter in the Times. It was entitled “Are Women Animals?”.

She was clearly very angry. Her fury had been fuelled by recent court cases in which a man who had “coolly knocked out” the eye of his mistress and another man who had killed his wife were imprisoned for just a few months each. In contrast, a man who had stolen a watch was punished severely, sentenced to not only seven years’ penal servitude, but also 40 lashes of the “cat”. She noted that although some people might believe that a watch was an “object of greater value than the eye of a mistress or the life of a wife”, she was asking readers to remember that “the inanimate watch does not suffer”. It must cause acute agony for any “living creature, endowed with nerves and muscles, to be blinded or crushed to death”.

Indeed, she continued, she had “read of heavier sentences being inflicted for cruelty towards that – may I venture to say? – lower creation”. She pleaded for women to be subsumed under legislation forbidding cruelty to animals, because that would improve their position in law.

Speculation about the degree to which human beings and animals experienced pain has a long history, but “An Earnest Englishwoman” was writing at a very important time in these debates. Charles Darwin’s Descent of Man had been published the year before her letter, and his Expression of the Emotions in Man and Animals appeared in 1872. Both Darwin and “An Earnest Englishwoman” were addressing a central question that had intrigued theologians, scientists, philosophers, psychologists and other social commentators for centuries: how can we know how other people feel?

The reason this question was so important was that many people didn’t believe that all human beings (let alone non-human animals) were equally capable of suffering. Scientists and philosophers pointed to the existence of a hierarchy of sentience. Belief in a great “Chain of Being”, according to which everything in the universe was ranked from the highest to the lowest, is a fundamental tenet of western philosophy. One aspect of this Chain of Being involved the perception of sensation. There was a parallel great Chain of Feeling, which placed male Europeans at one end and slaves and animals at the other.

Of course, “An Earnest Englishwoman” was using satire to argue for greater rights for women. She was not accusing men of failing to acknowledge that women were capable of experiencing pain. Indeed, that much-maligned group of Victorian women – hysterics – was believed to be exquisitely sensitive to noxious stimuli. Rather, she was drawing attention to the way a lack of respect for the suffering of some people had a profound impact on their status in society. If the suffering of women were treated as seriously as the suffering of animals, she insisted, women’s lives would be better.

Although she does not discuss it in her short letter, the relationship between social status and perceptions of sentience was much more fraught for other groups within British and American societies. In particular, people who had been placed at the “lower” end of the Chain of Feeling paid an extremely high price for prejudices about their “inability” to feel. In many white middle-class and upper-class circles, slaves and “savages”, for instance, were routinely depicted as possessing a limited capacity to experience pain, a biological “fact” that conveniently diminished any culpability among their so-called superiors for acts of abuse inflicted on them. Although the author of Practical Rules for the Management and Medical Treatment of Negro Slaves, in the Sugar Colonies (1811) conceded that “the knife of the anatomist … has never been able to detect” anatomical differences between slaves and their white masters, he nevertheless contended that slaves were better “able to endure, with few expressions of pain, the accidents of nature”. This was providential indeed, given that they were subjected to so many “accidents of nature” while labouring on sugar-cane plantations.

Such beliefs were an important factor in imperial conquests. With voyeuristic curiosity, travellers and explorers often commented on what they regarded as exotic responses to pain by indigenous peoples. In Australia, newly arrived colonisers breathlessly maintained that Native Australians’ “endurance of pain” was “something marvellous”. Others used the theme as an excuse for mockery. For instance, the ability of New Zealand Maoris to bear pain was ascribed to their “vanity”. They were said to be so enamoured with European shoes that “when one of them was happy enough to become the possessor of a pair, and found that they were too small, he would not hesitate to chop off a toe or two, stanch the bleeding by covering the stump with a little hemp, and then force the feet [sic] into the boots”.

But what was it about the non-European body that allegedly rendered it less susceptible to painful stimuli? Racial sciences placed great emphasis on the development and complexity of the brain and nerves. As the author of Pain and Sympathy (1907) concluded, attempting to explain why the “savage” could “bear physical torture without shrinking”: the “higher the life, the keener is the sense of pain”.

There was also speculation that the civilising process itself had rendered European peoples more sensitive to pain. The celebrated American neurologist Silas Weir Mitchell stated in 1892 that in the “process of being civilised we have won … intensified capacity to suffer”. After all, “the savage does not feel pain as we do: nor as we examine the descending scale of life do animals seem to have the acuteness of pain-sense at which we have arrived”.

Some speculated whether the availability of anaesthetics and analgesics had an effect on people’s ability (as well as willingness) to cope with acute affliction. Writing in the 1930s, the distinguished pain surgeon René Leriche argued fervently that Europeans had become more sensitive to pain. Unlike earlier in the century, he claimed, modern patients “would not have allowed us to cut even a centimetre … without administering an anaesthetic”. This was not due to any decline of moral fibre, Leriche added: rather, it was a sign of a “nervous system differently developed, and more sensitive”.

Other physicians and scientists of the 19th and early 20th centuries wanted to complicate the picture by making a distinction between pain perception and pain reaction. But this distinction was used to denigrate “outsider” groups even further. Their alleged insensitivity to pain was proof of their humble status – yet when they did exhibit pain reactions, their sensitivity was called “exaggerated” or “hysterical” and therefore seen as more evidence of their inferiority. Such confused judgements surfaced even in clinical literature that purported to repudiate value judgements. For instance, John Finney was the first president of the American College of Surgeons. In his influential book The Significance and Effect of Pain (1914), he amiably claimed:

It does not always follow that because a patient bears what appears to be a great amount of pain with remarkable fortitude, that that individual is more deserving of credit or shows greater self-control than the one who does not; for it is a well-established fact that pain is not felt to the same degree by all individuals alike.

However, in the very same section, Finney made pejorative statements about people with a low pain threshold (they possessed a “yellow streak”, he said) and insisted that patients capable of bearing pain showed “wonderful fortitude”.

In other words, civilised, white, professional men might be exquisitely sensitive to pain but, through acts of willpower, they were capable of masking their reaction. In contrast, Finney said, the dark-skinned and the uneducated might bear “a great amount of pain with remarkable fortitude” but they did not necessarily deserve credit for it.

It was acknowledged that feeling pain was influenced by emotional and psychological states. The influence of “mental factors” on the perception of pain had been observed for centuries, especially in the context of religious torture. Agitation, ecstasy and ideological fervour were known to diminish (or even eliminate) suffering.

This peculiar aspect of pain had been explored most thoroughly in war. Military lore held that the “high excitement” of combat lessened the pain of being wounded. Even Lucretius described how when

the scythed chariots, reeking with indiscriminate slaughter, suddenly chop off the limbs … such is the quickness of the injury and the eagerness of the man’s mind that he cannot feel the pain; and because his mind is given over to the zest of battle, maimed though he be, he plunges afresh into the fray and the slaughter.

Time and again, military observers have noted how, in the heat of battle, wounded men might not feel even severe wounds. These anecdotal observations were confirmed by a systematic study carried out during the Second World War. The American physician Henry K Beecher served in combat zones on the Venafro and Cassino fronts in Italy. He was struck by how there was no necessary correlation between the seriousness of any specific wound and the men’s expressions of suffering: perhaps, he concluded, the strong emotions aroused in combat were responsible for the absence of acute pain – or the pain might also be alleviated by the knowledge that wartime wounding would release a soldier from an exceedingly dangerous environment.

Beecher’s findings were profoundly influential. As the pain researchers Harold Wolff and Stewart Wolf found in the 1950s, most people perceived pain at roughly similar intensities, but their threshold for reaction varied widely: it “depends in part upon what the sensation means to the individual in the light of his past experiences”.

Away from the battlefield, debates about the relative sensitivity of various people were not merely academic. The seriousness of suffering was calibrated according to such characterisations. Sympathy was rationed unevenly.

Myths about the lower susceptibility of certain patients to painful stimuli justified physicians prescribing fewer and less effective analgesics and anaesthetics. This was demonstrated by the historian Martin Pernick in his work on mid-19th-century hospitals. In A Calculus of Suffering (1985), Pernick showed that one-third of all major limb amputations at the Pennsylvania Hospital between 1853 and 1862 had been done without any anaesthetic, even though it was available. Distinguished surgeons such as Frank Hamilton carried out more than one-sixth of all non-military amputations on fully conscious patients.

This is not simply peculiar to earlier centuries. For instance, the belief that infants were not especially liable to experience pain (or that indications of suffering were merely reflexes) was prominent for much of the 20th century and had profound effects on their treatment. Painful procedures were routinely carried out with little, if any, anaesthetic or analgesic. Max Thorek, the author of Modern Surgical Technique (1938), claimed that “often no anaesthetic is required” when operating on young infants: indeed, “a sucker consisting of a sponge dipped in some sugar water will often suffice to calm the baby”.

As “An Earnest Englishwoman” recognised, beliefs about sentience were linked to ideas of who was considered fully human. Slaves, minority groups, the poor and others in society could also be dispossessed politically, economically and socially on the grounds that they did not feel as much as others. The “Earnest Englishwoman’s” appeal – which drew from a tradition of respect and consideration that lays emphasis on the capacity to suffer – is one that has been echoed by the oppressed and their supporters throughout the centuries.

Link: The AIDS Granny In Exile

In the ’90s, a gynecologist named Gao Yaojie exposed the horrifying cause of an AIDS epidemic in rural China — and the ensuing cover-up — and became an enemy of the state. Now 85, she lives in New York without her family, without her friends, and without regrets.

The enormous brick fortress in West Harlem was built in the mid-1970s as a visionary housing project, a new model for an affordable, self-contained urban community. Today, on a balmy September afternoon, it is a low-income housing compound lined with security cameras, guards, and triple-locked doors. A few drunks shouting at nobody in particular linger outside. Pound for pound, though, the most dangerous person living here may just be a diminutive 85-year-old Chinese grandmother dressed in a stylish purple sweater set with black leopard spots sent by her daughter in Canada.

This is not a slum. Neither is it where you would expect to find an internationally known human-rights warrior living out her golden years. In her one-bedroom apartment, Dr. Gao Yaojie — known to many as “the AIDS Granny” — moves with great difficulty through her tidy clutter and stacks of belongings. In the small kitchen, she stirs a pot of rice and bean porridge, one of the few things she can digest. She lost most of her stomach in surgery after a suicide attempt four decades ago and suffered multiple beatings during the Cultural Revolution.

A large bed where Gao’s live-in caretaker sleeps overwhelms the living room. In Gao’s bedroom, two twin beds are piled with stacks of books, photos and quilts. Her desk is heaped with papers, medications, and yet more books. Gao’s computer is always on, often clutched to her chest as she lies working in bed.

“I left China with one thing in each hand,” Gao says to me in Chinese. “A blood-pressure cuff to monitor my high blood pressure and a USB stick with more than a thousand pictures of AIDS victims.”

Before she agreed to meet me at all, she set rules via email: There would be no discussion of China’s politics, the Communist Party’s future, or the myriad issues that concern other dissidents. These are inextricably tied to her own life, but Gao does not want to be known as a multipurpose Chinese dissident. A lifetime of looking over her shoulder for danger has left her wary. She never learned English.

“I seldom see anyone,” she says. “Many people from China are very complicated. I don’t know what kind of intentions they have. I see them as cheating to get food, drinks, and money. They don’t really do any meaningful work.”

Gao believes she is watched here, just as she was in China for so many years. Given China’s well-documented pattern of stifling critical voices abroad, it’s impossible to rule out that someone is monitoring or harassing her, even in Harlem.

Money is tight. She had a fellowship through Columbia University for her first year in the U.S. Now she gets by on private donations that cover roughly $35,000 a year in expenses, the largest of those being her rent at Riverside. She has a few teeth left and can’t afford dental work.

She spends her days in bed, sleeping, writing, researching online, and obsessively analyzing what she witnessed in China in a lifetime that bridged tremendous tumult. For hours, she clicks away on her keyboard, emailing contacts back home for information and putting final touches on her newest book. She learned to use a computer at age 69.

This will be Gao’s 27th book and the ninth to chronicle China’s AIDS epidemic, a public health catastrophe that decimated entire villages and put her on the government’s enemy list. “You wouldn’t understand the earlier books, they were too technical,” she says, flashing a near-toothless grin.

“Although I am by myself, appearing to be lonely, I am actually very busy,” she says. “I am turning 86 soon and will be gone, but I will leave these things to the future generations.”

Her unplanned journey from Henan province to Harlem began 17 years ago, six months after she retired as a gynecologist and professor at the Henan Chinese Medicine University hospital in Zhengzhou. She went from being a retired grandmother to China’s first and most famous AIDS activist, and became such a thorn in the side of the regime that she eventually fled to New York for safety, away from her family and everyone she knows.

She turns to her computer and pulls up a photo of a gravely ill woman with an incision up her abdomen. Gao did not set out to become a dissident.

“I didn’t do this because I wanted to become involved in politics,” she says. “I just saw that the AIDS patients were so miserable. They were so miserable.”

In April 1996, Gao, then 69, was called from retirement to consult on a difficult case. A 42-year-old woman, Ms. Ba, had had ovarian surgery and was not getting better: Her stomach was bloated, she had a high fever and strange lesions on her skin. She grew sicker and her doctors were stumped. After finding no routine infection or illness, Gao demanded an AIDS test for the young mother.

Gao knew from her work that AIDS had entered Henan, the heartland Chinese province. Yet her colleagues scoffed: How could a simple farmer have AIDS? China had only a handful of confirmed cases. The government said AIDS was a disease of foreigners, spread through illicit drugs and promiscuous sex.

Gao insisted on a test. The results came back; Ms. Ba had AIDS. Her husband and children tested negative, which puzzled the doctors further. The patient was not a drug addict nor a prostitute, so Gao began to investigate. She determined the source was a government blood bank — Ms. Ba’s post-surgical blood transfusion infected her with HIV. “I realized the seriousness of the problem,” Gao later wrote. “If the blood in the blood bank carried the AIDS virus, then these victims would not be a small number.”

With no treatment available, Ms. Ba died within two weeks. Her husband, Gao remembers, spread a cot on the ground in front of her tomb and slept there for weeks in mourning.

Witnessing his grief launched Gao on a relentless campaign. She began investigating AIDS in Zhengzhou and nearby villages, conducting blood tests, compiling data, and trying to educate farmers about the risks carried by blood donations and transfusions.

Over months and years, her research into the epidemic took her across much of rural China. What she found astounded her: villages with infection rates of 20, 30, 40% or more; whole communities of AIDS orphans, zero treatment options, and little awareness of what was sickening and killing a generation of farmers. Worse, the population did not know how the disease spread. The numbers of those infected and dead remain secret, and the officially released data are almost universally believed to be far too low.

Gao had finally found the cause. “Even now, the government is lying, saying AIDS was transmitted because of drug use,” she says. “The government officials were very good at lying.”

The breadbasket of China, Henan is cut by the Yellow River and its seasonal, devastating floods. Through generations of extreme poverty, it developed a reputation as a place where people lie, cheat, and steal. In reality, rural Henan is not unlike Middle America, with its sweeping, open pastures, peaceful landscapes, and hardworking people. But among the poor agrarian landscape, dark and deadly ideas for amassing wealth germinate. In the early 1990s, emerging from several decades of manmade and natural disasters, floods, and famine, its best resource was people, nearly 100 million living in a China operating under the notion that “to get rich is glorious.”

Among the cruelest of these schemes was the “plasma economy,” a government-backed campaign from 1991–1995 that encouraged farmers to sell their blood. Fearing the international AIDS epidemic and viewing its own citizens as disease-free, China banned imports of foreign blood products in 1985, just as disease experts began to understand HIV and AIDS were transmitted through blood.

Modern medicine requires blood and, importantly, blood plasma, which is used to make albumin, an injection vital after surgery and for trauma victims. It is also used in medications for hemophilia and immune system disorders. And plasma is a big-money business — and a deeply controversial one — worldwide. Giving plasma is more time-consuming and painful than donating blood, so fewer people contribute for free, and it attracts people who need quick money: In the 1990s, inmates in United States prisons were pulled into plasma donation schemes; today, Mexican citizens cross into the United States to sell plasma at border-town collection stations.

Though the donors of Henan got a pittance for their blood, middlemen grew relatively wealthy on what was believed to be a pure, untainted plasma supply. Plasma traders worked to convince Chinese people traditionally opposed to giving blood — thought to be the essence of life — to sell it. Villages were festooned with red sloganeering banners: “Stick out an arm, show a vein, open your hand and make a fist, 50 kuai” (at the time, about $6), “If you want a comfortable standard of living, go sell your plasma,” and “To give plasma is an honor.”

Local officials in some places went on television, telling farmers that selling plasma would maintain healthy blood pressure. (It doesn’t.) Traders pressured families, especially women. Since females bleed every month, the cracked reasoning went, they could spare a few pints for extra income.

Though some villages were spared, often thanks to the foresight of skeptical local leaders, Henan’s poorest places, especially those with bad farmland, jumped into the blood trade with gusto. Henan officially had around 200 licensed blood and plasma collection stations; it had thousands of illegal ones. Collection stations were overwhelmed. Needles were reused time and again, as were medical tubes and bags. Sometimes, stations sped up the process by pooling blood, unknowingly re-injecting people with HIV-tainted red blood cells.

The system became a perfect delivery vehicle for HIV. Thousands upon thousands of the farmers who sold plasma to supplement meager earnings left with a viral bomb that developed into AIDS. In the years before education and life-extending antiretroviral drugs, it was a death sentence.

As Gao made her discoveries, another doctor, Wang Shuping, was finding the epidemic further south in Henan. Both tried to get provincial health officials to act, to warn people about the risk of AIDS via blood donations and transfusions, and to shut down the system. Both say their bosses and government officials told them to keep quiet.

For several years, Gao, Wang, and other doctors spoke out, but the scandal was hushed up. When people started getting sick and dying en masse, the epidemic became harder to hide.

As soon as she began making her discoveries, Gao started giving public lectures, printing AIDS education pamphlets for villagers, and speaking to the press. Still, local officials managed to keep the news contained for a few years.

By 1999, some brave Chinese investigative reporters started writing about the plasma economy and AIDS epidemic. In 2000, international media seized on the story, and Gao became a favorite media subject, seemingly unafraid, always willing to provide detailed statistics and talk about what she had found in the hidden epidemic.

Gao and the other doctors finally convinced China to ban plasma-for-cash programs and shut down unlicensed blood collection centers, but the damage was already done to thousands infected with HIV and hepatitis. (And despite the reforms, smaller illegal plasma operations still continued to pop up in rural villages.) This was not without pushback: Gao was threatened, blocked from speaking, had her own photos of AIDS victims confiscated, and believes her phone was tapped for years. Then there were the young men who followed her everywhere, forcing her to sneak out to do her work in rural areas under cover of night.

Gao continued to work to educate rural people about the disease and push for legal rights for victims. She inspired dozens of young volunteers, like the activist Hu Jia, to travel to Henan to donate money, food, and clothing over the years. But as the government tightened its controls and increased threats, volunteers stopped going. Gao, targeted more than most, kept sneaking in. She traveled undercover, visiting families and orphans and passing out her pamphlets.

Her charity embarrassed local officials who weren’t doing the job, and several became enraged. In one particular AIDS village, Gao learned the mayor had put a 500 yuan ($82) bounty on her head. Any villager who caught her in town and told police would get the huge sum. In all the years she visited, donated, and brought journalists in to investigate, Gao says, “they didn’t even try to catch me, they didn’t want to turn me in.”

Gao focused her attention, and her own family’s bank account, on the AIDS orphans, chastising the government to admit what had happened and make reparations. For that she became a target, as did those who accepted her gifts. Local officials wanted credit for helping AIDS victims, though according to her, most did very little.

“I gave them money,” she says, nodding toward a photo of a young woman. “She sold blood at age 16 and died at 22. I gave her 100 kuai ($16). If you gave them money and other things, they had to say it came from the government; they would have to thank the Communist Party.”

China has never provided a full accounting of the infection rate and death toll from the plasma disaster in Henan and surrounding provinces. Low estimates say 50,000 people contracted the virus through selling blood; many other sources put the number at 1 million or more. Another million may have contracted HIV through transfusions of the contaminated blood. Gao believes as many as 10 million people might have been infected, but she is alone in that high estimate.

China recently acknowledged AIDS is its leading cause of death among infectious diseases. In 2011, a joint U.N.–Chinese government report estimated 780,000 people in China are living with HIV, just 6.6% of them infected via the plasma trade, in Henan and three surrounding provinces. The real numbers are subject to debate and almost certainly higher, say global health experts. That figure also includes China’s original, larger AIDS epidemic that entered from Burma into Yunnan province along the drug trade route in 1989, about which the government has been much more open. There is no way to trace how many of China’s acknowledged AIDS cases are linked to the Henan plasma disaster. This is not an accident.
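Multiplying out the figures quoted above helps explain why experts consider the official number low: 6.6 per cent of 780,000 is only about 51,000 people, which matches the lowest estimate of the plasma-trade toll and sits far below the million-plus figures. A minimal sketch of that arithmetic, using only the numbers already cited in this piece:

```python
# Rough arithmetic check using only the figures quoted above:
# 780,000 people officially living with HIV (2011 joint report),
# of whom 6.6% are attributed to the plasma trade.
officially_living_with_hiv = 780_000
plasma_trade_share = 0.066

plasma_trade_cases = officially_living_with_hiv * plasma_trade_share
print(f"Implied plasma-trade infections: about {plasma_trade_cases:,.0f}")
# Prints roughly 51,480 — in line with the lowest estimate (50,000)
# and far below the 1-million-plus figures many other sources cite.
```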

“You understand the situation?” Gao asks. “One thing is lying and the other is cheating. Fraud. From top to bottom, you cannot believe in government officials at any level. Cheating, lying, and fraud are what they do.”

Link: Antibiotics, Capitalism and the Failure of the Market

In March 2013, England’s Chief Medical Officer, Dame Sally Davies, gave the stark warning that antimicrobial resistance poses “a catastrophic threat”. Unless we act now, she argued, “any one of us could go into hospital in 20 years for minor surgery and die because of an ordinary infection that can’t be treated by antibiotics. And routine operations like hip replacements or organ transplants could be deadly because of the risk of infection.”[1]

Over billions of years, bacteria have encountered a multitude of naturally occurring antibiotics and have consequently developed resistance mechanisms to survive. The primary emergence of resistance is random, coming about by DNA mutation or gene exchange with other bacteria. However, the further use of antibiotics then favours the spread of those bacteria that have become resistant.

More than 70% of the pathogenic bacteria that cause healthcare-acquired infections are resistant to at least one of the drugs most commonly used to treat them.[2][3] Increasing resistance in bacteria like Escherichia coli (E. coli) is a growing public health concern because of the very limited therapy options for the infections they cause. This is particularly so for E. coli that is resistant to carbapenem antibiotics, the drugs of last resort.

The emergence of resistance is a complex issue involving the inappropriate use and overuse of antimicrobials in humans and animals. Antibiotics may be administered by health professionals or farmers when they are not required, or patients may take only part of a full course of treatment. This gives bacteria the opportunity to encounter these otherwise life-saving drugs at ineffective levels, survive, and mutate to produce resistant strains. Once created, resistant strains have been allowed to spread by poor infection control and regional surveillance procedures.

These two problems are easily solved by educating healthcare professionals, patients and animal keepers about the importance of antibiotic treatment regimens and keeping to them. Advocating good infection control procedures in hospitals, and investing in surveillance programs that monitor patterns of resistance locally and across the country, would reduce the spread of infection. However, the biggest problem is capitalism and the fact that there is no supply of new antimicrobials.

Between 1929 and the 1970s, pharmaceutical companies developed more than twenty new classes of antimicrobials.[4][5] Since the 1970s, only two new categories of antimicrobials have arrived.[6][7] Today the pipeline for new antibiotic classes active against highly resistant Gram-negative bacteria is dry;[8][9][10] the only novel category in early clinical development has recently been withdrawn.[9][11]

For the last seventy years the human race has kept itself ahead of resistant bacteria by going back into the laboratory and developing the next generation of antimicrobials. However, due to a failure of the market, pharmaceutical companies are no longer interested in developing antibiotics.

Despite the warnings from Dame Sally Davies, drug companies have pulled back from antimicrobial research because there is no profit to be made from it. When used appropriately, a single £100 course of antibiotics will save someone’s life. However, that clinical effectiveness and short-term use have the unfortunate consequence of making antimicrobials significantly less profitable than the pharmaceuticals used in cancer therapy, which can cost £20,000 per year.

In our current system, a drug company’s return on its financial investment in antimicrobials depends on its volume of sales. A further problem arises when we factor in the educational programs aimed at teaching healthcare professionals and animal keepers to limit their use of antimicrobials. This, combined with the relative unprofitability, has produced a failure in the market and a paradox for capitalism.

A response commonly proposed by my fellow scientists is that our government must provide incentives for pharmaceutical companies to develop new antimicrobial drugs. Suggestions are primarily focused on reducing the financial risk for drug companies and include grants, prizes, tax breaks, public-private partnerships and stronger intellectual property protections. Further suggestions often relate to removing “red tape” and streamlining drug approval and clinical trial requirements.

In September 2013 the Department of Health published its UK Five Year Antimicrobial Resistance Strategy.[12] The document called for “work to reform and harmonise regulatory regimes relating to the licencing and approval of antibiotics”, better collaboration “encouraging greater public-private investment in the discovery and development of a sustainable supply of effective new antimicrobials”, and stated that “Industry has a corporate and social responsibility to contribute to work to tackle antimicrobial resistance.”

I think we should have three major objections to these statements. First, the managers of the pharmaceutical industry do not have any responsibility to contribute to work to tackle antimicrobial resistance. They have a responsibility to operate within the law (or be fined) and to make a profit for shareholders (or be replaced). It is the state that has the responsibility for the protection and wellbeing of its citizens.

Secondly, following this year’s horsemeat scandal, we should object to companies cutting corners in an attempt to increase profits. This leads on to the final objection: by promoting public-private collaboration, all the state is doing is subsidising shareholder profits by reducing the shareholders’ financial risk.

The market has failed and novel antimicrobials will require investment not based on a financial return from the volume of antibiotics sold but on the benefit for society of being free from disease.

John Maynard Keynes, in his 1924 Sydney Ball Foundation Lecture at Cambridge, said “the important thing for government is not to do things which individuals are doing already, and to do them a little better or a little worse; but to do those things which at present are not done at all”.[13] Mariana Mazzucato, in her 2013 book The Entrepreneurial State, discusses how the state can lead innovation and criticises the risk and reward relationships in current public-private partnerships.[14] Mazzucato argues that the state can be entrepreneurial and inventive and that we need to reinvent the state and government.

This praise of the potential of the state seems to be supported by the public. Following announcements of energy price rises in October 2013, a YouGov poll found that people opposed the NHS being run by the private sector by a margin of 12 to 1; 67% were in favour of Royal Mail being run in the public sector; 66% wanted the railway companies to be nationalised; and 68% were in favour of nationalised energy companies.[15]

We should support state-funded professors, post-doctoral researchers and PhD students as scientists working within the public sector. They could study the mechanisms of drug entry into bacterial cells or screen natural antibiotic compounds. This could not be done on a shoestring budget, and it would no doubt take years to build the infrastructure, but we could, for example, make the case for where the research should take place.

Andrew Witty’s recent review of higher education and regional growth asked universities to become more involved in their local economies.[16] The state could choose to build laboratories in geographical areas neglected by private sector investment and help promote regional recovery. Even more radically, if novel antibiotics are produced for their social good rather than financial gain, they can be reserved indefinitely until a time of crisis.

With regard to democracy, patients and the general public could have a greater say in what is researched, helping to shift us away from our reliance on the market to provide what society needs. The market responds not to what society needs but to what will create the most profit. This is a recurring theme throughout science. I cannot begin to tell you how frequently I listen to case studies about parasites that only affect people in the developing world. Because the people of the developing world have very little money, drug companies neglect to develop drugs for them, as there is no source of profit. We should make the case for innovation driven not by greed but by the service of society and even our species.

Before Friedrich Hayek, John Desmond Bernal, in his 1939 book The Social Function of Science, argued for more spending on innovation, as science was not merely an abstract intellectual enquiry but of real practical value.[17] Bernal placed science and technology among the driving forces of history. Why should we not follow that path?

Link: The Nazi Anatomists

Link: YOU HAVE DIED OF DYSENTERY

Children of the 70s and 80s will likely remember Oregon Trail, the computer game where the player assumes the role of wagon leader and guides a group of settlers through the pioneer landscape of 19th-century America. You would hunt bison, shoot rabbits, ford rivers and pick up other settlers as you made your way from Missouri to Oregon. But just as you really got into the game, this would happen.

If you are like me, you probably shouted: ‘NOT AGAIN!’

So what exactly is dysentery, and why did you and all your settlers keep dying from it in Oregon Trail?

Dysentery is an intestinal inflammation that causes severe diarrhea, usually characterised by mucus or blood in the feces. Left untreated, the disease can lead to rapid loss of fluids, dehydration, and eventually death.

There are two forms of dysentery. One is caused by a bacterium, the other by an amoeba. The former is the most common in Western Europe and the United States, and is typically spread through contaminated food and water.

Outbreaks of dysentery were more prevalent during war, when the disease spread rampantly because of the unhygienic conditions of the camps. During the Mexican War (1846–48), a staggering 88% of deaths were due to infectious disease, overwhelmingly dysentery. For every man killed in battle, seven died of disease. The American Civil War was no better. You were more likely to die off the battlefield than on it, and dysentery was the primary cause.

That said, civilians also died of dysentery with some frequency in the 19th century, especially those who were itinerant. Pioneers travelling the Oregon Trail wouldn’t have fared much better than soldiers fighting in war. They would have travelled in large groups—wagon after wagon trailing one another—and their access to clean water and food would have been severely limited. In 1853, one pioneer wrote in her diary: ‘Still in camp, husband and myself being sick (caused, we suppose, by drinking the river water, as it looks more like dirty suds than anything else)’.

Diseases such as tuberculosis, flu, measles and smallpox spread like wildfire through their crowded, makeshift camps. Dysentery would have been one of the leading causes of death amongst these pioneers, although it is difficult to determine just how many died from it as medical records were typically not kept.

What we do know is that roughly 20,000 people died travelling the 2,000-mile trail in the 19th century. To put that in perspective: there was an average of ten graves per mile. Burials were often hastily done right in the middle of the trail. This would allow wagons and animals to trample down the grave so that the scent of decomposition was erased and wolves wouldn’t feast on the remains.

In another diary from the period, one pioneer writes: ‘A grave three feet deep and wide enough to receive the eleven victims [of a massacre] was dug, and the bodies placed in it. Wolves excavated the grave and devoured the remains…[Volunteers] gathered up the bones, placed them in a wagon box, and again buried them.’

So there you have it.  Life on the Oregon Trail was just as rough as the computer game would have us believe. Food was scarce. Roads were treacherous. And disease was rampant.

I will never again complain about the inconveniences of air travel.

Link: Beyond Recognition

The incredible story of a face transplant.

… Like the patients who came before her, Tarleton’s journey has been something of an unfathomable one. In the summer of 2007, she was the victim of a brutal attack perpetrated by her ex-husband, Herbert Rodgers. He broke into her home in the dead of night, carrying a baseball bat and a bottle of industrial-strength lye. He used both, and he didn’t stop until Tarleton had sustained what one doctor later described as “the most horrific injury a human being could suffer.”

Tarleton awoke from a three-month induced coma in September of that year. Her body, marred by deep chemical burns, was wrapped in bandages and covered in grafts — some taken from cadavers, the rest harvested from her own legs. Her eyelids were gone, as was her left ear. She couldn’t blink, smile, or breathe through her nose.

During that coma, doctors performed 38 surgeries to repair what deficits they could. And over a period of five years, she would undergo another 17 operations, including a series of synthetic corneal implants that eventually restored partial vision to one eye. Despite these efforts, Tarleton’s progress eventually stalled — given the limitations of conventional procedures, it was impossible that full facial functions, from movement to sensation, would ever return. And her face, there was no question, would never look the way it had before. “I had forgotten what it was like to look more normal,” she says. “I had to accept that I would always look this way, and I had to be okay with that.”

Ironically, it wasn’t until Tarleton had cultivated this acceptance, she says, that the prospect of a face transplant emerged. In December of 2011, she received a striking proposition from Dr. Bohdan Pomahac at Brigham and Women’s Hospital in Boston. He had recently performed the first successful full face-transplant in the US, and he wanted to know if Tarleton would consider the procedure.

It wasn’t an easy answer. Before being approved for a face transplant, Tarleton would need to travel two hours from her home in Vermont to Boston, several times over several months, for extensive physical and psychological exams. Doctors needed to be sure that her immune system could cope with the procedure, and assess the blood vessels, nerves, and muscles deep within her skull. A team of psychological experts would evaluate Tarleton’s mental health and the strength of her support network. The procedure itself would be grueling and dangerous, and the rehabilitation process would be extensive. But the payoff — the prospect of eyes that could blink, a mouth able to kiss — would transform her life.

Several months after that call, Tarleton had cleared every hurdle, and her name was added to a waitlist while surgeons searched for viable donors. To meet the criteria, a donor had to be brain dead with no prospect for recovery — the harvested tissue needs to be flushed with blood and nutrients until the last possible moment — and be an adequate match for Tarleton’s skin tone and texture, as well as her age and sex. In her case, it took 14 months before that donor, Cheryl, was found.

Link: The Obesity Era

As the American people got fatter, so did marmosets, vervet monkeys and mice. The problem may be bigger than any of us. 

Years ago, after a plane trip spent reading Fyodor Dostoyevsky’s Notes from the Underground and Weight Watchers magazine, Woody Allen melded the two experiences into a single essay. ‘I am fat,’ it began. ‘I am disgustingly fat. I am the fattest human I know. I have nothing but excess poundage all over my body. My fingers are fat. My wrists are fat. My eyes are fat. (Can you imagine fat eyes?).’ It was 1968, when most of the world’s people were more or less ‘height-weight proportional’ and millions of the rest were starving. Weight Watchers was a new organisation for an exotic new problem. The notion that being fat could spur Russian-novel anguish was good for a laugh.

That, as we used to say during my Californian adolescence, was then. Now, 1968’s joke has become 2013’s truism. For the first time in human history, overweight people outnumber the underfed, and obesity is widespread in wealthy and poor nations alike. The diseases that obesity makes more likely — diabetes, heart ailments, strokes, kidney failure — are rising fast across the world, and the World Health Organisation predicts that they will be the leading causes of death in all countries, even the poorest, within a couple of years. What’s more, the long-term illnesses of the overweight are far more expensive to treat than the infections and accidents for which modern health systems were designed. Obesity threatens individuals with long twilight years of sickness, and health-care systems with bankruptcy.

And so the authorities tell us, ever more loudly, that we are fat — disgustingly, world-threateningly fat. We must take ourselves in hand and address our weakness. After all, it’s obvious who is to blame for this frightening global blanket of lipids: it’s us, choosing over and over again, billions of times a day, to eat too much and exercise too little. What else could it be? If you’re overweight, it must be because you are not saying no to sweets and fast food and fried potatoes. It’s because you take elevators and cars and golf carts where your forebears nobly strained their thighs and calves. How could you do this to yourself, and to society?

Moral panic about the depravity of the heavy has seeped into many aspects of life, confusing even the erudite. Earlier this month, for example, the American evolutionary psychologist Geoffrey Miller expressed the zeitgeist in this tweet: ‘Dear obese PhD applicants: if you don’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation. #truth.’ Businesses are moving to profit on the supposed weaknesses of their customers. Meanwhile, governments no longer presume that their citizens know what they are doing when they take up a menu or a shopping cart. Yesterday’s fringe notions are becoming today’s rules for living — such as New York City’s recent attempt to ban large-size cups for sugary soft drinks, or Denmark’s short-lived tax surcharge on foods that contain more than 2.3 per cent saturated fat, or Samoa Air’s 2013 ticket policy, in which a passenger’s fare is based on his weight because: ‘You are the master of your air ‘fair’, you decide how much (or how little) your ticket will cost.’

Several governments now sponsor jauntily named pro-exercise programmes such as Let’s Move! (US), Change4Life (UK) and actionsanté (Switzerland). Less chummy approaches are spreading, too. Since 2008, Japanese law has required companies to measure and report the waist circumference of all employees between the ages of 40 and 74 so that, among other things, anyone over the recommended girth can receive an email of admonition and advice.

Hand-in-glove with the authorities that promote self-scrutiny are the businesses that sell it, in the form of weight-loss foods, medicines, services, surgeries and new technologies. A Hong Kong company named Hapilabs offers an electronic fork that tracks how many bites you take per minute in order to prevent hasty eating: shovel food in too fast and it vibrates to alert you. A report by the consulting firm McKinsey & Co predicted in May 2012 that ‘health and wellness’ would soon become a trillion-dollar global industry. ‘Obesity is expensive in terms of health-care costs,’ it said before adding, with a consultantly chuckle, ‘dealing with it is also a big, fat market.’

And so we appear to have a public consensus that excess body weight (defined as a Body Mass Index of 25 or above) and obesity (BMI of 30 or above) are consequences of individual choice. It is undoubtedly true that societies are spending vast amounts of time and money on this idea. It is also true that the masters of the universe in business and government seem attracted to it, perhaps because stern self-discipline is how many of them attained their status. What we don’t know is whether the theory is actually correct.
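To make those thresholds concrete, here is a minimal sketch assuming the standard BMI formula (weight in kilograms divided by the square of height in metres); the function names and the example figures are illustrative, not taken from the article:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2


def classify(bmi_value: float) -> str:
    """Apply the cut-offs cited above: 25 or more is 'overweight', 30 or more is 'obese'."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight"


# Hypothetical example: a 90 kg person who is 1.75 m tall.
value = bmi(90, 1.75)
print(f"BMI = {value:.1f} -> {classify(value)}")  # BMI = 29.4 -> overweight
```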

Of course, that’s not the impression you will get from the admonishments of public-health agencies and wellness businesses. They are quick to assure us that ‘science says’ obesity is caused by individual choices about food and exercise. As the Mayor of New York, Michael Bloomberg, recently put it, defending his proposed ban on large cups for sugary drinks: ‘If you want to lose weight, don’t eat. This is not medicine, it’s thermodynamics. If you take in more than you use, you store it.’ (Got that? It’s not complicated medicine, it’s simple physics, the most sciencey science of all.)

Yet the scientists who study the biochemistry of fat and the epidemiologists who track weight trends are not nearly as unanimous as Bloomberg makes out. In fact, many researchers believe that personal gluttony and laziness cannot be the entire explanation for humanity’s global weight gain. Which means, of course, that they think at least some of the official focus on personal conduct is a waste of time and money. As Richard L Atkinson, Emeritus Professor of Medicine and Nutritional Sciences at the University of Wisconsin and editor of the International Journal of Obesity, put it in 2005: ‘The previous belief of many lay people and health professionals that obesity is simply the result of a lack of willpower and an inability to discipline eating habits is no longer defensible.’

Link: The Girl Who Turned to Bone

Unexpected discoveries in the quest to cure an extraordinary skeletal condition show how medically relevant rare diseases can be.

When Jeannie Peeper was born in 1958, there was only one thing amiss: her big toes were short and crooked. Doctors fitted her with toe braces and sent her home. Two months later, a bulbous swelling appeared on the back of Peeper’s head. Her parents didn’t know why: she hadn’t hit her head on the side of her crib; she didn’t have an infected scratch. After a few days, the swelling vanished as quickly as it had arrived.

When Peeper’s mother noticed that the baby couldn’t open her mouth as wide as her sisters and brothers, she took her to the first of various doctors, seeking an explanation for her seemingly random assortment of symptoms. Peeper was 4 when the Mayo Clinic confirmed a diagnosis: she had a disorder known as fibrodysplasia ossificans progressiva (FOP).

The name meant nothing to Peeper’s parents—unsurprising, given that it is one of the rarest diseases in the world. One in 2 million people has it.

Peeper’s diagnosis meant that, over her lifetime, she would essentially develop a second skeleton. Within a few years, she would begin to grow new bones that would stretch across her body, some fusing to her original skeleton. Bone by bone, the disease would lock her into stillness. The Mayo doctors didn’t tell Peeper’s parents that. All they did say was that Peeper would not live long.

“Basically, my parents were told there was nothing that could be done,” Peeper told me in October. “They should just take me home and enjoy their time with me, because I would probably not live to be a teenager.” We were in Oviedo, Florida, in an office with a long, narrow sign that read The International Fibrodysplasia Ossificans Progressiva Association. Peeper founded the association 25 years ago, and remains its president. She was dressed in a narrow-waisted black skirt and a black-and-white striped blouse. A large ring in the shape of a black flower encircled one of her fingers. Her hair was peach-colored.

Peeper sat in a hulking electric wheelchair tilted back at a 30-degree angle. Her arms were folded, like those of a teacher who has run out of patience. Her left hand was locked next to her right biceps. I could make out some of the bones under the skin of her left arm: long, curved, extraneous.

“It’s good to finally meet you,” she said when I walked in. Her face was almost entirely frozen; she spoke by drawing her lower lip down and out to the sides. Bones had immobilized her neck, so she had to look at me with a sidelong gaze. Her right hand, resting on her wheelchair’s joystick, contained the only free-moving joint in her body. It rose and swung toward me. We shook hands.

Peeper’s condition is extremely rare—but in that respect, she actually has a lot of company. A rare disease is defined as any condition affecting fewer than 200,000 patients in the United States. More than 7,000 such diseases exist, afflicting a total of 25 million to 30 million Americans.

The symptoms of these diseases may differ, but the people who suffer from them share many experiences. Rare diseases frequently go undiagnosed, or misdiagnosed, for years. Once people do find out that they suffer from a rare disease, many discover that medicine cannot help them. Not only is there no drug to prescribe, but in many cases, scientists have little idea of the underlying cause of the disease. And until recently, people with rare diseases had little reason to hope this would change. The medical-research establishment treated them as a lost cause, funneling resources to more-common ailments like cancer and heart disease.

In 1998, this magazine ran a story recounting the early attempts by scientists to understand fibrodysplasia ossificans progressiva. Since then, their progress has shot forward. The advances have come thanks in part to new ways of studying cells and DNA, and in part to Jeannie Peeper.

Starting in the 1980s, Peeper built a network of people with FOP. She is now connected to more than 500 people with her condition—a sizable fraction of all the people on Earth who suffer from it. Together, members of this community did what the medical establishment could not: they bankrolled a laboratory dedicated solely to FOP and have kept its doors open for more than two decades. They have donated their blood, their DNA, and even their teeth for study.

Meanwhile, the medical establishment itself has shifted its approach to rare diseases, figuring out ways to fund research despite the inherently limited audience. Combined with Peeper’s dedication, this sea change has enabled scientists to pinpoint the genetic mutation that causes her disease and to begin developing drugs that could treat, and possibly even cure, it.

Although rare diseases are still among the worst diagnoses to receive, it would not be a stretch to say there’s never been a better time to have one.

When Peeper’s parents received their daughter’s diagnosis, they didn’t tell her. She enjoyed a kickball-and-bicycles childhood in Ypsilanti, Michigan, and only became aware of her disorder when she was 8.

“I remember vividly, because I was getting dressed for Sunday school,” she told me. She realized that she could no longer fit her left hand through her sleeve. “My left wrist had locked in a backwards position”—the result of a new bone that had grown in her arm.

Peeper’s doctors took a muscle biopsy from her left forearm. Afterward, she wore a cast for six weeks. When it came off, she couldn’t flex her elbow. A new bone had frozen the joint.

Over the next decade, as Peeper grew more bones—rigid sheets stretching across her back, her right elbow locking, her left hip freezing—she became accustomed to pain.

Link: Paracetamol/Acetaminophen Can Soften Our Moral Reactions

Our moral reactions are easily influenced by a variety of factors. One of them is anxiety. When people are confronted with disturbing experiences like mortality salience (i.e., being made aware of their own eventual death), they tend to affirm their moral beliefs. As a result, they feel inclined to punish moral transgressions more harshly than they would without feeling fundamentally threatened. For example, in a now-classic study, people who objected to prostitution were asked to suggest a penalty for a woman arrested for prostitution. Participants who were led to reflect on their own mortality beforehand proposed a far higher bail than participants who thought about a less anxiety-inducing topic. Such belief-affirmation effects can also be evoked by psychologically disturbing experiences less severe than mortality salience. Hence, anxiety aroused by different situations can make our moral reactions more pronounced.

A few days ago, an interesting study was published in “Psychological Science”. The authors showed that the common over-the-counter pain reliever paracetamol counteracts the belief-affirming effect of anxiety. Participants who took a placebo showed the familiar response pattern in the “prostitution paradigm”: they suggested a harsher penalty for the prostitute under mortality salience (a bail of around $450) compared with a control condition (around $300). Participants who took paracetamol, however, didn’t react to mortality salience. Independent of what they had reflected on before, they suggested the same penalty for the prostitute (around $300). Paracetamol seems to have reduced the fundamental anxiety participants felt due to the mortality salience manipulation, so they didn’t have to affirm their moral beliefs as strongly. In a second experiment, the same effect of paracetamol was shown using a different disturbing experience (a surrealistic movie instead of mortality salience) and a different measure of belief affirmation (a fine for rioters instead of a bail for a prostitute).

Hence, besides killing physical pain, paracetamol seems to be capable of counteracting the effect anxiety has on our moral reactions. From a scientific perspective, this certainly is an interesting finding. But what can we make of it from a practical ethics perspective? If we want a person’s moral reaction to be the result of cognition rather than emotion, paracetamol could be a means of bias reduction. However, some people might argue that if a person’s moral belief is the “correct” one, wanting transgressions to be punished comparatively severely might not be such a bad thing, even if the motivation for that is anxiety.

Link: Caring on Stolen Time: A Nursing Home Diary

I work in a place of death. People come here to die, and my co-workers and I care for them as they make their journeys. Sometimes these transitions take years or months. Other times, they take weeks or some short days. I count the time in shifts, in scheduled state visits, in the sham monthly meetings I never attend, in the announcements of the “Employee of the Month” (code word for best ass-kisser of the month), in the yearly pay increment of 20 cents per hour, and in the number of times I get called into the Human Resources office.

The nursing home residents also have their own rhythms. Their time is tracked by scheduled hospital visits; by the times when loved ones drop by to share a meal, to announce the arrival of a new grandchild, or to wait anxiously at their bedsides for heart-wrenching moments to pass. Their time is measured by transitions from processed food to pureed food, textures that match their increasing susceptibility to dysphagia. Their transitions are also measured by the changes from underwear to pull-ups and then to diapers. Even more than the loss of mobility, the use of diapers is often the most dreaded adaptation. For many people, lack of control over urinary functions and timing is the definitive mark of the loss of independence.

Many of the elderly I have worked with are, at least initially, aware of the transitions and respond with a myriad of emotions from shame and anger to depression, anxiety, and fear. Theirs was the generation that survived the Great Depression and fought the last “good war.” Aging was an anti-climactic twist to the purported grandeur and tumultuousness of their mid-twentieth-century youth.

“I am afraid to die. I don’t know where I will go,” a resident named Lara says to me, fear dilating her eyes.

“Lara, you will go to heaven. You will be happy,” I reply, holding the spoonful of pureed spinach to her lips. “Tell me about your son, Tobias.”

And so Lara begins, the same story of Tobias, of his obedience and intelligence, which I have heard over and over again for the past year. The son whom she loves, whose teenage portrait stands by her bedside. The son who has never visited, but whose name and memory calm Lara.

Lara is always on the lookout, especially for Alba and Mary, the two women with severe dementia who sit on both sides of her in the dining room. To find out if Alba is enjoying her meal, she will look to my co-worker Saskia to ask, “Is she eating? If she doesn’t want to, don’t force her to eat. She will eat when she is hungry.” Alba, always cheerful, smiles. Does she understand? Or is she in her usual upbeat mood? “Lara, Alba’s fine. With you watching out for her, of course she’s OK!” We giggle. These are small moments to be cherished.

In the nursing home, such moments are precious because they are accidental moments.

The residents run on stolen time. Alind, a certified nursing assistant (CNA) like me, comments, “Some of these residents are already dead before they come here.”

By “dead,” he is not referring to the degenerative effects of dementia and Alzheimer’s disease but to the sense of hopelessness and loneliness that many of the residents feel, not just because of physical pain, not just because of old age, but as a result of the isolation, the abandonment by loved ones, the anger of being caged within the walls of this institution. This banishment is hardly the ending they toiled for during their industrious youth.

By death, Alind was also referring to the many times “I’m sorry” is uttered in embarrassment, and to the tearful shrieks of shame that sometimes follow when residents soil their clothes. This is the dying to which we, nursing home workers, bear witness every day; the death that the home is expected, somehow, to reverse.

So management tries, through bowling, through bingo and checkers, through Frank Sinatra sing-alongs, to resurrect what has been lost to time, migration, the exigencies of the market, and the capriciousness of life. They substitute hot tea and cookies with strangers for the warmth of family and friends. Loved ones, occupied by the same patterns of migration, work, and ambition, ease their worries and guilt with pictures and reports of their relatives in these settings. We, the CNAs, shuffle in and out of these staged moments to carry the residents off for toileting. The music playing in the building’s only bright and airy room is not for us, the immigrants, the lower hands, to plan for or share with the residents. Ours is a labor confined to the bathroom, to the involuntary, lower functions of the body. Instead of people of color in uniformed scrubs, white women in pretty clothes are paid more to care for the leisure-time activities of the old white people. The monotony and stress of our tasks are ours to bear alone.

The nursing home bosses freeze the occasional, carefully selected, picture-perfect moments on the front pages of their brochures, exclaiming that their facility, one of a group of Catholic homes, is indeed a place where “life is appreciated,” where “we care for the dignity of the human person.” In reality, they have not tried to make that possible. Under poor conditions, we have improvised for genuine human connection to exist. How we do that, the bosses do not understand.


Do No Harm: On Body Integrity Disorder

Why do some people want to cut off a perfectly healthy limb?

This wasn’t the first time that David had tried to amputate his leg. When he was just out of college, he’d tried to do it using a tourniquet fashioned out of an old sock and strong baling twine.

David locked himself in his bedroom at his parents’ house, his bound leg propped up against the wall to prevent blood from flowing into it. After two hours the pain was unbearable, and fear sapped his will.

Undoing a tourniquet that has starved a limb of blood can be fatal: injured muscles downstream of the blockage flood the body with toxins, causing the kidneys to fail. Even so, David released the tourniquet himself; it was just as well that he hadn’t mastered the art of tying one.

Failure did not lessen David’s desire to be rid of the leg. It began to consume him, to dominate his awareness. The leg was always there as a foreign body, an impostor, an intrusion.

He spent every waking moment imagining freedom from the leg. He’d stand on his “good” leg, trying not to put any weight on the bad one. At home, he’d hop around. While sitting, he’d often push the leg to one side. The leg just wasn’t his. He began to blame it for keeping him single; but living alone in a small suburban townhouse, afraid to socialise and struggling to form relationships, David was unwilling to let anyone know of his singular fixation.

David is not his real name. He wouldn’t discuss his condition without the protection of anonymity. After he agreed to talk, we met in the waiting area of a nondescript restaurant, in a nondescript mall just outside one of America’s largest cities. A handsome man, David resembles a certain edgy movie star whose name, he fears, might identify him to his co-workers. He’s kept his secret well hidden: I am only the second person in whom he has confided, face to face, about his leg.

The cheerful guitar music in the restaurant lobby clashed with David’s mood. He choked up as he recounted his depression. I’d heard his voice cracking when we’d spoken earlier on the phone, but watching this grown man so full of emotion was difficult. The restaurant’s buzzer went off. Our table inside was ready, but David didn’t want to go in. Even though his voice was shaking, he wanted to keep talking.

“It got to the point where I’d come into my house and just cry,” he had told me earlier over the phone. “I’d be looking at other people and seeing that they already have their lives going good for them. And I’m stuck here, all miserable. I’m being held back by this strange obsession. The logic going through my head was that I need to take care of this now, because if I wait any longer, there is not much chance of a life for me.”

It took some time for David to open up. Early on, when we were just getting to know each other, he was shy and polite, confessing that he wasn’t very good at talking about himself. He had avoided seeking professional psychiatric help, afraid that doing so would somehow endanger his employment. And yet he knew that he was slipping into a dark place. He began associating his house with the feeling of being alone and depressed. Soon he came home only to sleep; he couldn’t be in the house during the day without breaking into tears.

One night about a year ago, when he could bear it no longer, David called his best friend. There was something he had been wanting to reveal his whole life, David told him. His friend’s response was empathetic — exactly what David needed. Even as David was speaking he began searching online for material. “He told me that there was something in my eyes the whole time I was growing up,” David said. “It looked like I had pain in my eyes, like there was something I wasn’t telling him.” Once David opened up, he discovered that he was not alone. He found a community on the internet of others who were also desperate to excise some part of their body — usually a limb, sometimes two. These people were suffering from what is now called Body Integrity Identity Disorder (BIID).

The online community has been a blessing to those who suffer from BIID, and through it many discover that their malaise has an official name. With a handful of websites and a few thousand members, the community even has its internal subdivisions: “devotees” are fascinated by or attracted to amputees, often sexually, but don’t want amputations themselves; “wannabes” strongly desire an amputation of their own. A further delineation, “need-to-be,” describes someone whose desire for amputation is particularly fierce.

It was a wannabe who told David about a former BIID patient who had been connecting other sufferers to a surgeon in Asia. For a fee, this doctor would perform off-the-book amputations. David contacted this gatekeeper on Facebook, but more than a month passed without a reply. As his hopes of surgery began to fade, David’s depression deepened. The leg intruded more insistently into his thoughts. He decided to try again to get rid of it himself.

This time he settled for dry ice, one of the preferred methods of self-amputation among the BIID community. The idea is to freeze the offending limb and damage it to the point that doctors have no choice but to amputate. David drove over to his local Walmart and bought two large trashcans. The plan was brutal, but simple. First, he would submerge the leg in a can full of cold water to numb it. Then he would pack it in a can full of dry ice until it was injured beyond repair.

He bought rolls of bandages, but he couldn’t find the dry ice or the prescription painkillers he needed if he was going to keep the leg in dry ice for eight hours. David went home despondent, with just two trashcans and bandages, preparing himself mentally to go out the next day to find the other ingredients. The painkillers were essential; he knew that without them he would never succeed. Then, before going to bed that night, he checked his computer.

There it was: a message. The gatekeeper wanted to talk.

We are only just beginning to understand BIID. It hasn’t helped that the medical establishment has generally dismissed the condition as a perversion. Yet there is evidence that it has existed for hundreds of years. In a recent paper, Peter Brugger, the head of neuropsychology at University Hospital Zurich, Switzerland, cites the case of an Englishman who went to France in the late 18th century and asked a surgeon to amputate his leg. When the surgeon refused, the Englishman held him up at gunpoint, forcing him to perform the operation. After returning home, he sent the surgeon 250 guineas and a letter of thanks, in which he wrote that his leg had been “an invincible obstacle” to his happiness.

The first modern account of the condition dates from 1977, when The Journal of Sex Research published a paper on “apotemnophilia” — the desire to be an amputee. The paper categorised the desire for amputation as a paraphilia, a catchall term used for deviant sexual desires. Although it’s true that most people who desire such amputations are sexually attracted to amputees, the term paraphilia has long been a convenient label for misunderstandings: after all, at one time homosexuality was also labelled as paraphilia.

One of the co-authors of the 1977 paper was Gregg Furth, who eventually became a practising psychologist in New York. Furth himself suffered from the condition and, over time, became a major figure in the BIID underground. He wanted to help people deal with their problem, but medical treatment was always controversial — often for good reason. In 1998, Furth introduced a friend to an unlicensed surgeon who agreed to amputate the friend’s leg in a Tijuana clinic. The patient died of gangrene and the surgeon was sent to prison. A Scottish surgeon named Robert Smith, who practised at the Falkirk and District Royal Infirmary, briefly held out legal hope for BIID sufferers by openly performing voluntary amputations, but a media frenzy in 2000 led British authorities to forbid such procedures. The Smith affair fuelled a series of articles about the condition — some suggesting that merely identifying and defining such a condition could cause it to spread, like a virus.

Undeterred, Furth found a surgeon in Asia who was willing to perform amputations for about $6,000. But instead of getting the surgery himself, he began acting as a go-between, putting sufferers in touch with the surgeon.

He also contacted Michael First, a clinical psychiatrist at Columbia University in New York. Intrigued, First embarked on a survey of 52 patients. What he found was illuminating. The patients all seemed to be obsessed by the thought of a body that was different in some way from the one they possessed. There seemed to be a mismatch between their internal sense of their own bodies and their physical bodies. First, who would later lobby to have BIID more widely recognised, became convinced that he was looking at a disorder of identity, of the sense of self.

“The name that was originally proposed, apotemnophilia, was clearly a problem,” he told me. “We wanted a word that was parallel to gender identity disorder. GID has built into the name a concept that there is a function called gender identity, which is your sense of being male or female, which has gone wrong. So, what would be a parallel notion? Body integrity identity disorder hypothesises that a normal function, which is your comfort in how your body fits together, has gone wrong.”

Link: Why an MRI costs $1,080 in the US & $280 in France

There is a simple reason health care in the United States costs more than it does anywhere else: The prices are higher.

That may sound obvious. But it is, in fact, key to understanding one of the most pressing problems facing our economy. In 2009, Americans spent $7,960 per person on health care. Our neighbors in Canada spent $4,808. The Germans spent $4,218. The French, $3,978. If we had the per-person costs of any of those countries, America’s deficits would vanish. Workers would have much more money in their pockets. Our economy would grow more quickly, as our exports would be more competitive.
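To get a rough sense of the scale implied by those per-person figures, here is a back-of-the-envelope sketch. The 2009 US population figure (roughly 307 million) is an added approximation, not something stated in the text, and the calculation ignores everything else that would change if prices fell.

```python
# Back-of-the-envelope arithmetic for the per-person spending figures quoted above.
# The 2009 US population (~307 million) is an approximation added for illustration.

US_POPULATION_2009 = 307e6  # approximate

per_person_spending = {
    "United States": 7960,
    "Canada": 4808,
    "Germany": 4218,
    "France": 3978,
}

us_cost = per_person_spending["United States"]
for country, cost in per_person_spending.items():
    if country == "United States":
        continue
    implied_savings = (us_cost - cost) * US_POPULATION_2009
    print(f"At {country}'s per-person cost, the US would spend roughly "
          f"${implied_savings / 1e12:.2f} trillion less per year")
```

Even at Canadian per-person costs, the implied gap is close to a trillion dollars a year, which is why the piece ties prices directly to deficits and wages.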

There are many possible explanations for why Americans pay so much more. It could be that we’re sicker. Or that we go to the doctor more frequently. But health researchers have largely discarded these theories. As Gerard Anderson, Uwe Reinhardt, Peter Hussey and Varduhi Petrosyan put it in the title of their influential 2003 study on international health-care costs, “it’s the prices, stupid.”

As it’s difficult to get good data on prices, that paper pinned the blame on prices largely by eliminating the other possible culprits. The authors considered, for instance, the idea that Americans were simply using more health-care services, but on closer inspection found that Americans don’t see the doctor more often or stay longer in the hospital than residents of other countries. Quite the opposite, actually. We spend less time in the hospital than Germans and see the doctor less often than the Canadians.

“The United States spends more on health care than any of the other OECD countries spend, without providing more services than the other countries do,” they concluded. “This suggests that the difference in spending is mostly attributable to higher prices of goods and services.”

On Friday, the International Federation of Health Plans — a global insurance trade association that includes more than 100 insurers in 25 countries — released more direct evidence. It surveyed its members on the prices paid for 23 medical services and products in different countries, asking after everything from a routine doctor’s visit to a dose of Lipitor to coronary bypass surgery. And in 22 of 23 cases, Americans are paying higher prices than residents of other developed countries. Usually, we’re paying quite a bit more. The exception is cataract surgery, which appears to be costlier in Switzerland, though cheaper everywhere else.

Prices don’t explain all of the difference between America and other countries. But they do explain a big chunk of it. The question, of course, is why Americans pay such high prices — and why we haven’t done anything about it.

“Other countries negotiate very aggressively with the providers and set rates that are much lower than we do,” Anderson says. They do this in one of two ways. In countries such as Canada and Britain, prices are set by the government. In others, such as Germany and Japan, they’re set by providers and insurers sitting in a room and coming to an agreement, with the government stepping in to set prices if they fail.

Health care is an unusual product in that it is difficult, and sometimes impossible, for the customer to say “no.” In certain cases, the customer is passed out, or otherwise incapable of making decisions about her care, and the decisions are made by providers whose mandate is, correctly, to save lives rather than money.

In other cases, there is more time for loved ones to consider costs, but little emotional space to do so — no one wants to think there was something more they could have done to save their parent or child. It is not like buying a television, where you can easily comparison shop and walk out of the store, and even forgo the purchase if it’s too expensive. And imagine what you would pay for a television if the salesmen at Best Buy knew that you couldn’t leave without making a purchase.

In America, Medicare and Medicaid negotiate prices on behalf of their tens of millions of members and, not coincidentally, purchase care at a substantial markdown from the commercial average. But outside that, it’s a free-for-all. Providers largely charge what they can get away with, often offering different prices to different insurers, and an even higher price to the uninsured.

“In my view, health is a business in the United States in quite a different way than it is elsewhere,” says Tom Sackville, who served in Margaret Thatcher’s government and now directs the IFHP. “It’s very much something people make money out of. There isn’t too much embarrassment about that compared to Europe and elsewhere.”

The result is that, unlike in other countries, sellers of health-care services in America have considerable power to set prices, and so they set them quite high. Two of the five most profitable industries in the United States — the pharmaceuticals industry and the medical device industry — sell health care. With margins of almost 20 percent, they beat out even the financial sector for sheer profitability.

Link: Scott and Scurvy

How the cure for scurvy, discovered in 1747, had been forgotten by the time of Scott’s expedition to the Antarctic in 1911.

Recently I have been re-reading one of my favorite books, The Worst Journey in the World, an account of Robert Falcon Scott’s 1911 expedition to the South Pole. I can’t do the book justice in a summary, other than recommend that you drop everything and read it, but there is one detail that particularly baffled me the first time through, and that I resolved to understand better once I could stand to put the book down long enough.

Writing about the first winter the men spent on the ice, Cherry-Garrard casually mentions an astonishing lecture on scurvy by one of the expedition’s doctors:

Atkinson inclined to Almroth Wright’s theory that scurvy is due to an acid intoxication of the blood caused by bacteria…
There was little scurvy in Nelson’s days; but the reason is not clear, since, according to modern research, lime-juice only helps to prevent it. We had, at Cape Evans, a salt of sodium to be used to alkalize the blood as an experiment, if necessity arose. Darkness, cold, and hard work are in Atkinson’s opinion important causes of scurvy.

Now, I had been taught in school that scurvy had been conquered in 1747, when the Scottish physician James Lind proved in one of the first controlled medical experiments that citrus fruits were an effective cure for the disease. From that point on, we were told, the Royal Navy had required a daily dose of lime juice to be mixed in with sailors’ grog, and scurvy ceased to be a problem on long ocean voyages.

But here was a Royal Navy surgeon in 1911 apparently ignorant of what caused the disease, or how to cure it. Somehow a highly-trained group of scientists at the start of the 20th century knew less about scurvy than the average sea captain in Napoleonic times. Scott left a base abundantly stocked with fresh meat, fruits, apples, and lime juice, and headed out on the ice for five months with no protection against scurvy, all the while confident he was not at risk. What happened?

By all accounts scurvy is a horrible disease. Scott, who has reason to know, gives a succinct description:

The symptoms of scurvy do not necessarily occur in a regular order, but generally the first sign is an inflamed, swollen condition of the gums. The whitish pink tinge next the teeth is replaced by an angry red; as the disease gains ground the gums become more spongy and turn to a purplish colour, the teeth become loose and the gums sore. Spots appear on the legs, and pain is felt in old wounds and bruises; later, from a slight oedema, the legs, and then the arms, swell to a great size and become blackened behind the joints. After this the patient is soon incapacitated, and the last horrible stages of the disease set in, from which death is a merciful release.

One of the most striking features of the disease is the disproportion between its severity and the simplicity of the cure. Today we know that scurvy is due solely to a deficiency in vitamin C, a compound essential to metabolism that the human body must obtain from food. Scurvy is rapidly and completely cured by restoring vitamin C into the diet.

Except for the nature of vitamin C, eighteenth-century physicians knew this too. But in the second half of the nineteenth century, the cure for scurvy was lost. The story of how this happened is a striking demonstration of the problem of induction, and of how progress in one field of study can lead to unintended steps backward in another.

An unfortunate series of accidents conspired with advances in technology to discredit the cure for scurvy. What had been a simple dietary deficiency became a subtle and unpredictable disease that could strike without warning. Over the course of fifty years, scurvy would return to torment not just Polar explorers, but thousands of infants born into wealthy European and American homes.

Link: How Doctors Die

It’s not a frequent topic of discussion, but doctors die, too. And they don’t die like the rest of us. What’s unusual about them is not how much treatment they get compared to most Americans, but how little. For all the time they spend fending off the deaths of others, they tend to be fairly serene when faced with death themselves. They know exactly what is going to happen, they know the choices, and they generally have access to any sort of medical care they could want. But they go gently.

Of course, doctors don’t want to die; they want to live. But they know enough about modern medicine to know its limits. And they know enough about death to know what all people fear most: dying in pain, and dying alone. They’ve talked about this with their families. They want to be sure, when the time comes, that no heroic measures will happen–that they will never experience, during their last moments on earth, someone breaking their ribs in an attempt to resuscitate them with CPR (that’s what happens if CPR is done right).

Almost all medical professionals have seen what we call “futile care” being performed on people. That’s when doctors bring the cutting edge of technology to bear on a grievously ill person near the end of life. The patient will get cut open, perforated with tubes, hooked up to machines, and assaulted with drugs. All of this occurs in the Intensive Care Unit at a cost of tens of thousands of dollars a day. What it buys is misery we would not inflict on a terrorist. I cannot count the number of times fellow physicians have told me, in words that vary only slightly, “Promise me if you find me like this that you’ll kill me.” They mean it. Some medical personnel wear medallions stamped “NO CODE” to tell physicians not to perform CPR on them. I have even seen it as a tattoo.

To administer medical care that makes people suffer is anguishing. Physicians are trained to gather information without revealing any of their own feelings, but in private, among fellow doctors, they’ll vent. “How can anyone do that to their family members?” they’ll ask. I suspect it’s one reason physicians have higher rates of alcohol abuse and depression than professionals in most other fields. I know it’s one reason I stopped participating in hospital care for the last 10 years of my practice.

How has it come to this–that doctors administer so much care that they wouldn’t want for themselves? The simple, or not-so-simple, answer is this: patients, doctors, and the system.

Awakening

Since its introduction in 1846, anesthesia has allowed for medical miracles. Limbs can be removed, tumors examined, organs replaced—and a patient will feel and remember nothing. Or so we choose to believe. In reality, tens of thousands of patients each year in the United States alone wake up at some point during surgery. Since their eyes are taped shut and their bodies are usually paralyzed, they cannot alert anyone to their condition. In efforts to eradicate this phenomenon, medicine has been forced to confront how little we really know about anesthesia’s effects on the brain. The doctor who may be closest to a solution may also answer a question that has confounded centuries’ worth of scientists and philosophers: What does it mean to be conscious?

… This experience is called “intraoperative recall” or “anesthesia awareness,” and it’s more common than you might think. Although studies diverge, most experts estimate that for every 1,000 patients who undergo general anesthesia each year in the United States, one to two will experience awareness. Patients who awake hear surgeons’ small talk, the swish and stretch of organs, the suctioning of blood; they feel the probing of fingers, the yanks and tugs on innards; they smell cauterized flesh and singed hair. But because one of the first steps of surgery is to tape patients’ eyes shut, they can’t see. And because another common step is to paralyze patients to prevent muscle twitching, they have no way to alert doctors that they are awake.
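To see how the per-1,000 rate connects to the “tens of thousands” figure above, here is a rough arithmetic sketch. The number of general-anesthesia procedures performed annually in the US (about 20 million) is an assumed round figure used only for illustration; the text itself gives only the rate.

```python
# Rough arithmetic connecting the 1-2 per 1,000 rate quoted above to the
# "tens of thousands" of patients per year mentioned earlier. The count of
# annual US general-anesthesia procedures is an assumed round number.

ANNUAL_GENERAL_ANESTHESIA_CASES = 20_000_000  # assumption for illustration only
rate_low, rate_high = 1 / 1000, 2 / 1000      # one to two patients per 1,000

low = ANNUAL_GENERAL_ANESTHESIA_CASES * rate_low
high = ANNUAL_GENERAL_ANESTHESIA_CASES * rate_high
print(f"Roughly {low:,.0f} to {high:,.0f} patients per year may experience awareness")
```

On that assumption the estimate works out to roughly 20,000 to 40,000 people a year, which is consistent with the order of magnitude the article cites.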

Many of these cases are benign: vague, hazy flashbacks. But up to 70 percent of patients who experience awareness suffer long-term psychological distress, including PTSD—a rate five times higher than that of soldiers returning from Iraq and Afghanistan. Campbell now understands that this is what happened to her, although she didn’t believe it at first. “The whole idea of anesthesia awareness seemed over-the-top,” she told me. “It took years to begin to say, ‘I think this is what happened to me.’ ” She describes her memories of the surgery like those from a car accident: the moments before and after are clear, but the actual event is a shadowy blur of emotion. She searched online for people with similar experiences, found a coalition of victims, and eventually traveled up the East Coast to speak with some of them. They all shared a constellation of symptoms: nightmares, fear of confinement, the inability to lie flat (many sleep in chairs), and a sense of having died and returned to life. Campbell (whose name and certain other identifying details have been changed) struggles especially with the knowledge that there is no way for her to prove that she woke up, and that many, if not most, people might not believe her. “Anesthesia awareness is an intrapersonal event,” she says. “No one else sees it. No one else knows it. You’re the only one.”

Sizemore complained of being unable to breathe and claimed that people were trying to bury him alive. He suffered from insomnia; when he could sleep, he had vivid nightmares.

In most cases of awareness, patients are awake but still dulled to pain. But that was not the case for Sherman Sizemore Jr., a Baptist minister and former coal miner who was 73 when he underwent an exploratory laparotomy in early 2006 to pinpoint the cause of recurring abdominal pain. In this type of procedure, surgeons methodically explore a patient’s viscera for evidence of abnormalities. Although there are no official accounts of Sizemore’s experience, his family maintained in a lawsuit that he was awake—and feeling pain—throughout the surgery. (The suit was settled in 2008.) He reportedly emerged from the operation behaving strangely. He was afraid to be left alone. He complained of being unable to breathe and claimed that people were trying to bury him alive. He refused to be around his grandchildren. He suffered from insomnia; when he could sleep, he had vivid nightmares.

The lawsuit claimed that Sizemore was tormented by doubt, wondering whether he had imagined the horrific pain. No one advised Sizemore to seek psychiatric help, his family alleged, and no one mentioned the fact that many patients who experience awareness suffer from PTSD. On February 2, 2006, two weeks after his surgery, Sizemore shot himself. He had no history of psychiatric illness.