Sunshine Recorder

Link: Technology and Consumership

Today’s media, combined with the latest portable devices, have pushed serious public discourse into the background and hauled triviality to the fore, according to media theorist Arthur W Hunt. And the Jeffersonian notion of citizenship has given way to modern consumership.

Almantas Samalavicius: In your recently published book Surviving Technopolis, you discuss a number of important and overlapping issues that threaten the future of societies. One of the central themes you explore is the rise, dominance and consequences of visual imagery in public discourse, which you say undermines a more literate culture of the past. This tendency has been outlined and questioned by a large and growing number of social thinkers (Marshall McLuhan, Walter Ong, Jacques Ellul, Ivan Illich, Neil Postman and others). What do you see as most culturally threatening in this shift to visual imagery?

Arthur W. Hunt III: The shift is technological and moral. The two are related, as Ellul has pointed out. Computer-based digital images stem from an evolution of other technologies beginning with telegraphy and photography, both appearing in the middle of the nineteenth century. Telegraphy trivialized information by allowing it to come to us from anywhere and in greater volumes. Photography de-contextualized information by giving us an abundance of pictures disassociated from the objects from which they came. Cinema magnified Aristotle’s notion of spectacle, which, in the Poetics, he claimed to be the least artistic element of drama. Spectacle in modern film tends to diminish all other elements of drama (plot, character, dialogue and so on) in favour of the exploding Capitol building. Radio put the voice of both the President and the Lone Ranger into our living rooms. Television was the natural and powerful usurper of radio and quickly became the nucleus of the home, a station occupied by the hearth for thousands of years. Then the television split in two, three or four ways so that every house member had a set in his or her bedroom. What followed was the personal computer, both at home and at work. Today we have portable computers on which we watch shows, play games, email each other and gaze at ourselves like we used to look at Hollywood stars. To a large extent, these technologies are simply extensions of our technological society. They act as Sirens of distraction. They push serious public discourse into the background and pull triviality to the foreground. They move us away from the Jeffersonian notion of citizenship, replacing it with modern capitalism’s ethic of materialistic desire or “consumership”. The great danger of all this, of course, is that we neglect the polis and, instead, waste our time with bread and circuses. Accompanying this neglect is the creation of people who spend years in school yet remain illiterate, at least by the standards we used to hold out for a literate person. The trivialization spreads out into other institutions, as Postman has argued, to schools, churches and politics. This may be an American phenomenon, but many countries look to America’s institutions for guidance.

AS: Philosopher and historian Ivan Illich – one of the most radical critics of modernity and its mythology – has emphasized the conceptual difference between tools, on one hand, and technology on the other, implying that the dominance and overuse of technology is socially and culturally debilitating. Economist E.F. Schumacher urged us to rediscover the beauty of smallness and the use of more humane, “intermediate technologies”. However, a chorus of voices seems to sink in the ocean of popular technological optimism and a stubborn self-generating belief in the power of progress. Your critique contains no call to go back to the Middle Ages. Nor do you suggest that we give anything away to technological advances. Rather, you offer a sound and balanced argument about the misuses of technology and the mindscape that sacrifices tradition and human relationships on the altar of progress. Do you see any possibility of developing a more balanced approach to the role of technology in our culture? Obviously, many are aware, even if cynically, that technological progress has its downsides, but what of its upsides?

AWH: Short of a nuclear holocaust, we will not be going back to the Middle Ages any time soon. Electricity and automobiles are here to stay. The idea is not to be anti-technology. Neil Postman once said that to be anti-technology is like being anti-food. Technologies are extensions of our bodies, and therefore scale, ecological impact and human flourishing become the yardstick for technological wisdom. The conventional wisdom of modern progress favours bigger, faster, newer and more. Large corporations see their purpose on earth as maximizing profits. Their goal is to get us addicted to their addictions. We can no longer afford this kind of wisdom, which is not wisdom at all, but foolishness. We need to bolster a conversation about the human benefits of smaller, slower, older and less. Europeans often understand this better than Americans; that is, they are more conscious of preserving living spaces that are functional, aesthetically pleasing and that foster human interaction. E.F. Schumacher gave us some useful phraseology to promote an economy of human scale: “small is beautiful,” “technologies with a human face” and “homecomers.” He pointed out that “labour-saving machinery” is a paradoxical term, not only because it makes us unemployed, but also because it diminishes the value of work. Our goal should be to move toward a “third-way” economic model, one of self-sufficient regions, local economies of scale, thriving community life, cooperatives, family-owned farms and shops, economic integration between the countryside and the nearby city, and a general revival of craftsmanship. Green technologies – solar and wind power, for example – can actually help us achieve this third way, which is a kind of micro-capitalism.

AS: Technologies developed by humans (e.g. television) continue to shape and sustain a culture of consumerism, which has now become a global phenomenon. As you insightfully observe in one of your essays, McLuhan – often misinterpreted and misunderstood as a social theorist, and hailed by the very television media he explored in great depth – was fully aware of its ill effects on the human personality and therefore limited his children’s TV viewing. Jerry Mander has argued for the elimination of television altogether; nevertheless, this medium is alive and kicking and continues to promote an ideology of consumption and, what is perhaps most alarming, to successfully condition children to become voracious consumers in a society where the roles of parents become more and more institutionally limited. Do you have any hopes for this situation? Can one expect that people will develop a more critical attitude toward these instruments, which shape them as consumers? Does social criticism of these trends play any role in an environment where the media and the virtual worlds of the entertainment industry have become so powerful?

AWH: Modern habits of consumption have created what Benjamin Barber calls an “ethos of infantilization”, where children are psychologically manipulated into early adulthood and adults are conditioned to remain in a perpetual state of adolescence. Postman suggested essentially the same thing when he wrote The Disappearance of Childhood. There have been many books written that address the problems of electronic media in stunting a child’s mental, physical and spiritual development. One of the better recent ones is Richard Louv’s Last Child in the Woods. Another one is Anthony Esolen’s Ten Ways to Destroy the Imagination of Your Child. We have plenty of books, but we don’t have enough people reading them or putting them into practice. Raising a child today is a daunting business, and maybe this is why more people are refusing to do it. No wonder Joel Bakan, a law professor at the University of British Columbia, wrote a New York Times op-ed complaining, “There is reason to believe that childhood itself is now in crisis.” The other day I was watching the American television program 60 Minutes. The reporter was interviewing the Australian actress Cate Blanchett. I almost fell out of my chair when she starkly told the reporter, “We don’t outsource our children.” What she meant was that she does not let someone else raise her children. I think she was on to something. In most families today, both parents work outside the home. This is a fairly recent development if you consider the entire span of human history. Industrialism brought an end to the family as an economic unit. First, the father went off to work in the factory. Then, the mother entered the workforce during the last century. Well, the children could not stay home alone, so they were outsourced to various surrogate institutions. What was once provided by the home economy (oikos) – education, health care, child rearing and care of the elderly – came to be provided by the state. The rest of our needs – food, clothing, shelter and entertainment – came to be provided by the corporations. A third-way economic ordering would seek to revive the old notion of oikos so that the home can once again be a legitimate economic, educational and care-providing unit – not just a place to watch TV and sleep. In other words, the home would once again become a centre for production, not just consumption. If this ever happened, one or both parents would be at home and little Johnny and sister Jane would work and play alongside their parents.

AS: I was intrigued by your insight into the forms of totalitarianism depicted by George Orwell and Aldous Huxley. Though most authors who discussed totalitarianism during the last half of the century were preoccupied with the Orwellian vision and praised it as most enlightening, the alternative Huxleyan vision of a self-inflicted, joyful and entertaining totalitarian society was far less scrutinized. Do you think we are entering into a culture where “totalitarianism with a happy face”, as you call it, prevails? If so, what consequences do you foresee?

AWH: It is interesting to note that Orwell thought Huxley’s Brave New World was implausible because he maintained that hedonistic societies do not last long, and that they are too boring. However, both authors were addressing what many other intellectuals were debating during the 1930s: what would be the social implications of Darwin and Freud? What ideology would eclipse Christianity? Would the new social sciences be embraced with as much exuberance as the hard sciences? What would happen if managerial science were infused into all aspects of life? What should we make of wartime propaganda? What would be the long-term effects of modern advertising? What would happen to the traditional family? How could class divisions be resolved? How would new technologies shape the future?

I happen to believe there are actually more similarities between Orwell’s 1984 and Huxley’s Brave New World than there are differences. Both novels have as their backstory the dilemma of living with weapons of mass destruction. The novel 1984 imagines what would happen if Hitler succeeded. In Brave New World, the world is at a crossroads. What is it to be, the annihilation of the human race or world peace through sociological control? In the end, the world chooses a highly efficient authoritarian state, which keeps the masses pacified by maintaining a culture of consumption and pleasure. In both novels, the past is wiped away from public memory. In Orwell’s novel, whoever “controls the past controls the future.” In Huxley’s novel, the past has been declared barbaric. All books published before A.F. 150 (that is, 150 years after 1908 CE, the year the first Model T rolled off the assembly line) are suppressed. Mustapha Mond, the Resident Controller in Brave New World, declares the wisdom of Ford: “History is bunk.” In both novels, the traditional family has been radically altered. Orwell draws from the Hitler Youth and the Soviet Young Pioneers to give us a society where the child’s loyalty to the state far outweighs any loyalty to parents. Huxley gives us a novel where the biological family does not even exist. Any familial affection is looked down upon. Everybody belongs to everybody, sexually and otherwise. Both novels give us worlds where rational thought is suppressed so that “war is peace”, “freedom is slavery” and “ignorance is strength” (1984). In Brave New World, when Lenina is challenged by Marx to think for herself, all she can say is “I don’t understand.” The heroes in both novels are malcontents who want to escape this irrationality but end up excluded from society as misfits. Both novels perceive humans as religious beings; the state recognizes this truth but channels these inclinations toward patriotic devotion. In 1984, Big Brother is worshipped. In Brave New World, the Christian cross has been cut off at the top to form the letter “T” for Technology. When engaged in the Orgy-Porgy, everyone in the room chants, “Ford, Ford, Ford.” In both novels an elite ruling class controls the populace by means of sophisticated technologies. Both novels show us surveillance states where the people are constantly monitored. Sound familiar? Certainly, as Postman tells us in his foreword to Amusing Ourselves to Death, Huxley’s vision eerily captures our culture of consumption. But how long would it take for a society to move from a happy-faced totalitarianism to one that has a mask of tragedy?

AS: Your comments on the necessity of a third way in our societies, subjected to and affected by economic globalization, seem to resonate with the ideas of many social thinkers I have interviewed for this series. Many outstanding social critics and thinkers seem to agree that the notions of communism and capitalism have become stale and meaningless; further development of these paradigms leads us nowhere. One of your essays focuses on the old concept of the “shire” and household economics. Do you believe in what Mumford called “the useful past”? And do you expect the growing movement that might be referred to as “new economics” to enter the mainstream of our economic thinking, eventually leading to changes in our social habits?

AWH: If the third-way economic model ever took hold, I suppose it could happen in several ways. We will start with the most desirable way, and then move to the less desirable. The most peaceful way for this to happen is for people to come to some kind of realization that the global economy is not benefiting them and to start desiring something else. People will see that their personal wages have been stagnant for too long, that they are working too hard with nothing to show for it, that something has to be done about the black hole of debt, and that they feel like pawns in an incomprehensible game of chess. Politicians will hear their cries and institute policies that allow local economies, communities and families to flourish. This scenario is less likely to happen, because the multinationals that help fund the campaigns of politicians will not allow it. I am primarily thinking of the American reality in my claim here. Unless corporations have a change of mind, something akin to a religious conversion, we will not see them open their hearts and give away their power.

A more likely scenario is that a grassroots movement led by creative innovators begins to experiment with new forms of community that serve to repair the moral and aesthetic imagination distorted by modern society. Philosopher Alasdair MacIntyre calls this the “Benedict Option” in his book After Virtue. Morris Berman’s The Twilight of American Culture essentially calls for the same solution. Inspired by the monasteries that preserved western culture in Europe during the Dark Ages, these communities would serve as models for others who are dissatisfied with the broken dreams associated with modern life. These would not be utopian communities, but humble efforts of trial and error, and hopefully diverse according to the outlook of those who live in them. The last scenario would be to have some great crisis occur – political, economic, or natural in origin – that would thrust upon us the necessity of reordering our institutions. My father, who is in his nineties, often reminisces to me about the Great Depression. Although it was a miserable time, he speaks of it as the happiest time in his life. His best stories are about neighbours who loved and cared for each other, garden plots and favourite fishing holes. For any third way to work, a memory of the past will become very useful even if it sounds like literature. From a practical point of view, however, the kinds of knowledge that we will have to remember will include how to build a solid house, how to plant a vegetable garden, how to butcher a hog and how to craft a piece of furniture. In rural Tennessee where I live, there are people still around who know how to do these things, but they are a dying breed.

AS: The long (almost half-century) period of the Cold War has resulted in many social effects. The horrors of Communist regimes and the futility of state-planned economics, as well as the treason of western intellectuals who remained blind to the practice of Communist powers and espoused ideas of an idealized Communism, have aided the ideology of capitalism and consumerism. Capitalism came to be associated with ideas of freedom, free enterprise, freedom to choose and so on. How is this legacy burdening us in the current climate of economic globalization? Do you think recent crises and new social movements have the potential to shape a more critical view (and revision) of capitalism, especially in its ugliest, neo-liberal shape?

AWH: Here in America liberals want to hold on to their utopian visions of progress amidst the growing evidence that global capitalism is not delivering on its promises. Conservatives are very reluctant to criticize the downsides of capitalism, yet their own visions of progress are not really that different from those of liberals. It was amusing to hear the American politician Sarah Palin describe Pope Francis’ recent declarations against the “globalization of indifference” as being “a little liberal.” The Pope is liberal? While Democrats look to big government to save them, Republicans look to big business. Don’t they realize that with modern capitalism, big government and big business are joined at the hip? The British historian Hilaire Belloc recognized this over a century ago, when he wrote about the “servile state,” a condition where an unfree majority of non-owners work for the pleasure of a free minority of owners. But getting to your question, I do think more people are beginning to wake up to the problems associated with modern consumerist capitalism. A good example of this is a recent critique of capitalism written by Daniel M. Bell, Jr. entitled The Economy of Desire: Christianity and Capitalism in a Postmodern World. Here is a religious conservative who is saying the great tempter of our age is none other than Walmart. The absurdist philosopher and Nobel Prize winner Albert Camus once said the real passion of the twentieth century was not freedom, but servitude. Jacques Ellul, Camus’s contemporary, would have agreed with that assessment. Both believed that the United States and the Soviet Union, despite their Cold War differences, had one thing in common – the two powers had surrendered to the sovereignty of technology. Camus’s absurdism took a hard turn toward nihilism, while Ellul turned out to be a kind of cultural Jeremiah. It is interesting to me that when I talk to some people about third-way ideas, which are actually an old way of thinking about economy, they tell me it can’t be done, that we are now beyond all that, and that our economic trajectory is unstoppable or inevitable. This retort, I think, reveals how little freedom our system possesses. So, I can’t have a family farm? My small business can’t compete with the big guys? My wife has to work outside the home and I have to outsource the raising of my children? Who would have thought capitalism would lack this much freedom?

AS: And finally are you an optimist? Jacques Ellul seems to have been very pessimistic about us escaping from the iron cage of technological society. Do you think we can still break free?

AWH: I am both optimistic and pessimistic. In America, our rural areas are becoming increasingly depopulated. I see this as an opportunity for resettling the land – those large swaths of fields and forests that encompass about three quarters of our landmass. That is a very nice drawing board if we can figure out how to get back to it. I am also optimistic about the fact that more people are waking up to our troubling times. Other American writers that I would classify as third way proponents include Wendell Berry, Kirkpatrick Sale, Rod Dreher, Mark T. Mitchell, Bill Kauffman, Joseph Pearce and Allan Carlson. There is also a current within the American and British literary tradition, which has served as a critique of modernity. G.K. Chesterton, J.R.R. Tolkien, Dorothy Day and Allen Tate represent this sensibility, which is really a Catholic sensibility, although one does not have to be Catholic to have it. I am amazed at the popularity of novels about Amish people among American evangelical women. Even my wife reads them, and we are Presbyterians! In this country, the local food movement, the homeschool movement and the simplicity movement all seem to be pointing toward a kind of breaking away. You do not have to be Amish to break away from the cage of technological society; you only have to be deliberate and courageous. If we ever break out of the cage in the West, there will be two types of people who will lead such a movement. The first are religious people, both Catholic and Protestant, who will want to create a counter-environment for themselves and their children. The second are the old-school humanists, people who have a sense of history, an appreciation of the cultural achievements of the past, and the ability to see what is coming down the road. If Christians and humanists do nothing, and let modernity roll over them, I am afraid we face what C.S. Lewis called “the abolition of man”. Lewis believed our greatest danger was to have a technological elite – what he called The Conditioners – exert power over the vast majority so that our humanity is squeezed out of us. Of course all of this would be done in the name of progress, and most of us would willingly comply. The Conditioners are not acting on behalf of the public good or any other such ideal, rather what they want are guns, gold, and girls – power, profits and pleasure. The tragedy of all this, as Lewis pointed out, is that if they destroy us, they will destroy themselves, and in the end Nature will have the last laugh.

Link: Death Stares

By Facebook’s 10th anniversary in February 2014, the site claimed well over a billion active users. Embedded among those active accounts, however, are the profiles of the dead: nearly anyone with a Facebook account catches glimpses of digital ghosts, as dead friends’ accounts flicker past in the News Feed. As users of social media age, it is inevitable that interacting with the dead will become part of our everyday mediated encounters. Some estimates claim that 30 million Facebook profiles belong to dead users, at times making it hard to distinguish between the living and the dead online. While some profiles have been “memorialized,” meaning that they are essentially frozen in time and only searchable to Facebook friends, other accounts continue on as before.

In an infamous Canadian case, a young woman’s obituary photograph later appeared in a dating website’s advertising on Facebook. Her parents were rightly horrified by this breach of privacy, particularly because her suicide was prompted by cyberbullying following a gang rape. But digital images, once we put them out into the world on social networking platforms (or just on iPhones, as recent findings about the NSA make clear), are open to circulation, reproduction, and alteration. Digital images’ meanings can change just as easily as Snapchat photographs appear and fade. This seems less objectionable when the images being shared are of yesterday’s craft cocktail, but having images of funerals and corpses escape our control seems unpalatable.

While images of death and destruction routinely bombard us on 24-hour cable news networks, images of death may make us uncomfortable when they emerge from the private sphere, or are generated for semi-public viewing on social networking websites. As I check my Twitter feed while writing this essay, a gruesome image of a 3-year-old Palestinian girl murdered by Israeli troops has well over a thousand retweets, indicating that squeamishness about death does not extend to international news events. By contrast, when a mother of four posted photographs of her body post cancer treatments, mastectomy scars fully visible, she purportedly lost over one hundred of her Facebook friends who were put off by this display. To place carefully chosen images and text on a Facebook memorial page is one thing, but to post photographs of a deceased friend in her coffin or on her deathbed is quite another. For social media users accustomed to seeing stylized profiles, images of decay cut through the illusion of curation.

In a 2009 letter to the British Medical Journal, a doctor commented on a man using a mobile phone to photograph a newly dead family member, pointing out with apparent distaste that Victorian postmortem portraits “were not candid shots of an unprepared still warm body.” He wonders, “Is the comparatively covert and instant nature of the mobile phone camera allowing people to respond to stress in a way that comforts them, but society may deem unacceptable and morbid?” While the horrified doctor saw a major discrepancy between Victorian postmortem photographs and the one his patient’s family member took, Victorian images were not always pristine. Signs of decay, illness, or struggle are visible in many of the photographs. Sickness or the act of dying, too, was depicted in these photos, not unlike the practices of deathbed tweeting and illness blogging. Even famous writers and artists were photographed on their deathbeds.

Photography has always been connected to death, both in theory and practice. For Roland Barthes, the photograph is That-has-been. To take a photo of oneself, to pose and press a button, is to declare one’s thereness while simultaneously hinting at one’s eventual death. The photograph is always “literally an emanation of the referent” and a process of mortification, of turning a subject into an object — a dead thing. Susan Sontag claimed that all photographs are memento mori, while Eduardo Cadava said that all photographs are farewells.

The perceived creepiness of postmortem photography has to do with the uncanniness of ambiguity: Is the photographed subject alive or dead? Painted eyes and artificially rosy cheeks, lifelike positions, and other additions made postmortem subjects seem more asleep than dead. Because of its ability to materialize and capture, photography both mortifies and reanimates its subjects. Not just photography, but other container technologies like phonographs and inscription tools can induce the same effects. Digital technology is another incarnation of these processes, as social networking profiles, email accounts, and blogs become new means of concretizing and preserving affective bonds. Online profiles and digital photographs share with postmortem photographs this uncanny quality of blurring the boundaries between life and death, animate and inanimate, or permanence and ephemerality.

Sharing postmortem photos or mourning selfies on social media platforms may seem creepy, but death photos were not always politely confined to such depersonalized sources as mass media. Postmortem and mourning photography were once accepted or even expected forms of bereavement, not callously dismissed as TMI. Victorians circulated images of dead loved ones on cabinet cards or cartes de visite, even if they could not reach as wide a public audience as those who now post on Instagram and Twitter. Photography historian Geoffrey Batchen notes of postmortem and mourning images that, “displayed in parlors or living rooms or as part of everyday attire, these objects occupied a liminal space between public and private. They were, in other words, meant to do their work over and over again, and to be seen by both intimates and strangers.”

Victorian postmortem photography captured dead bodies in a variety of positions, including sleeping, sitting in a chair, lying in a coffin, or even standing with loved ones. Thousands of postmortem and mourning images from the 19th and early 20th centuries persist in archives and private collections, some of them bearing a striking resemblance to present-day images. The Thanatos Archive in Woodinville, Washington, contains thousands of mourning and postmortem images from the 19th century. In one Civil War-era mourning photograph, a beautiful young woman in white looks at the camera, not dissimilar to the images of the coiffed young women on Selfies at Funerals. In another image, a young woman in black holds a handkerchief to her face, an almost exaggerated gesture of mourning recalled by the comically excessive pouting found in many funeral selfies. In an earlier daguerreotype, a young woman in black holds two portraits of presumably deceased men.

Batchen describes Victorian mourners as people who “wanted to be remembered as remembering.” Many posed while holding photographs of dead loved ones or standing next to their coffins. Photographs from the 19th century feature women dressed in ornate mourning clothes, staring solemnly at photographs of dead loved ones. The photograph and braided ornaments made from hair of the deceased acted as metonymic devices, connecting the mourner in a physical way to the absent loved one, while ornate mourning wear, ritual, and the addition of paint or collage elements to mourning photographs left material traces of loss and remembrance.

Because photographs were time-consuming and expensive to produce in the Victorian era, middle-class families reserved portraits for special events. With the high rate of childhood mortality, families often had only one chance to photograph their children: as memento mori. Childhood mortality rates in the United States, while still higher than those of many other industrialized nations, are now significantly lower, meaning that images of dead children are startling. For those who do lose children today, however, the service Now I Lay Me Down to Sleep produces postmortem and deathbed photographs of terminally ill children.

Memorial photography is no mere morbid remnant of a Victorian past. Through his ethnographic fieldwork in rural Pennsylvania, anthropologist Jay Ruby uncovered a surprising number of postmortem photography practices in the contemporary U.S. Because of the stigma associated with postmortem photography, however, most of his informants expressed their desire to keep such photographs private or even secret. Even if these practices continue, they have moved underground. Unlike the arduous photographic process of the 19th century, which could require living subjects to sit disciplined by metal rods to keep them from blurring in the finished image, smartphones and digital photography allow images to be taken quickly or even surreptitiously. Rather than calling on a professional photographer’s cumbersome equipment, grieving family members can use their own devices to secure the shadows of dead loved ones. While wearing jewelry made of human hair is less acceptable now (though people do make their loved ones into cremation diamonds), we may instead use digital avenues to leave material traces of mourning.

Why did these practices disappear from public view? In the 19th century, mourning and death were part of everyday life but by the middle of the 20th century, outward signs of grief were considered pathological and most middle-class Americans shied away from earlier practices, as numerous funeral industry experts and theorists have argued. Once families washed and prepared their loved ones’ bodies for burial; now care of the dead has been outsourced to corporatized funeral homes.

This is partly a result of attempts to deal with the catastrophic losses of the First and Second World Wars, when proper bereavement included separating oneself from the dead. Influenced by Freudian psychoanalysis’s categorization of grief as pathological, psychologists from the 1920s through the 1950s associated prolonged grief with mental instability, advising mourners to “get over” loss. Also, with the advent of antibiotics and vaccines for once common childhood killers like polio, the visibility of death in everyday life lessened. The changing economy and beginnings of post-Fordism contributed to these changes as well, as care work and other forms of affective labor moved from the domicile to commercial enterprises. Jessica Mitford’s influential 1963 book, The American Way of Death, traces the movement of death care from homes to local funeral parlors to national franchises, showing how funeral directors take advantage of grieving families by selling exorbitant coffins and other death accoutrements. Secularization is also a contributing factor, as elaborate death rituals faded from public life. While death and grief reentered the public discourse in the 1960s and 1970s, the medicalization of death and growth of nursing homes and hospice centers meant that many individuals only saw dead people as prepared and embalmed corpses at wakes and open casket funerals.

Despite this, reports of a general “death taboo” have been greatly exaggerated. Memorial traces are actually everywhere, prompting American Studies scholar Erika Doss to dub this the age of “memorial mania.” Various national traumas have led to numerous memorials, both physical and online, including tactile examples like the AIDS memorial quilt, large physical structures like the 9/11 memorial, long-standing online entities like sites remembering Columbine, and more recent localized memorials dedicated to the dead on social networking websites.

But these types of memorials did not immediately normalize washing, burying, or photographing the body of a loved one. There’s a disconnect between the shiny and seemingly disembodied memorials on social media platforms and the presence of the corpse, particularly one that has not been embalmed or prepared.

Some recent movements in the mortuary world call for acknowledgement of the body’s decay rather than relying on disembodied forms of memorialization and remembrance. Rather than outsourcing embalmment to a funeral home, proponents of green funerals from such organizations as the Order of the Good Death and the Death Salon call for direct engagement with the dead body, learning to care for and even bury dead loved ones at home. The Order of the Good Death advises individuals to embrace death: “The Order is about making death a part of your life. That means committing to staring down your death fears — whether it be your own death, the death of those you love, the pain of dying, the afterlife (or lack thereof), grief, corpses, bodily decomposition, or all of the above. Accepting that death itself is natural, but the death anxiety and terror of modern culture are not.”

The practices having to do with “digital media” and death that some find unsettling — including placing QR codes on headstones, using social media websites as mourning platforms, snapping photos of dead relatives on smartphones, funeral selfies, and illness blogging or deathbed tweeting— may be seen as attempts to do just that, materializing death and mourning much like Victorian postmortem photography or mourning hair jewelry. Much has been made of the loss of indexicality with digital images, which replace this physical process of emanation with flattened information, but this development doesn’t obviate the relationship between photography and death. For those experiencing loss, the ability to materialize their mourning — even in digital forms — is comforting rather than macabre.

Link: Hell on Earth

At the University of Oxford, a team of scholars led by the philosopher Rebecca Roache has begun thinking about the ways futuristic technologies might transform punishment. In January, I spoke with Roache and her colleagues Anders Sandberg and Hannah Maslen about emotional enhancement, ‘supercrimes’, and the ethics of eternal damnation. What follows is a condensed and edited transcript of our conversation.

Suppose we develop the ability to radically expand the human lifespan, so that people are regularly living for more than 500 years. Would that allow judges to fit punishments to crimes more precisely?

Roache: When I began researching this topic, I was thinking a lot about Daniel Pelka, a four-year-old boy who was starved and beaten to death [in 2012] by his mother and stepfather here in the UK. I had wondered whether the best way to achieve justice in cases like that was to postpone death for as long as possible. Some crimes are so bad they require a really long period of punishment, and a lot of people seem to get out of that punishment by dying. And so I thought, why not make prison sentences for particularly odious criminals worse by extending their lives?

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

The life-extension scenario may sound futuristic, but if you look closely you can already see it in action, as people begin to live longer lives than before. If you look at the enormous prison population in the US, you find an astronomical number of elderly prisoners, including quite a few with pacemakers. When I went digging around in medical journals, I found all these interesting papers about the treatment of pacemaker patients in prison.

Suppose prisons become more humane in the future, so that they resemble Norwegian prisons instead of those you see in America or North Korea. Is it possible that correctional facilities could become truly correctional in the age of long lifespans, by taking a more sustained approach to rehabilitation?

Roache: If people could live for centuries or millennia, you would obviously have more time to reform them, but you would also run into a tricky philosophical issue having to do with personal identity. A lot of philosophers who have written about personal identity wonder whether identity can be sustained over an extremely long lifespan. Even if your body makes it to 1,000 years, the thinking goes, that body is actually inhabited by a succession of persons over time rather than a single continuous person. And so, if you put someone in prison for a crime they committed at 40, they might, strictly speaking, be an entirely different person at 940. And that means you are effectively punishing one person for a crime committed by someone else. Most of us would think that unjust.

Let’s say that life expansion therapies become a normal part of the human condition, so that it’s not just elites who have access to them, it’s everyone. At what point would it become unethical to withhold these therapies from prisoners?

Roache: In that situation it would probably be inappropriate to view them as an enhancement, or something extra. If these therapies were truly universal, it’s more likely that people would come to think of them as life-saving technologies. And if you withheld them from prisoners in that scenario, you would effectively be denying them medical treatment, and today we consider that inhumane. My personal suspicion is that once life extension becomes more or less universal, people will begin to see it as a positive right, like health care in most industrialised nations today. Indeed, it’s interesting to note that in the US, prisoners sometimes receive better health care than uninsured people. You have to wonder about the incentives a system like that creates.

Where is that threshold of universality, where access to something becomes a positive right? Do we have an empirical example of it?

Roache: One interesting case might be internet access. In Finland, for instance, access to communication technology is considered a human right and handwritten letters are not sufficient to satisfy it. Finnish prisons are required to give inmates access to computers, although their internet activity is closely monitored. This is an interesting development because, for years, limiting access to computers was a common condition of probation in hacking cases – and that meant all kinds of computers, including ATMs [cash points]. In the 1980s, that lifestyle might have been possible, and you might still have pulled it off in the ’90s, though it would have been very difficult. But today computers are ubiquitous, and a normal life seems impossible without them; you can’t even access the subway without interacting with a computer of some sort.

In the late 1990s, an American hacker named Kevin Mitnick was denied all access to communication technology after law enforcement officials [in California] claimed he could ‘start a nuclear war by whistling into a pay phone’. But in the end, he got the ruling overturned by arguing that it prevented him from living a normal life.

What about life expansion that meddles with a person’s perception of time? Take someone convicted of a heinous crime, like the torture and murder of a child. Would it be unethical to tinker with the brain so that this person experiences a 1,000-year jail sentence in his or her mind?

Roache: There are a number of psychoactive drugs that distort people’s sense of time, so you could imagine developing a pill or a liquid that made someone feel like they were serving a 1,000-year sentence. Of course, there is a widely held view that any amount of tinkering with a person’s brain is unacceptably invasive. But you might not need to interfere with the brain directly. There is a long history of using the prison environment itself to affect prisoners’ subjective experience. During the Spanish Civil War [in the 1930s] there was actually a prison where modern art was used to make the environment aesthetically unpleasant. Also, prison cells themselves have been designed to make them more claustrophobic, and some prison beds are specifically made to be uncomfortable.

I haven’t found any specific cases of time dilation being used in prisons, but time distortion is a technique that is sometimes used in interrogation, where people are exposed to constant light, or unusual light fluctuations, so that they can’t tell what time of day it is. But in that case it’s not being used as a punishment, per se, it’s being used to break people’s sense of reality so that they become more dependent on the interrogator, and more pliable as a result. In that sense, a time-slowing pill would be a pretty radical innovation in the history of penal technology.

I want to ask you a question that has some crossover with theological debates about hell. Suppose we eventually learn to put off death indefinitely, and that we extend this treatment to prisoners. Is there any crime that would justify eternal imprisonment? Take Hitler as a test case. Say the Soviets had gotten to the bunker before he killed himself, and say capital punishment was out of the question – would we have put him behind bars forever?

Roache: It’s tough to say. If you start out with the premise that a punishment should be proportional to the crime, it’s difficult to think of a crime that could justify eternal imprisonment. You could imagine giving Hitler one term of life imprisonment for every person killed in the Second World War. That would make for quite a long sentence, but it would still be finite. The endangerment of mankind as a whole might qualify as a sufficiently serious crime to warrant it. As you know, a great deal of the research we do here at the Oxford Martin School concerns existential risk. Suppose there was some physics experiment that stood a decent chance of generating a black hole that could destroy the planet and all future generations. If someone deliberately set up an experiment like that, I could see that being the kind of supercrime that would justify an eternal sentence.

In your forthcoming paper on this subject, you mention the possibility that convicts with a neurologically stunted capacity for empathy might one day be ‘emotionally enhanced’, and that the remorse felt by these newly empathetic criminals could be the toughest form of punishment around. Do you think a full moral reckoning with an awful crime is the most potent form of suffering an individual can endure?

Roache: I’m not sure. Obviously, it’s an empirical question as to which feels worse, genuine remorse or time in prison. There is certainly reason to take the claim seriously. For instance, in literature and folk wisdom, you often hear people saying things like, ‘The worst thing is I’ll have to live with myself.’ My own intuition is that for very serious crimes, genuine remorse could be subjectively worse than a prison sentence. But I doubt that’s the case for less serious crimes, where remorse isn’t even necessarily appropriate – like if you are wailing and beating yourself up for stealing a candy bar or something like that.

I remember watching a movie in school, about a teen that killed another teen in a drunk-driving accident. As one of the conditions of his probation, the judge in the case required him to mail a daily cheque for 25 cents to the parents of the teen he’d killed for a period of 10 years. Two years in, the teen was begging the judge to throw him in jail, just to avoid the daily reminder.

Roache: That’s an interesting case where prison is actually an escape from remorse, which is strange because one of the justifications for prison is that it’s supposed to focus your mind on what you have done wrong. Presumably, every day you wake up in prison, you ask yourself why you are there, right?

What if these emotional enhancements proved too effective? Suppose they are so powerful, they turn psychopaths into Zen masters who live in a constant state of deep, reflective contentment. Should that trouble us? Is mental suffering a necessary component of imprisonment?

Roache: There is a long-standing philosophical question as to how bad the prison experience should be. Retributivists, those who think the point of prisons is to punish, tend to think that it should be quite unpleasant, whereas consequentialists tend to be more concerned with a prison’s reformative effects, and its larger social costs. There are a number of prisons that offer prisoners constructive activities to participate in, including sports leagues, art classes, and even yoga. That practice seems to reflect the view that confinement, or the deprivation of liberty, is itself enough of a punishment. Of course, even for consequentialists, there has to be some level of suffering involved in punishment, because consequentialists are very concerned about deterrence.

I wanted to close by moving beyond imprisonment, to ask you about the future of punishment more broadly. Are there any alternative punishments that technology might enable, and that you can see on the horizon now? What surprising things might we see down the line?

Roache: We have been thinking a lot about surveillance and punishment lately. Already, we see governments using ankle bracelets to track people in various ways, and many of them are fairly elaborate. For instance, some of these devices allow you to commute to work, but they also give you a curfew and keep a close eye on your location. You can imagine this being refined further, so that your ankle bracelet bans you from entering establishments that sell alcohol. This could be used to punish people who happen to like going to pubs, or it could be used to reform severe alcoholics. Either way, technologies of this sort seem to be edging up to a level of behaviour control that makes some people uneasy, due to questions about personal autonomy.

It’s one thing to lose your personal liberty as a result of being confined in a prison, but you are still allowed to believe whatever you want while you are in there. In the UK, for instance, you cannot withhold religious manuscripts from a prisoner unless you have a very good reason. These concerns about autonomy become particularly potent when you start talking about brain implants that could potentially control behaviour directly. The classic example is Robert G Heath [a psychiatrist at Tulane University in New Orleans], who did this famously creepy experiment [in the 1950s] using electrodes in the brain in an attempt to modify behaviour in people who were prone to violent psychosis. The electrodes were ostensibly being used to treat the patients, but he was also, rather gleefully, trying to move them in a socially approved direction. You can really see that in his infamous [1972] paper on ‘curing’ homosexuals. I think most Western societies would say ‘no thanks’ to that kind of punishment.

To me, these questions about technology are interesting because they force us to rethink the truisms we currently hold about punishment. When we ask ourselves whether it’s inhumane to inflict a certain technology on someone, we have to make sure it’s not just the unfamiliarity that spooks us. And more importantly, we have to ask ourselves whether punishments like imprisonment are only considered humane because they are familiar, because we’ve all grown up in a world where imprisonment is what happens to people who commit crimes. Is it really OK to lock someone up for the best part of the only life they will ever have, or might it be more humane to tinker with their brains and set them free? When we ask that question, the goal isn’t simply to imagine a bunch of futuristic punishments – the goal is to look at today’s punishments through the lens of the future.

Link: Neil Postman: Informing Ourselves to Death

The following speech was given at a meeting of the German Informatics Society (Gesellschaft fuer Informatik) on October 11, 1990 in Stuttgart, Germany.

The great English playwright and social philosopher George Bernard Shaw once remarked that all professions are conspiracies against the common folk. He meant that those who belong to elite trades—physicians, lawyers, teachers, and scientists—protect their special status by creating vocabularies that are incomprehensible to the general public.  This process prevents outsiders from understanding what the profession is doing and why—and protects the insiders from close examination and criticism. Professions, in other words, build forbidding walls of technical gobbledegook over which the prying and alien eye cannot see.

Unlike George Bernard Shaw, I raise no complaint against this, for I consider myself a professional teacher and appreciate technical gobbledegook as much as anyone. But I do not object if occasionally someone who does not know the secrets of my trade is allowed entry to the inner halls to express an untutored point of view. Such a person may sometimes give a refreshing opinion or, even better, see something in a way that the professionals have overlooked.

I believe I have been invited to speak at this conference for just such a purpose. I do not know very much more about computer technology than the average person—which isn’t very much. I have little understanding of what excites a computer programmer or scientist, and in examining the descriptions of the presentations at this conference, I found each one more mysterious than the next. So, I clearly qualify as an outsider.

But I think that what you want here is not merely an outsider but an outsider who has a point of view that might be useful to the insiders. And that is why I accepted the invitation to speak. I believe I know something about what technologies do to culture, and I know even more about what technologies undo in a culture. In fact, I might say, at the start, that what a technology undoes is a subject that computer experts apparently know very little about. I have heard many experts in computer technology speak about the advantages that computers will bring. With one exception—namely, Joseph Weizenbaum—I have never heard anyone speak seriously and comprehensively about the disadvantages of computer technology, which strikes me as odd, and makes me wonder if the profession is hiding something important. That is to say, what seems to be lacking among computer experts is a sense of technological modesty.

After all, anyone who has studied the history of technology knows that technological change is always a Faustian bargain: Technology giveth and technology taketh away, and not always in equal measure. A new technology sometimes creates more than it destroys. Sometimes, it destroys more than it creates.  But it is never one-sided.

The invention of the printing press is an excellent example.  Printing fostered the modern idea of individuality but it destroyed the medieval sense of community and social integration. Printing created prose but made poetry into an exotic and elitist form of expression. Printing made modern science possible but transformed religious sensibility into an exercise in superstition. Printing assisted in the growth of the nation-state but, in so doing, made patriotism into a sordid if not a murderous emotion.

In the case of computer technology, there can be no disputing that the computer has increased the power of large-scale organizations like military establishments or airline companies or banks or tax collecting agencies. And it is equally clear that the computer is now indispensable to high-level researchers in physics and other natural sciences. But to what extent has computer technology been an advantage to the masses of people? To steel workers, vegetable store owners, teachers, automobile mechanics, musicians, bakers, brick layers, dentists and most of the rest into whose lives the computer now intrudes? These people have had their private matters made more accessible to powerful institutions. They are more easily tracked and controlled; they are subjected to more examinations, and are increasingly mystified by the decisions made about them. They are more often reduced to mere numerical objects. They are being buried by junk mail. They are easy targets for advertising agencies and political organizations. The schools teach their children to operate computerized systems instead of teaching things that are more valuable to children. In a word, almost nothing happens to the losers that they need, which is why they are losers.

It is to be expected that the winners—for example, most of the speakers at this conference—will encourage the losers to be enthusiastic about computer technology. That is the way of winners, and so they sometimes tell the losers that with personal computers the average person can balance a checkbook more neatly, keep better track of recipes, and make more logical shopping lists. They also tell them that they can vote at home, shop at home, get all the information they wish at home, and thus make community life unnecessary. They tell them that their lives will be conducted more efficiently, discreetly neglecting to say from whose point of view or what might be the costs of such efficiency.

Should the losers grow skeptical, the winners dazzle them with the wondrous feats of computers, many of which have only marginal relevance to the quality of the losers’ lives but which are nonetheless impressive. Eventually, the losers succumb, in part because they believe that the specialized knowledge of the masters of a computer technology is a form of wisdom. The masters, of course, come to believe this as well.  The result is that certain questions do not arise, such as, to whom will the computer give greater power and freedom, and whose power and freedom will be reduced?

Now, I have perhaps made all of this sound like a well-planned conspiracy, as if the winners know all too well what is being won and what lost. But this is not quite how it happens, for the winners do not always know what they are doing, and where it will all lead. The Benedictine monks who invented the mechanical clock in the 12th and 13th centuries believed that such a clock would provide a precise regularity to the seven periods of devotion they were required to observe during the course of the day.  As a matter of fact, it did. But what the monks did not realize is that the clock is not merely a means of keeping track of the hours but also of synchronizing and controlling the actions of men. And so, by the middle of the 14th century, the clock had moved outside the walls of the monastery, and brought a new and precise regularity to the life of the workman and the merchant. The mechanical clock made possible the idea of regular production, regular working hours, and a standardized product. Without the clock, capitalism would have been quite impossible. And so, here is a great paradox: the clock was invented by men who wanted to devote themselves more rigorously to God; and it ended as the technology of greatest use to men who wished to devote themselves to the accumulation of money. Technology always has unforeseen consequences, and it is not always clear, at the beginning, who or what will win, and who or what will lose.

I might add, by way of another historical example, that Johann Gutenberg was by all accounts a devoted Christian who would have been horrified to hear Martin Luther, the accursed heretic, declare that printing is “God’s highest act of grace, whereby the business of the Gospel is driven forward.” Gutenberg thought his invention would advance the cause of the Holy Roman See, whereas in fact, it turned out to bring a revolution which destroyed the monopoly of the Church.

We may well ask ourselves, then, is there something that the masters of computer technology think they are doing for us which they and we may have reason to regret? I believe there is, and it is suggested by the title of my talk, “Informing Ourselves to Death”. In the time remaining, I will try to explain what is dangerous about the computer, and why. And I trust you will be open enough to consider what I have to say. Now, I think I can begin to get at this by telling you of a small experiment I have been conducting, on and off, for the past several years. There are some people who describe the experiment as an exercise in deceit and exploitation but I will rely on your sense of humor to pull me through.

Here’s how it works: It is best done in the morning when I see a colleague who appears not to be in possession of a copy of The New York Times. “Did you read The Times this morning?” I ask. If the colleague says yes, there is no experiment that day. But if the answer is no, the experiment can proceed. “You ought to look at Page 23,” I say. “There’s a fascinating article about a study done at Harvard University.” “Really? What’s it about?” is the usual reply. My choices at this point are limited only by my imagination. But I might say something like this: “Well, they did this study to find out what foods are best to eat for losing weight, and it turns out that a normal diet supplemented by chocolate eclairs, eaten six times a day, is the best approach. It seems that there’s some special nutrient in the eclairs—encomial dioxin—that actually uses up calories at an incredible rate.”

Another possibility, which I like to use with colleagues who are known to be health conscious, is this one: “I think you’ll want to know about this,” I say. “The neuro-physiologists at the University of Stuttgart have uncovered a connection between jogging and reduced intelligence. They tested more than 1200 people over a period of five years, and found that as the number of hours people jogged increased, there was a corresponding decrease in their intelligence. They don’t know exactly why but there it is.”

I’m sure, by now, you understand what my role is in the experiment: to report something that is quite ridiculous—one might say, beyond belief. Let me tell you, then, some of my results: Unless this is the second or third time I’ve tried this on the same person, most people will believe or at least not disbelieve what I have told them. Sometimes they say: “Really? Is that possible?” Sometimes they do a double-take, and reply, “Where’d you say that study was done?” And sometimes they say, “You know, I’ve heard something like that.”

Now, there are several conclusions that might be drawn from these results, one of which was expressed by H. L. Mencken fifty years ago when he said, there is no idea so stupid that you can’t find a professor who will believe it. This is more of an accusation than an explanation but in any case I have tried this experiment on non-professors and get roughly the same results. Another possible conclusion is one expressed by George Orwell—also about 50 years ago—when he remarked that the average person today is about as naive as was the average person in the Middle Ages. In the Middle Ages people believed in the authority of their religion, no matter what. Today, we believe in the authority of our science, no matter what.

But I think there is still another and more important conclusion to be drawn, related to Orwell’s point but rather off at a right angle to it. I am referring to the fact that the world in which we live is very nearly incomprehensible to most of us. There is almost no fact—whether actual or imagined—that will surprise us for very long, since we have no comprehensive and consistent picture of the world which would make the fact appear as an unacceptable contradiction. We believe because there is no reason not to believe. No social, political, historical, metaphysical, logical or spiritual reason. We live in a world that, for the most part, makes no sense to us. Not even technical sense. I don’t mean to try my experiment on this audience, especially after having told you about it, but if I informed you that the seats you are presently occupying were actually made by a special process which uses the skin of a Bismark herring, on what grounds would you dispute me? For all you know—indeed, for all I know—the skin of a Bismark herring could have made the seats on which you sit. And if I could get an industrial chemist to confirm this fact by describing some incomprehensible process by which it was done, you would probably tell someone tomorrow that you spent the evening sitting on a Bismark herring.

Perhaps I can get a bit closer to the point I wish to make with an analogy: If you opened a brand-new deck of cards, and started turning the cards over, one by one, you would have a pretty good idea of what their order is. After you had gone from the ace of spades through the nine of spades, you would expect a ten of spades to come up next. And if a three of diamonds showed up instead, you would be surprised and wonder what kind of deck of cards this is. But if I gave you a deck that had been shuffled twenty times, and then asked you to turn the cards over, you would not expect any card in particular; a three of diamonds would be just as likely as a ten of spades. Having no basis for assuming a given order, you would have no reason to react with disbelief or even surprise to whatever card turns up.

The point is that, in a world without spiritual or intellectual order, nothing is unbelievable; nothing is predictable, and therefore, nothing comes as a particular surprise.

In fact, George Orwell was more than a little unfair to the average person in the Middle Ages. The belief system of the Middle Ages was rather like my brand-new deck of cards. There existed an ordered, comprehensible world-view, beginning with the idea that all knowledge and goodness come from God. What the priests had to say about the world was derived from the logic of their theology. There was nothing arbitrary about the things people were asked to believe, including the fact that the world itself was created at 9 AM on October 23 in the year 4004 B. C. That could be explained, and was, quite lucidly, to the satisfaction of anyone. So could the fact that 10,000 angels could dance on the head of a pin. It made quite good sense, if you believed that the Bible is the revealed word of God and that the universe is populated with angels. The medieval world was, to be sure, mysterious and filled with wonder, but it was not without a sense of order. Ordinary men and women might not clearly grasp how the harsh realities of their lives fit into the grand and benevolent design, but they had no doubt that there was such a design, and their priests were well able, by deduction from a handful of principles, to make it, if not rational, at least coherent.

The situation we are presently in is much different. And I should say, sadder and more confusing and certainly more mysterious. It is rather like the shuffled deck of cards I referred to. There is no consistent, integrated conception of the world which serves as the foundation on which our edifice of belief rests. And therefore, in a sense, we are more naive than those of the Middle Ages, and more frightened, for we can be made to believe almost anything. The skin of a Bismark herring makes about as much sense as a vinyl alloy or encomial dioxin.

Now, in a way, none of this is our fault. If I may turn the wisdom of Cassius on its head: the fault is not in ourselves but almost literally in the stars. When Galileo turned his telescope toward the heavens, and allowed Kepler to look as well, they found no enchantment or authorization in the stars, only geometric patterns and equations. God, it seemed, was less of a moral philosopher than a master mathematician. This discovery helped to give impetus to the development of physics but did nothing but harm to theology. Before Galileo and Kepler, it was possible to believe that the Earth was the stable center of the universe, and that God took a special interest in our affairs. Afterward, the Earth became a lonely wanderer in an obscure galaxy in a hidden corner of the universe, and we were left to wonder if God had any interest in us at all. The ordered, comprehensible world of the Middle Ages began to unravel because people no longer saw in the stars the face of a friend.

And something else, which once was our friend, turned against us, as well. I refer to information. There was a time when information was a resource that helped human beings to solve specific and urgent problems of their environment. It is true enough that in the Middle Ages, there was a scarcity of information but its very scarcity made it both important and usable. This began to change, as everyone knows, in the late 15th century when a goldsmith named Gutenberg, from Mainz, converted an old wine press into a printing machine, and in so doing, created what we now call an information explosion. Forty years after the invention of the press, there were printing machines in 110 cities in six different countries; 50 years after, more than eight million books had been printed, almost all of them filled with information that had previously not been available to the average person. Nothing could be more misleading than the idea that computer technology introduced the age of information. The printing press began that age, and we have not been free of it since.

But what started out as a liberating stream has turned into a deluge of chaos. If I may take my own country as an example, here is what we are faced with: In America, there are 260,000 billboards; 11,520 newspapers; 11,556 periodicals; 27,000 video outlets for renting tapes; 362 million TV sets; and over 400 million radios. There are 40,000 new book titles published every year (300,000 world-wide) and every day in America 41 million photographs are taken, and just for the record, over 60 billion pieces of advertising junk mail come into our mail boxes every year. Everything from telegraphy and photography in the 19th century to the silicon chip in the twentieth has amplified the din of information, until matters have reached such proportions today that for the average person, information no longer has any relation to the solution of problems.

The tie between information and action has been severed. Information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one’s status. It comes indiscriminately, directed at no one in particular, disconnected from usefulness; we are glutted with information, drowning in information, have no control over it, don’t know what to do with it.

And there are two reasons we do not know what to do with it. First, as I have said, we no longer have a coherent conception of ourselves, and our universe, and our relation to one another and our world. We no longer know, as the Middle Ages did, where we come from, and where we are going, or why. That is, we don’t know what information is relevant, and what information is irrelevant to our lives. Second, we have directed all of our energies and intelligence to inventing machinery that does nothing but increase the supply of information. As a consequence, our defenses against information glut have broken down; our information immune system is inoperable. We don’t know how to filter it out; we don’t know how to reduce it; we don’t know how to use it. We suffer from a kind of cultural AIDS.

Link: Rural > City > Cyberspace

A series of psychological studies over the past 20 years has revealed that after spending time in a quiet rural setting, close to nature, people exhibit greater attentiveness, stronger memory, and generally improved cognition. Their brains become both calmer and sharper. The reason, according to attention restoration theory, or ART, is that when people aren’t being bombarded by external stimuli, their brains can, in effect, relax. They no longer have to tax their working memories by processing a stream of bottom-up distractions. The resulting state of contemplativeness strengthens their ability to control their mind.

The results of the most recent such study were published in Psychological Science at the end of 2008. A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous and mentally fatiguing series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.

The researchers then conducted a similar experiment with another set of people. Rather than taking walks between the rounds of testing, these subjects simply looked at photographs of either calm rural scenes or busy urban ones. The results were the same. The people who looked at pictures of nature scenes were able to exert substantially stronger control over their attention, while those who looked at city scenes showed no improvement in their attentiveness. “In sum,” concluded the researchers, “simple and brief interactions with nature can produce marked increases in cognitive control.” Spending time in the natural world seems to be of “vital importance” to “effective cognitive functioning.”

There is no Sleepy Hollow on the internet, no peaceful spot where contemplativeness can work its restorative magic. There is only the endless, mesmerizing buzz of the urban street. The stimulations of the web, like those of the city, can be invigorating and inspiring. We wouldn’t want to give them up. But they are, as well, exhausting and distracting. They can easily, as Hawthorne understood, overwhelm all quieter modes of thought. One of the greatest dangers we face as we automate the work of our minds, as we cede control over the flow of our thoughts and memories to a powerful electronic system, is the one that informs the fears of both the scientist Joseph Weizenbaum and the artist Richard Foreman: a slow erosion of our humanness and our humanity.

It’s not only deep thinking that requires a calm, attentive mind. It’s also empathy and compassion. Psychologists have long studied how people experience fear and react to physical threats, but it’s only recently that they’ve begun researching the sources of our nobler instincts. What they’re finding is that, as Antonio Damasio, the director of USC’s Brain and Creativity Institute, explains, the higher emotions emerge from neural processes that “are inherently slow.” In one recent experiment, Damasio and his colleagues had subjects listen to stories describing people experiencing physical or psychological pain. The subjects were then put into a magnetic resonance imaging machine and their brains were scanned as they were asked to remember the stories. The experiment revealed that while the human brain reacts very quickly to demonstrations of physical pain – when you see someone injured, the primitive pain centers in your own brain activate almost instantaneously – the more sophisticated mental process of empathizing with psychological suffering unfolds much more slowly. It takes time, the researchers discovered, for the brain “to transcend immediate involvement of the body” and begin to understand and to feel “the psychological and moral dimensions of a situation.”

The experiment, say the scholars, indicates that the more distracted we become, the less able we are to experience the subtlest, most distinctively human forms of empathy, compassion, and other emotions. “For some kinds of thoughts, especially moral decision-making about other people’s social and psychological situations, we need to allow for adequate time and reflection,” cautions Mary Helen Immordino-Yang, a member of the research team. “If things are happening too fast, you may not ever fully experience emotions about other people’s psychological states.” It would be rash to jump to the conclusion that the internet is undermining our moral sense. It would not be rash to suggest that as the net reroutes our vital paths and diminishes our capacity for contemplation, it is altering the depth of our emotions as well as our thoughts.

There are those who are heartened by the ease with which our minds are adapting to the web’s intellectual ethic. “Technological progress does not reverse,” writes a Wall Street Journal columnist, “so the trend toward multitasking and consuming many different types of information will only continue.” We need not worry, though, because our “human software” will in time “catch up to the machine technology that made the information abundance possible.” We’ll “evolve” to become more agile consumers of data. The writer of a cover story in New York magazine says that as we become used to “the 21st-century task” of “flitting” among bits of online information, “the wiring of the brain will inevitably change to deal more efficiently with more information.” We may lose our capacity “to concentrate on a complex task from beginning to end,” but in recompense we’ll gain new skills, such as the ability to “conduct 34 conversations simultaneously across six different media.” A prominent economist writes, cheerily, that “the web allows us to borrow cognitive strengths from autism and to be better infovores.” An Atlantic author suggests that our “technology-induced ADD” may be “a short-term problem,” stemming from our reliance on “cognitive habits evolved and perfected in an era of limited information flow.” Developing new cognitive habits is “the only viable approach to navigating the age of constant connectivity.”

These writers are certainly correct in arguing that we’re being molded by our new information environment. Our mental adaptability, built into the deepest workings of our brains, is a keynote of intellectual history. But if there’s comfort in their reassurances, it’s of a very cold sort. Adaptation leaves us better suited to our circumstances, but qualitatively it’s a neutral process. What matters in the end is not our becoming but what we become. In the 1950s, Martin Heidegger observed that the looming “tide of technological revolution” could “so captivate, bewitch, dazzle, and beguile man that calculative thinking may someday come to be accepted and practiced as the only way of thinking.” Our ability to engage in “meditative thinking,” which he saw as the very essence of our humanity, might become a victim of headlong progress. The tumultuous advance of technology could, like the arrival of the locomotive at the Concord station, drown out the refined perceptions, thoughts, and emotions that arise only through contemplation and reflection. The “frenziedness of technology,” Heidegger wrote, threatens to “entrench itself everywhere.”

It may be that we are now entering the final stage of that entrenchment. We are welcoming the frenziedness into our souls.

Link: Forever Alone: Why Loneliness Matters in the Social Age

I got up and went over and looked out the window. I felt so lonesome, all of a sudden. I almost wished I was dead. Boy, did I feel rotten. I felt so damn lonesome. I just didn’t want to hang around any more. It made me too sad and lonesome.

— J.D. Salinger in The Catcher in the Rye

Loneliness was a problem I experienced most poignantly in college. In the three years I spent at Carnegie Mellon, the crippling effects of loneliness slowly pecked away at my enthusiasm for learning and for life, until I was drowning in an endless depressive haze that never completely cleared until I left Pittsburgh.

It wasn’t for lack of trying either. At the warm behest of the orientation counselors, I joined just the right number of clubs, participated in most of the dorm activities, and tried to expand my social portfolio as much as possible.

None of it worked.

When I sought out CAPS (our student psych and counseling service) for help, the platitudes they offered as advice (“Just put yourself out there!”) only served to confirm my suspicion that loneliness isn’t a very visible problem. (After all, the cure for loneliness isn’t exactly something that could be prescribed. “Have you considered transferring?” they finally suggested, after exhausting their list of thought-terminating clichés. I graduated early instead.)

As prolonged loneliness took its toll, I became very unhappy—to put it lightly—and even in retrospect I have difficulty pinpointing a specific cause. It wasn’t that I didn’t know anyone or failed to make any friends, and it wasn’t that I was alone more than I liked.

Sure, I could point my finger at the abysmally fickle weather patterns of Pittsburgh, or the pseudo-suburban bubble that envelops the campus. There might even be a correlation between my academic dissonance with computer science and my feelings of loneliness. I might also just be an extremely unlikable person.

Whatever the reason (or a confluence thereof), the reality remained that I struggled with loneliness throughout my time in college.

+++

I recall a conversation with my friend Dev one particular evening on the patio of our dormitory. It was the beginning of my junior and last year at CMU, and I had just finished throwing an ice cream party for the residents I oversaw as an RA.

“Glad to be back?” he asked as he plopped down on a lawn chair beside me.

“No, not really.”

The sun was setting, and any good feelings about the upcoming semester with it. We made small talk about the school in general, as he had recently transferred, but eventually Dev asked me if I was happy there.

“No, not really.”

“Why do you think you’re so miserable here?”

“I don’t know. A lot of things, I guess. But mostly because I feel lonely. Like I don’t belong, like I can’t relate to or connect with anyone on an emotional level. I haven’t made any quality relationships here that I would look back on with any fond memories. Fuck… I don’t know what to do.”

College, at least for me, was a harrowing exercise in how helplessly debilitating, hopelessly soul-crushing, and at times life-threatening loneliness could be. It’s a problem nobody talks about, and it’s been a subject of much personal relevance and interest.

Loneliness as a Health Problem

A recent article published on Slate outlines the hidden dangers of social isolation. Chronic loneliness, as Jessica Olien discovered, poses serious health risks that impact not only mental health but physiological well-being as well.

The lack of quality social relationships in a person’s life has been linked to an increased mortality risk comparable to that of smoking and alcohol consumption, one that exceeds the influence of other risk factors like physical inactivity and obesity. It’s hard to brush off loneliness as a character flaw or an ephemeral feeling when you realize it kills more people than obesity.

Research also shows that loneliness diminishes sleep quality and impairs physiological function, in some cases reducing immune function and boosting inflammation, which increases risk for diabetes and heart disease.

Why hasn’t loneliness gotten much attention as a medical problem? Olien shares the following observation:

As a culture we obsess over strategies to prevent obesity. We provide resources to help people quit smoking. But I have never had a doctor ask me how much meaningful social interaction I am getting. Even if a doctor did ask, it is not as though there is a prescription for meaningful social interaction.

As a society we look down upon those who admit to being lonely; we cast them out and ostracize them with labels like “loners,” so that they prefer to hide behind shame and doubt rather than speak up. This dynamic only makes it harder to devise solutions to what is clearly a larger societal issue, and it certainly calls into question the effects of culture on our perception of loneliness as a problem.

Loneliness as a Culture Problem

Stephen Fry, in a blog post titled Only the Lonely, which explains his suicide attempt last year, describes in detail his struggle with depression. His account offers a rare and candid glimpse into the reality of loneliness that those afflicted often hide from the public:

Lonely? I get invitation cards through the post almost every day. I shall be in the Royal Box at Wimbledon and I have serious and generous offers from friends asking me to join them in the South of France, Italy, Sicily, South Africa, British Columbia and America this summer. I have two months to start a book before I go off to Broadway for a run of Twelfth Night there.

I can read back that last sentence and see that, bipolar or not, if I’m under treatment and not actually depressed, what the fuck right do I have to be lonely, unhappy or forlorn? I don’t have the right. But there again I don’t have the right not to have those feelings. Feelings are not something to which one does or does not have rights.

In the end loneliness is the most terrible and contradictory of my problems.

In the United States, approximately 60 million people, or 20% of the population, feel lonely. According to the General Social Survey, between 1985 and 2004, the number of people with whom the average American discusses important matters decreased from three to two, and the number with no one to discuss important matters with tripled.

Modernization has been cited as a reason for the intensification of loneliness in every society around the world, attributed to greater migration, smaller household sizes, and a larger degree of media consumption.

In Japan, loneliness is an even more pervasive, layered problem mired in cultural parochialisms. Gideon Lewis-Kraus pens a beautiful narrative in Harper’s in which he describes his foray into the world of Japanese co-sleeping cafés:

“Why do you think he came here, to the sleeping café?”

“He wanted five-second hug maybe because he had no one to hug. Japan is haji culture. Shame. Is shame culture. Or maybe also is shyness. I don’t know why. Tokyo people … very alone. And he does not have … ” She thought for a second, shrugged, reached for her phone. “Please hold moment.”

She held it close to her face, multitouched the screen not with thumb and forefinger but with tiny forefinger and middle finger. I could hear another customer whispering in Japanese in the silk-walled cubicle at our feet. His co-sleeper laughed loudly, then laughed softly. Yukiko tapped a button and shone the phone at my face. The screen said COURAGE.

It took an enormous effort for me to come to terms with my losing battle with loneliness and the ensuing depression at CMU, and an even greater leap of faith to reach out for help. (That it was to no avail is another story altogether.) But what is even more disconcerting to me is that the general stigma against loneliness and mental health issues, hinging on an unhealthy stress culture, makes it hard for afflicted students to seek assistance at all.

As Olien puts it, “In a society that judges you based on how expansive your social networks appear, loneliness is difficult to fess up to. It feels shameful.”

To truly combat loneliness from a cultural angle, we need to start by examining our own fears about being alone and to recognize that as humans, loneliness is often symptomatic of our unfulfilled social needs. Most importantly, we need to accept that it’s okay to feel lonely. Fry, signing off on his heartfelt post, offers this insight:

Loneliness is not much written about (my spell-check wanted me to say that loveliness is not much written about—how wrong that is) but humankind is a social species and maybe it’s something we should think about more than we do.

Loneliness as a Technology Problem

Technology, and by extension media consumption in the Internet age, adds the most perplexing (and perhaps the most interesting) dimension to the loneliness problem. As it turns out, technology isn’t necessarily helping us feel more connected; in some cases, it makes loneliness worse.

The amount of time you spend on Facebook, as a recent study found, is inversely related to how happy you feel throughout the day.

Take a moment to watch this video.

It’s a powerful, sobering reminder that our growing dependence on technology to communicate has serious social repercussions, and in it Cohen presents his central thesis:

We are lonely, but we’re afraid of intimacy, while the social networks offer us three gratifying fantasies: 1) That we can put our attention wherever we want it to be. 2) That we will always be heard. 3) That we will never have to be alone.

And that third idea, that we will never have to be alone, is central to changing our psyches. It’s shaping a new way of being. The best way to describe it is:

I share, therefore I am.

Public discourse on the cultural ramifications of technology is certainly not a recent development, and the general sentiment that our perverse obsession with sharing will be humanity’s downfall continues to echo in various forms around the web: articles proclaiming that Instagram is ruining people’s lives, the existence of a section on Reddit called cringepics where people congregate to ridicule things others post on the Internet, the increasing number of self-proclaimed “social media gurus” on Twitter, to name a few.

The signs seem to suggest we have reached a tipping point for “social” media that’s not very social on a personal level, but whether it means a catastrophic implosion or a gradual return to more authentic forms of interpersonal communications remains to be seen.

While technology has been a source of social isolation for many, it has the capacity to alleviate loneliness as well. A study funded by the online dating site eHarmony shows that couples who met online are less likely to divorce and report greater marital satisfaction than those who met in real life.

The same model could potentially be applied to friendships, and it’s frustrating to see that there aren’t more startups leveraging this opportunity when the problem is so immediate and in need of solutions. It’s a matter of exposure and education on the truths of loneliness, and unfortunately we’re just not there yet.

+++

The perils of loneliness shouldn’t be overlooked in an increasingly hyperconnected world that often tells another story through rose-tinted lenses. Rather, the gravity of loneliness should be addressed and brought to light as a multifaceted problem, one often muted and stigmatized in our society. I learned firsthand how painfully real a problem loneliness can be, and more should be done to raise awareness of it and to help those affected.

“What do you think I should do?” I looked at Dev as the last traces of sunlight teetered over the top of Morewood Gardens. It was a rhetorical question—things weren’t about to get better.

“Find better people,” he replied.

I offered him a weak smile in return, but little did I know then how prescient those words were.

In the year that followed, I started a fraternity with some of the best kids I’d come to know (Dev included), graduated college and moved to San Francisco, made some of the best friends I’ve ever had, and never looked back, if only to remember, and remember well, that it’s never easy being lonely.

Link: "We Need to Talk About TED"

This is my rant against TED, placebo politics, “innovation,” middlebrow megachurch infotainment, etc., given at TEDx San Diego at their invitation (thank you to Jack Abbott and Felena Hanson). It’s very difficult to do anything interesting within the format, and even this seems like far too much of a ‘TED talk’, especially to me. In California R&D World, TED (and TED-ism) is unfortunately a key forum for how people communicate with one another. It’s weird, inadequate and symptomatic, to be sure, but it is one of ‘our’ key public squares, however degraded and captured. Obviously any sane intellectual wouldn’t go near it. Perhaps that’s why I was (am) curious about what (if any) reverberation my very minor heresy might have: probably nothing, and at worst an alibi and vaccine for TED to ward off the malaise that stalks them? We’ll have to see. The text of the talk is below, and was also published as an Op-Ed by The Guardian.

In our culture, talking about the future is sometimes a polite way of saying things about the present that would otherwise be rude or risky.

But have you ever wondered why so little of the future promised in TED talks actually happens? So much potential and enthusiasm, and so little actual change. Are the ideas wrong? Or is the idea about what ideas can do all by themselves wrong?

I write about entanglements of technology and culture, how technologies enable the making of certain worlds, and at the same time how culture structures how those technologies will evolve, this way or that. It’s where philosophy and design intersect.

So the conceptualization of possibilities is something that I take very seriously. That’s why I, and many people, think it’s way past time to take a step back and ask some serious questions about the intellectual viability of things like TED.

So my TED talk is not about my work or my new book—the usual spiel—but about TED itself, what it is and why it doesn’t work.

The first reason is over-simplification.

To be clear, I think that having smart people who do very smart things explain what they are doing in a way that everyone can understand is a good thing. But TED goes way beyond that.

Let me tell you a story. I was at a presentation that a friend, an Astrophysicist, gave to a potential donor. I thought the presentation was lucid and compelling (and I’m a Professor of Visual Arts here at UC San Diego so at the end of the day, I know really nothing about Astrophysics). After the talk the sponsor said to him, “you know what, I’m gonna pass because I just don’t feel inspired… you should be more like Malcolm Gladwell.”

At this point I kind of lost it. Can you imagine?

Think about it: an actual scientist who produces actual knowledge should be more like a journalist who recycles fake insights! This is beyond popularization. This is taking something with value and substance  and coring it out so that it can be swallowed without chewing. This is not the solution to our most frightening problems—rather this is one of our most frightening problems.

So I ask the question: does TED epitomize a situation in which a scientist (or an artist or philosopher or activist or whoever) is told that their work is not worthy of support, because the public doesn’t feel good listening to them?

I submit that Astrophysics run on the model of American Idol is a recipe for civilizational disaster.

What is TED?

So what is TED exactly?

Perhaps it’s the proposition that if we talk about world-changing ideas enough, then the world will change.  But this is not true, and that’s the second problem.

TED of course stands for Technology, Entertainment, Design, and I’ll talk a bit about all three. I think TED actually stands for: middlebrow megachurch infotainment.

The key rhetorical device for TED talks is a combination of epiphany and personal testimony (an “epiphimony” if you like) through which the speaker shares a personal journey of insight and realization, its triumphs and tribulations.

What is it that the TED audience hopes to get from this? A vicarious insight, a fleeting moment of wonder, an inkling that maybe it’s all going to work out after all? A spiritual buzz?

I’m sorry but this fails to meet the challenges that we are supposedly here to confront. These are complicated and difficult and are not given to tidy just-so solutions. They don’t care about anyone’s experience of optimism. Given the stakes, making our best and brightest waste their time (and the audience’s time) dancing like infomercial hosts is too high a price. It is cynical.

Also, it just doesn’t work.

Recently there was a bit of a dust-up when TED Global sent out a note to TEDx organizers asking them not to book speakers whose work spans the paranormal, the conspiratorial, New Age “quantum neuroenergy,” etc.: what is called Woo. Instead of these placebos, TEDx should curate talks that are imaginative but grounded in reality. In fairness, they took some heat, so their gesture should be acknowledged. A lot of people take TED very seriously, and might lend credence to specious ideas if stamped with TED credentials. “No” to placebo science and medicine.

But… the corollaries of placebo science and placebo medicine are placebo politics and placebo innovation. On this point, TED has a long way to go.

Perhaps the pinnacle of placebo politics and innovation was featured at TEDx San Diego in 2011. You’re familiar I assume with Kony2012, the social media campaign to stop war crimes in central Africa? So what happened here? Evangelical surfer Bro goes to help kids in Africa. He makes a campy video explaining genocide to the cast of Glee. The world finds his public epiphany to be shallow to the point of self-delusion. The complex geopolitics of Central Africa are left undisturbed. Kony’s still there. The end.

You see, when inspiration becomes manipulation, inspiration becomes obfuscation. If you are not cynical you should be skeptical. You should be as skeptical of placebo politics as you are of placebo medicine.

T and Technology

T - E - D. I’ll go through them each quickly.

So first Technology…

We hear that not only is change accelerating but that the pace of change is accelerating as well.

While this is true of computational carrying-capacity at a planetary level, at the same time—and in fact the two are connected—we are also in a moment of cultural de-acceleration.

We invest our energy in futuristic information technologies, including our cars, but drive them home to kitsch architecture copied from the 18th century. The future on offer is one in which everything changes, so long as everything stays the same. We’ll have Google Glass, but still also business casual.

This timidity is our path to the future? No, this is incredibly conservative, and there is no reason to think that more Gigaflops will inoculate us.

Because, if a problem is in fact endemic to a system, then the exponential effects of Moore’s Law also serve to amplify what’s broken. It is more computation along the wrong curve, and I don’t think this is necessarily a triumph of reason.

Part of my work explores deep technocultural shifts, from post-humanism to the post-anthropocene, but TED’s version has too much faith in technology, and not nearly enough commitment to technology. It is placebo technoradicalism, toying with risk so as to re-affirm the comfortable.

So our machines get smarter and we get stupider. But it doesn’t have to be like that. Both can be much more intelligent. Another futurism is possible.

E and Economics

A better ‘E’ in TED would stand for Economics, and the need for, yes, imagining and designing different systems of valuation, exchange, accounting of transaction externalities, financing of coordinated planning, etc. Because States plus Markets, States versus Markets, these are insufficient models, and our conversation is stuck in Cold War gear.

Worse is when economics is debated like metaphysics, as if the reality of a system is merely a bad example of the ideal.

Communism in theory is an egalitarian utopia.

Actually existing Communism meant ecological devastation, government spying, crappy cars and gulags.

Capitalism in theory is rocket ships, nanomedicine, and Bono saving Africa.

Actually existing Capitalism means Walmart jobs, McMansions, people living in the sewers under Las Vegas, Ryan Seacrest…plus —ecological devastation, government spying, crappy public transportation and for-profit prisons.

Our options for change range from basically what we have plus a little more Hayek, to what we have plus a little more Keynes. Why?

The most  recent centuries have seen extraordinary accomplishments in improving quality of life. The paradox is that the system we have now —whatever you want to call it— is in the short term what makes the amazing new technologies possible, but in the long run it is also what suppresses their full flowering.  Another economic architecture is prerequisite.

D and Design

Instead of our designers prototyping the same “change agent for good” projects over and over again, and then wondering why they don’t get implemented at scale, perhaps we should resolve that design is not some magic answer. Design matters a lot, but for very different reasons. It’s easy to get enthusiastic about design because, like talking about the future, it is more polite than referring to white elephants in the room.

Such as…

Phones, drones and genomes, that’s what we do here in San Diego and La Jolla. In addition to the other  insanely great things these technologies do, they are the basis of NSA spying, flying robots killing people, and the wholesale privatization of  biological life itself. That’s also what we do.

The potential of these technologies is both wonderful and horrifying at the same time, and to make them serve good futures, design as “innovation” just isn’t a strong enough idea by itself. We need to talk more about design as “immunization,” actively preventing certain potential “innovations” that we do not want from happening.

And so…

As for one simple take away… I don’t have one simple take away, one magic idea. That’s kind of the point. I will say that if and when the key problems facing our species were to be solved, then perhaps many of us in this room would be out of work (and perhaps in jail).

But it’s not as though there is a shortage of topics for serious discussion. We need a deeper conversation about the difference between digital cosmopolitanism and Cloud Feudalism (and toward that, a queer history of computer science and Alan Turing’s birthday as holiday!)

I would like new maps of the world, ones not based on settler colonialism, legacy genomes and bronze age myths, but instead on something more… scalable.

TED today is not that.

Problems are not “puzzles” to be solved. That metaphor assumes that all the necessary pieces are already on the table, they just need to be re-arranged and re-programmed. It’s not true.

“Innovation” defined as moving the pieces around and adding more processing power is not some Big Idea that will disrupt a broken status quo: that precisely is the broken status quo.

One TED speaker said recently, “If you remove this boundary, …the only boundary left is our imagination.” Wrong.

If we really want transformation, we have to slog through the hard stuff (history, economics, philosophy, art, ambiguities, contradictions).  Bracketing it off to the side to focus just on technology, or just on innovation, actually prevents transformation.

Instead of dumbing-down the future, we need to raise the level of general understanding to the level of complexity of the systems in which we are embedded and which are embedded in us. This is not about “personal stories of inspiration,” it’s about the difficult and uncertain work of de-mystification and re-conceptualization: the hard stuff that really changes how we think. More Copernicus, less Tony Robbins.

At a societal level, the bottom line is that if we invest in things that make us feel good but which don’t work, and don’t invest in things that don’t make us feel good but which may solve problems, then our fate is that it will just get harder to feel good about not solving problems.

In this case the placebo is worse than ineffective; it’s harmful. It diverts your interest, enthusiasm and outrage until it’s absorbed into this black hole of affectation.

Keep calm and carry on “innovating”… is that the real message of TED? To me that’s not inspirational, it’s cynical.

In the U.S. the right-wing has certain media channels that allow it to bracket reality… other constituencies have TED.  


"All Watched Over by Machines of Loving Grace" by Adam Curtis
A series of films about how humans have been colonized by the machines they have built. Although we don’t realize it, the way we see everything in the world today is through the eyes of the computers. It claims that computers have failed to liberate us and instead have distorted and simplified our view of the world around us.
1. Love and Power. This is the story of the dream that rose up in the 1990s that computers could create a new kind of stable world. They would bring about a new kind global capitalism free of all risk and without the boom and bust of the past. They would also abolish political power and create a new kind of democracy through the Internet where millions of individuals would be connected as nodes in cybernetic systems - without hierarchy.
2. The Use and Abuse of Vegetational Concepts. This is the story of how our modern scientific idea of nature, the self-regulating ecosystem, is actually a machine fantasy. It has little to do with the real complexity of nature. It is based on cybernetic ideas that were projected on to nature in the 1950s by ambitious scientists. A static machine theory of order that sees humans, and everything else on the planet, as components - cogs - in a system.
3. The Monkey in the Machine and the Machine in the Monkey. This episode looks at why we humans find this machine vision so beguiling. The film argues it is because all political dreams of changing the world for the better seem to have failed - so we have retreated into machine-fantasies that say we have no control over our actions because they excuse our failure.

"All Watched Over by Machines of Loving Grace" by Adam Curtis

A series of films about how humans have been colonized by the machines they have built. Although we don’t realize it, the way we see everything in the world today is through the eyes of the computers. It claims that computers have failed to liberate us and instead have distorted and simplified our view of the world around us.

1. Love and Power. This is the story of the dream that rose up in the 1990s that computers could create a new kind of stable world. They would bring about a new kind of global capitalism free of all risk and without the boom and bust of the past. They would also abolish political power and create a new kind of democracy through the Internet where millions of individuals would be connected as nodes in cybernetic systems - without hierarchy.

2. The Use and Abuse of Vegetational Concepts. This is the story of how our modern scientific idea of nature, the self-regulating ecosystem, is actually a machine fantasy. It has little to do with the real complexity of nature. It is based on cybernetic ideas that were projected on to nature in the 1950s by ambitious scientists. A static machine theory of order that sees humans, and everything else on the planet, as components - cogs - in a system.

3. The Monkey in the Machine and the Machine in the Monkey. This episode looks at why we humans find this machine vision so beguiling. The film argues it is because all political dreams of changing the world for the better seem to have failed - so we have retreated into machine-fantasies that say we have no control over our actions because they excuse our failure.

Link: Pandora's Vox

Carmen “humdog” Hermosillo’s essay Pandora’s Vox, an analysis of internet communities, remains startlingly accurate 20 years later. You may recognize parts of it from Adam Curtis’ documentary All Watched Over by Machines of Loving Grace.

When I went into cyberspace I went into it thinking that it was a place like any other place and that it would be a human interaction like any other human interaction. I was wrong when I thought that. It was a terrible mistake. 



The very first understanding that I had that it was not a place like any place and that the interaction would be different was when people began to talk to me as though I were a man. When they wrote about me in the third person, they would say ‘he.’ It interested me to have people think I was ‘he’ instead of ‘she’ and so at first I did not say anything. I grinned and let them think I was ‘he.’ This went on for a little while and it was fun but after a while I was uncomfortable. Finally I said unto them that I, humdog, was a woman and not a man. This surprised them. At that moment I realized that the dissolution of gender-category was something that was happening everywhere, and perhaps it was only just very obvious on the net. This is the extent of my homage to Gender On The Net.



I suspect that cyberspace exists because it is the purest manifestation of the mass (masse) as Jean Baudrillard described it. It is a black hole; it absorbs energy and personality and then re-presents it as spectacle. People tend to express their vision of the mass as a kind of imaginary parade of blue-collar workers, their muscle-bound arms raised in defiant salute. Sometimes in this vision they are holding wrenches in their hands. Anyway, this image has its origins in Marx and it is as Romantic as a dozen long-stemmed red roses. The mass is more like one of those faceless dolls you find in nostalgia-craft shops: limp, cute, and silent. When I say ‘cute’ I am including its macabre and sinister aspects within my definition.



It is fashionable to suggest that cyberspace is some kind of _island of the blessed_ where people are free to indulge and express their Individuality. Some people write about cyberspace as though it were a 60′s utopia. In reality, this is not true. Major online services, like CompuServe and America Online, regularly guide and censor discourse. Even some allegedly free-wheeling (albeit politically correct) boards like the WELL censor discourse. The difference is only a matter of the method and degree. What interests me about this, however, is that to the mass, the debate about freedom of expression exists only in terms of whether or not you can say fuck or look at sexually explicit pictures. I have a quaint view that makes me think that discussing the ability to write ‘fuck’ or worrying about the ability to look at pictures of sexual acts constitutes The Least Of Our Problems surrounding freedom of expression.



Western society has a problem with appearance and reality. It wants to split them off from each other, make one more real than the other, and invest one with more meaning than it does the other. There are two people who have something to say about this: Nietzsche and Baudrillard. I invoke his or her names in case somebody thinks I made this up. Nietzsche thinks that the conflict over these ideas cannot be resolved. Baudrillard thinks that it was resolved and that this is how come some people think that communities can be virtual: we prefer simulation (simulacra) to reality. Image and simulacra exert tremendous power upon culture. And it is this tension that informs all the debates about Real and Not-Real that infect cyberspace with regards to identity, relationship, gender, discourse, and community. Almost every discussion in cyberspace, about cyberspace, boils down to some sort of debate about Truth-In-Packaging.



Cyberspace is mostly a silent place. In its silence it shows itself to be an expression of the mass. One might question the idea of silence in a place where millions of user-ids parade around like angels of light, looking to see whom they might, so to speak, consume. The silence is nonetheless present and it is most present, paradoxically at the moment that the user-id speaks. When the user-id posts to a board, it does so while dwelling within an illusion that no one is present. Language in cyberspace is a frozen landscape.



I have seen many people spill their guts on-line, and I did so myself until, at last, I began to see that I had commoditized myself. Commodification means that you turn something into a product, which has a money-value. In the nineteenth century, commodities were made in factories, which Karl Marx called ‘the means of production.’ Capitalists were people who owned the means of production, and the commodities were made by workers who were mostly exploited. I created my interior thoughts as a means of production for the corporation that owned the board I was posting to, and that commodity was being sold to other commodity/consumer entities as entertainment. That means that I sold my soul like a tennis shoe and I derived no profit from the sale of my soul. People who post frequently on boards appear to know that they are factory equipment and tennis shoes, and sometimes trade sends and email about how their contributions are not appreciated by management.

As if this were not enough, all of my words were made immortal by means of tape backups. Furthermore, I was paying two bucks an hour for the privilege of commodifying and exposing myself. Worse still, I was subjecting myself to the possibility of scrutiny by such friendly folks as the FBI: they can, and have, downloaded pretty much whatever they damn well please. The rhetoric in cyberspace is liberation-speak. The reality is that cyberspace is an increasingly efficient tool of surveillance with which people have a voluntary relationship. 




Proponents of so-called cyber-communities rarely emphasize the economic, business-minded nature of the community: many cyber-communities are businesses that rely upon the commodification of human interaction. They market their businesses by appeal to hysterical identification and fetishism no more or less than the corporations that brought us the two hundred dollar athletic shoe. Proponents of cyber-community do not often mention that these conferencing systems are rarely culturally or ethnically diverse, although they are quick to embrace the idea of cultural and ethnic diversity. They rarely address the whitebread demographics of cyberspace except when these demographics conflict with the upward-mobility concerns of white, middle class females under the rubric of orthodox academic Feminism.

Link: Twitter: First Thought, Worst Thought

It’s fascinating and horrifying to observe the spectacles of humiliation generated by social media.

One of the strange and slightly creepy pleasures that I get from using Twitter is observing, in real time, the disappearance of words from my stream as they are deleted by their regretful authors. It’s a rare and fleeting sight, this emergency recall of language, and I find it touching, as though the person had reached out to pluck his words from the air before they could set about doing their disastrous work in the world, making their author seem boring or unfunny or ignorant or glib or stupid. And whenever this happens, I find myself wanting to know what caused this sudden reversal. What were the tweet’s defects? Was it a simple typo? Was there some fatal miscalculation of humor or analysis? Was it a clumsily calibrated subtweet? What, in other words, was the proximity to disaster? I, too, have deleted the occasional tweet; I know the sudden chill of having said something misjudged or stupid, the panicked fumble to strike it from the official record of utterance, and the furtive hope that nobody had time to read it.

Any act of writing creates conditions for the author’s possible mortification. There is, I think, a trace of shame in the very enterprise of tweeting, a certain low-level ignominy to asking a question that receives no response, to offering up a witticism that fails to make its way in the world, that never receives the blessing of being retweeted or favorited. The stupidity and triviality of this worsens, rather than alleviates, the shame, adding to the experience a kind of second-order shame: a shame about the shame. My point, I suppose, is that the possibility of embarrassment is ever-present with Twitter—it inheres in the form itself unless you’re the kind of charmed (or cursed) soul for whom embarrassment is never a possibility to begin with.

It’s fascinating and horrifying to observe the spectacles of humiliation generated by social media at seemingly decreasing intervals, to witness the speed and efficiency with which individuals are isolated and subjected to mass paroxysms of ridicule and condemnation. You may remember that moment, way back in the dying days of 2013, when, in the minutes before boarding a flight to South Africa, a P.R. executive named Justine Sacco tweeted “Going to Africa. Hope I don’t get AIDS. Just kidding! I’m white.” In the twelve hours that she spent en route to Cape Town, aloft and offline, she became the unknowing subject of a kind of ruinous flash-fame: her tweet was posted on Gawker and went viral, drawing the anger and derision of thousands of people who knew only two things about her: that she was the author of this twelve-word disaster of misfired irony and that she was the director of corporate communications for the massive media conglomerate I.A.C. There was a barrage of violent misogyny, terrible in its blunt force and grim inevitability. Somebody sourced Sacco’s flight details, at which point the hashtag #HasJustineLandedYet started doing a brisk trade on Twitter. Somebody else took it upon himself to interview her father at the airport and post the details to Twitter, for the instruction and delight of the hashtag’s followers. The New York Times covered the story. Sacco touched down in Cape Town oblivious to the various ways, bizarre and very real, in which her life had changed. She was, in the end, swiftly and publicly fired.

This was not a celebrity or a politician tweeting something racist or offensive; Sacco was unknown, so this was not a case of a public reputation set off course by a single revealing misstep. This misstep was her public reputation. She will likely be remembered as “that P.R. person who tweeted that awful racist joke that time”; her identity will always be tethered to those four smugly telegraphic sentences, to the memory of how they provided a lightning rod for an electrical storm of anger about heedless white privilege and ignorant racial assumptions. Whether she was displaying these qualities or making a botched attempt at a self-reflexive joke about them—an interpretation which, intentional fallacy be damned, I find pretty plausible—didn’t, in the end, have much bearing on the affair. She became a symbol of everything that is ugly and wrong about the way white people think and don’t think about people of color, about the way the privileged of the planet think and don’t think about the poor. As Roxane Gay put it in an essay on her ambivalence about the public shaming of Sacco: “The world is full of unanswered injustice and more often than not we choke on it. When you consider everything we have to fight, it makes sense that so many people rally around something like the hashtag #HasJustineLandedYet. In this one small way, we are, for a moment, less impotent.”

As Sacco’s flight made its way south, over the heads of the people in whose name the Internet had decided she should be punished, I found myself trying to imagine what she might have been thinking. It was likely, of course, that the tweet wasn’t on her mind at all, that she was thinking about meeting her family at the arrivals lounge in Cape Town, looking forward to the Christmas holiday she was going to spend with them. But then I began imagining that she might, after all, have been thinking of her last tweet, maybe even having second thoughts about it. As early as her takeoff from Heathrow, perhaps, right as the plane broke through the surface of network signals, leaving behind the possibility of tweet-deletion, she may have realized how people would react to her joke, that it might be taken as a reflection of her own corruption or stupidity or malice. By that point, it would have been too late to do anything about it, too late to pluck her words from the air.

And, of course, I wasn’t really imagining Justine Sacco, of whom I knew and still know next to nothing but, rather, myself in her situation: the gathering panic I would feel if it had been me up there, running through the possible interpretations of the awful joke I’d just made and could not unmake—the various things, true and false, it could be taken to reveal about me.

In his strange and unsettling book “Humiliation,” the poet and essayist Wayne Koestenbaum writes about the way in which public humiliation “excites” his empathy. “By imagining what they feel, or might feel,” he writes, “I learn something about what I already feel, what I, as a human being, was born sensing: that we all live on the edge of humiliation, in danger of being deported to that unkind country.” Justine Sacco is a deportee now; I’m trying to imagine what it must be like for her there in that unkind country, those twelve words repeating themselves mindlessly over and over again in her head, how the phrase “Just kidding!”—J.K.! J.K.!—must by now have lost all meaning or have taken on a whole new significance. In this mode of trial and punishment, I sometimes think of social media as being like the terrible apparatus at the center of Kafka’s “In the Penal Colony”: a mechanism of corrective torture, harrowing the letters of the transgression into the bodies of the condemned.

The weird randomness of this sudden mutation of person into meme is, in the end, what’s so haunting. This could just as well have happened to anyone—any of the thousands of people who say awful things on Twitter every day. It’s not that Sacco didn’t deserve to be taken to task, to be scorned for the clumsiness and hurtfulness of her joke; it’s that the corrective was so radically comprehensive and obliterating, and administered with such collective righteous giddiness. This is a new form of violence, a symbolic ritual of erasure where the condemned is made to stand for a whole class of person—to be cast, as an effigy of the world’s general awfulness, into a sudden abyss of fame.

Link: Now is Not Forever: The Ancient Recent Past

Sometimes the Internet surprises us with the past or, to be more precise, its own past. The other day my social media feed started to show the same clip over and over. It was one I had seen years before and forgotten about, back from the bottom of that overwhelming ocean of content available to us at any given moment. Why was it reappearing now, I wondered?

That’s a hard question to answer under any circumstances. My teenage daughter regularly shows me Internet discoveries that date from the mid-2000s. To her, they are fresh; to me, a reminder of just how difficult it is to predict what the storms of the information age will turn up. In the case of the clip I started seeing again the other day, however, the reemergence seemed less than random.

It’s a two-minute feature from a San Francisco television station about the electronic future of journalism, but from way back in 1981, long before the Internet as we know it came into focus. While there is a wide range of film and television from that era readily accessible to us, much of which can be consumed without being struck dumb by its datedness — Scarface or the first Star Wars trilogy, to name two obvious examples — its surviving news broadcasts seem uncanny. Factor in the subject matter of this one, predicting a future that already feels past to us, and the effect is greatly enhanced.

The more I kept seeing this clip in my feed, though, the clearer it became that its uncanniness didn’t just derive from the original feature’s depiction of primitive modems and computer monitors — and a Lady Di hairstyle — but also from the fact that it had returned from the depths of the Internet to remind us, once more, that we did see this world coming.

The information age is doing strange things to our sense of history. If you drive in the United States, particularly in warm-weather places like California or Florida, you won’t have to look too hard to see cars from the 1980s still on the road. But a computer from that era seems truly ancient, as out of sync with our own times as a horse and buggy.

Stranger still is the feeling of datedness that pervades the Internet’s own history. For someone my daughter’s age, imagining life before YouTube is as unsettling a prospect as imagining life before indoor plumbing. And yet, even though she was only seven when the site debuted, she was already familiar with the Internet before then.

But it isn’t just young people who feel cut off from the Internet that existed prior to contemporary social media. Even though I can go on the Wayback Machine to check out sites I was visiting in the 1990s; even though I contributed to one of the first Internet publications, Bad Subjects: Political Education For Everyday Life, and can still access its content with ease; even though I know firsthand what it was like before broadband, when I would wait minutes for a single news story to load, my memories still seem to fail me. I remember, but dimly. I can recall experiences from pre-school in vivid detail, yet struggle to flesh out my Internet past from a decade ago, before I started using Gmail.

What the clip that resurfaced the other day makes clear is that history is more subjective than ever. Some parts seem to be moving at more or less the same pace that they did decades or even centuries ago. But others, particularly those that focus on computer technology, appear to be moving ten or even a hundred times as fast. If you don’t believe me, try picking up the mobile phone you used in 2008.

When he was working on the Passagenwerk, his sprawling project centered on nineteenth-century Parisian shopping arcades, Walter Benjamin made special note of how outdated those proto-malls seemed, less than a century after they had first appeared. These days, the depths of the Internet are full of such places, dormant pages that unnerve us with their “ancient” character, even though they are less than a decade old.

As Mark Fisher brilliantly explains in his book Capitalist Realism, we live at a time when it is easier to imagine the end of the world than the end of capitalism. But there are plenty of people who have just as much difficulty imagining the end of Facebook, even though some of them were on MySpace and Friendster before it. That’s what makes evidence like the clip I’ve been discussing here so important. We need to be reminded that we are capable of living different lives, that we have, in fact, already lived them, so that we can turn our attention to living the lives we actually want to lead.

Link: Jacques Ellul on Technology

Transcript of The Betrayal by Technology: A Portrait of Jacques Ellul by Jan van Boeckel and Karin van der Molen

1. One of my best friends is a very competent… was a very competent surgeon. During a discussion in which he participated, about the problems of technology and progress, someone said to him: “You, as a surgeon, surely know everything about the progress in surgery?”

He gave a humorous reply, as always: “I am certainly aware of the progress in the medical field. But just ask yourself the following question: currently, we carry out heart transplants, liver transplants and kidney transplants. But where do those kidneys, that heart and those lungs come from, in fact? They must be healthy organs. Not affected by an illness or the like. Moreover, they must be fresh. In fact, there is just one source: traffic accidents. So, to carry out more operations, we need more traffic accidents. If we make traffic safer, fewer of those wonderful operations will be carried out.”

Of course, everyone was rather astonished and also somewhat shocked. It was very humorous, but it was also a real question.


2. Human technology is created from the moment that it is felt that people are unhappy. City dwellers, for example, live in a completely dead environment. Cities consist of brick, cement, concrete, and so on. People cannot be happy in such an environment. So they suffer psychological problems. Mainly as a result of their social climate but also as a result of the speed at which they are forced to live. Yet man is specifically suited for living amidst nature. So man becomes mentally ill. And for the relief of those psychological illnesses there is human technology, just as there is medical technology. But human technology must enable man to live in an unnatural environment. As in the case of deep sea diving. Divers have a deep sea diving suit and oxygen cylinders in order to survive in an abnormal environment. Human technology is just like that.

I know many people who like watching commercials because they’re so funny. They provide relaxation and diversion. People come home after a day’s work, from which they derive little satisfaction, and feel the need for diversion and amusement. The word diversion itself is already very significant. When Pascal uses the word diversion he means that people who follow the path of God deviate from the path which leads them to God as a result of diversion and amusement. Instead of thinking of God, they amuse themselves. So, instead of thinking about the problems which have been created by technology and our work we want to amuse ourselves. And that amusement is supplied to us by means of technology. But by means of technology which derives from human technology. For example, in a work situation people are offered the diversion which must serve as compensation. 

The media era is also the era of loneliness. That’s a very important fact. We can also see that in the young. In 1953 you had the so-called “rebels without a cause”: students who revolted in Stockholm. That was the first revolt of the young rebels without a cause. They had everything. They were happy. They lived in a nice society. They lacked nothing. And suddenly, on New Year’s Eve, they took to the streets and destroyed everything. No one could understand it. But they needed something different from consumption and technology.

If people lose their motive for living, two things can happen. It only seldom happens that they can accept that fact; in that case, they develop suicidal tendencies. Usually, they either try to find refuge in diversion (we’ve already discussed this), or they become depressed and begin swallowing medicines. So if people become aware of their situation they react to it as usually happens in Western society: they become depressed and discouraged. So they just don’t think about their situation and simply carry on. They drive faster and faster. Never mind where, as long as it’s fast.


3. One of the illusions which some try to put across to people today is to get them to believe that technology makes them more free. If you just use enough technical aids you will be freer. Free to do what? Free to eat nice things. That’s true, if you have money, that is. Free to buy a car so that you can travel. You can go all the way to the other side of the world. To Tahiti. So you see: technology brings freedom. We can acquire knowledge in the whole world. That’s fantastic. So a world of freedom is open to us. Just to give a small example in connection with the use of cars: as soon as the holidays begin, three million Parisians decide independently of one another to head for the Mediterranean in their cars. Three million people all decide to do the same thing. So then I ask myself if the car really brings us much freedom. Those people haven’t given it a moment’s thought that they are, in fact, completely determined by technology and the life they lead. That, in fact, they form a mass. A coherent whole.


4. In a society such as ours, it is almost impossible for a person to be responsible. A simple example: a dam has been built somewhere, and it bursts. Who is responsible for that? Geologists worked on it: they examined the terrain. Engineers drew up the construction plans. Workmen constructed it. And the politicians decided that the dam had to be in that spot. Who is responsible? No one. There is never anyone responsible. Anywhere. In the whole of our technological society the work is so fragmented and broken up into small pieces that no one is responsible. But no one is free either. Everyone has his own, specific task. And that’s all he has to do.

Just consider, for example, that atrocious excuse… It was one of the most horrible things I have ever heard. The person in charge of the concentration camp Bergen-Belsen was asked, during the Auschwitz trial… the Nuremberg trials regarding Auschwitz and Bergen-Belsen: “But didn’t you find it horrible? All those corpses?” He replied: “What could I do? The capacity of the ovens was too small. I couldn’t process all those corpses. It caused me many problems. I had no time to think about those people. I was too busy with that technical problem of my ovens.” That was the classic example of an irresponsible person. He carries out his technical task; he’s not interested in anything else.


5. What is sacred in one society is not always sacred in another. But people have always respected sacred matters. And if there was a force which destroyed those sacred matters, those elements regarded as sacred in a certain society, then this new force was revered and respected by the people. For it was clearly stronger. So there was a new thing that was more sacred than the old one.

What is now so awful in our society is that technology has destroyed everything which people ever considered sacred. For example, nature. People have voluntarily moved to an acceptance of technology as something sacred. That is really awful. In the past, the sacred things always derived from nature. Currently, nature has been completely desecrated and we consider technology as something sacred. Think, for example, of the fuss whenever a demonstration is held. Everyone is then always very shocked if a car is set on fire. For then a sacred object is destroyed.


6. That is one of the basic rules of technology. Without a doubt. Every technological step forward has its price. Human happiness has its price. We must always ask ourselves what price we have to pay for something. We only have to consider the following example. When Hitler came to power everyone considered the Germans mad. Nearly all the Germans supported him. Of course. He brought an end to unemployment. He improved the position of the mark. He created a surge in economic growth. How can a badly informed population, seeing all these economic miracles, be against him? They only had to ask the question: What will it cost us? What price do we have to pay for this economic progress, for the strong position of the mark and for employment? What will that cost us? Then they would have realized that the cost would be very high. But this is typical for modern society. Yet this question will always be asked in traditional societies. In such societies people ask: If by doing this I disturb the order of things, what will be the cost for me?

Wisdom does not come from intellectual reflection. It is achieved in a long process of transfer from generation to generation. It is an accumulation of experiences in direct relationship with the natural social climate. Nature served as an example for us. We must divest ourselves of all that. For in a technological society traditional human wisdom is not taken seriously.


8. Technology also obliges us to live more and more quickly. Inner reflection is replaced by reflex. Reflection means that, after I have undergone an experience, I think about that experience. In the case of a reflex you know immediately what you must do in a certain situation. Without thinking. Technology requires us no longer to think about things. If you are driving a car at 150 kilometers an hour and you think, you’ll have an accident. Everything depends on reflexes. The only thing technology requires of us is: Don’t think about it. Use your reflexes.


8. Technology will not tolerate any judgment being passed on it. Or rather: technologists do not easily tolerate people expressing an ethical or moral judgment on what they do. But the expression of ethical, moral and spiritual judgments is actually the highest freedom of mankind. So I am robbed of my highest freedom. So whatever I say about technology and the technologists themselves is of no importance to them. It won’t deter them from what they are doing. They are now set in their course. They are so conditioned. For a technologist is not free. He is conditioned. By his training, by his experiences and by the objective which he must reach. He is not free in the execution of his task. He does what technology demands of him. That’s why I think freedom and technology contradict one another.


9. Because of our technology, we now have a world in which the situation of mankind has totally changed. What I mean by that is: mankind in the technological world is prepared to give up his independence in exchange for all kinds of facilities and in exchange for consumer products and a certain security. In short, in exchange for a package of welfare provisions offered to him by society. As I was thinking about that I couldn’t help recalling the story in the Bible about Esau and the lentil broth. Esau, who is hungry, is prepared to give up the blessings and promise of God in exchange for some lentil broth. In the same way, modern people are prepared to give up their independence in exchange for some technological lentils. The point is simply that Esau made an extremely unfavorable exchange and that the person who gives up his position of independence lets himself be badly duped too, by the technological society. It boils down to the fact that he gives up his independence in exchange for a number of lies. He doesn’t realize that he is manipulated in his choice. That he is changed internally by advertisements, by the media and so on. And when you think that the manipulator, the author of advertisements or propaganda is himself manipulated, then you cannot point to one culprit as being responsible. It is neither the advertiser nor his poor public. We are all responsible, to the same extent.


10. Right from the start I have often been sharply criticized in the United States, for example, for allegedly being a Calvinist. And a Calvinist is pessimistic, and so on. But I’m not a Calvinist at all. They haven’t understood anything of my theology, but it doesn’t matter.

But what does matter is that pessimism in a society such as ours can only lead to suicide. That’s why you must be optimistic. You must spend your holiday in Disneyland. Then you are a real optimist. With all that you see there you no longer have to think about anything else. In other words, those who accuse me of pessimism are in fact saying to me: You prevent people from being able to sleep peacefully. So if you let everything take its course, never interfere, and just go to sleep peacefully, all will end well.

I would certainly not want my words to be too pessimistic and too inaccessible. And I would like to explain that people are still people a bit—notice I say a bit—and they still have human needs; and they can still feel love and pity, and feelings of friendship.

The question now is whether people are prepared or not to realize that they are dominated by technology. And to realize that technology oppresses them, forces them to undertake certain obligations and conditions them. Their freedom begins when they become conscious of these things. For when we become conscious of that which determines our life we attain the highest degree of freedom. I must make sure that I can analyze it just as I can analyze a stone or any other object, that I can analyze it and fathom it from all angles. As soon as I can break down this whole technological system into its smallest components my freedom begins. But I also know that, at the same time, I’m dominated by technology. So I don’t say, “I’m so strong that technology has no hold on me”. Of course technology has hold on me. I know that very well. Just take… a telephone, for example, which I use all the time. I’m continually benefiting from technology.

So we can ask ourselves whether there is really any sense in all this to be investigated. But the search for it cannot be a strictly intellectual activity. The search for sense implies that we must have a radical discussion of modern life. In order to rediscover a sense, we must discuss everything which has no sense. We are surrounded by objects which are, it is true, efficient but are absolutely pointless. A work of art, on the other hand, has sense in various ways or it calls up in me a feeling or an emotion whereby my life acquires sense. That is not the case with a technological product.

And on the other hand we have the obligation to rediscover certain fundamental truths which have disappeared because of technology. We can also call these truths values – important, actual values which ensure that people experience their lives as having sense. In other words, as soon as the moment arrives when I think that the situation is really dangerous, I can’t do any more with purely technological means. Then I must employ all my human and intellectual capacities and all my relationships with others to create a counterbalance. That means that when I think that a disaster threatens and that developments threaten to lead to a destiny for mankind, as I wrote concerning the development of technology, I, as a member of mankind, must resist and must refuse to accept that destiny. And at that moment we end up doing what mankind has always done at a moment when destiny threatens. Just think of all those Greek tragedies in which mankind stands up against destiny and says: No, I want mankind to survive; and I want freedom to survive.

At such a moment, you must continue to cherish hope, but not the hope that you will achieve a quick victory and even less the hope that we face an easy struggle. We must be convinced that we will carry on fulfilling our role as people. In fact, it is not an insuperable situation. There is no destiny that we cannot overcome. You must simply have valid reasons for joining in the struggle. You need a strong conviction. You must really want people to remain, ultimately, people. 

This struggle against the destiny of technology has been undertaken by us by means of small scale actions. We must continue with small groups of people who know one another. It will not be any big mass of people or any big unions or big political parties who will manage to stop this development. 

What I have just said doesn’t sound very efficient, of course. When we oppose things which are too efficient we mustn’t try to be even more efficient. For that will not turn out to be the most efficient way. 

But we must continue to hope that mankind will not die out and will go on passing on truths from generation to generation.

Link: Neil Postman on Cyberspace (1995)

Author and media scholar Neil Postman, chair of the Department of Culture and Communication at New York University, encourages caution when entering cyberspace. His book Technopoly: The Surrender of Culture to Technology puts the computer in historical perspective.

Neil Postman, thank you for joining us. How do you define cyberspace?

Cyberspace is a metaphorical idea which is supposed to be the space where your consciousness is located when you’re using computer technology on the Internet, for example, and I’m not entirely sure it’s such a useful term, but I think that’s what most people mean by it.

How does that strike you, I mean, that your consciousness is located somewhere other than in your body?

Well, the most interesting thing about the term for me is that it made me begin to think about where one’s consciousness is when interacting with other kinds of media, for example, even when you’re reading, where, where are you, what is the space in which your consciousness is located, and when you’re watching television, where, where are you, who are you, because people say with the Internet, for example, it’s a little different in that you’re always interacting or most of the time with another person. And when you’re in cyberspace, I suppose you can be anyone you want, and I think as this program indicates, it’s worth, it’s worth talking about because this is a new idea and something very different from face-to-face co-presence with another human being.

Do you think this is a good thing, or a bad thing, or you haven’t decided?

Well, no, I’ve mostly—(laughing)—I’ve mostly decided that new technology of this kind or any other kind is a kind of Faustian bargain. It always gives us something important but it also takes away something that’s important. That’s been true of the alphabet and the printing press and telegraphy right up through the computer. For instance, when I hear people talk about the information superhighway, it will become possible to shop at home and bank at home and get your texts at home and get entertainment at home and so on, I often wonder if this doesn’t signify the end of any meaningful community life. I mean, when two human beings get together, they’re co-present, there is built into it a certain responsibility we have for each other, and when people are co-present in family relationships and other relationships, that responsibility is there. You can’t just turn off a person. On the Internet, you can. And I wonder if this doesn’t diminish that built-in, human sense of responsibility we have for each other. Then also one wonders about social skills; that after all, talking to someone on the Internet is a different proposition from being in the same room with someone—not in terms of responsibility but just in terms of revealing who you are and discovering who the other person is. As a matter of fact, I’m one of the few people not only that you’re likely to interview but maybe ever meet who is opposed to the use of personal computers in school because school, it seems to me, has always largely been about how to learn as part of a group. School has never really been about individualized learning but about how to be socialized as a citizen and as a human being, so that we, we have important rules in school, always emphasizing the fact that one is part of a group. And I worry about the personal computer because it seems, once again to emphasize individualized learning, individualized activity.

What images come to your mind when you, when you think about what our lives will be like in cyberspace?

Well, the, the worst images are of people who are overloaded with information which they don’t know what to do with, have no sense of what is relevant and what is irrelevant, people who become information junkies.

What do you mean? How do you mean that?

Well, the problem in the 19th century with information was that we lived in a culture of information scarcity, and so humanity addressed that problem beginning with photography and telegraphy in the 1840s. We tried to solve the problem of overcoming the limitations of space, time, and form. And for about a hundred years, we worked on this problem, and we solved it in a spectacular way. And now, by solving that problem, we created a new problem that people have never experienced before: information glut, information meaninglessness, information incoherence. I mean, if there are children starving in Somalia or any other place, it’s not because of insufficient information. And if crime is rampant in the streets in New York and Detroit and Chicago or wherever, it’s not because of insufficient information. And if people are getting divorced and mistreating their children, and sexism and racism are blights on our social life, none of that has anything to do with inadequate information. Now, along comes cyberspace and the information superhighway, and everyone seems to have the idea that, ah, here we can do it; if only we can have more access to more information faster and in more diverse forms, at long last we’ll be able to solve these problems. And I don’t think it has anything to do with it.

Do you believe that this–that the fact that people are more connected globally will lead to a greater degree of homogenization of the global society?

Here’s the puzzle about that, Charlayne. When everyone was–when McLuhan talked about the world becoming a global village and, and when people ask, as you did, about how connections can be made, everyone seemed to think that the world would become in, in some good sense more homogenous. But we seem to be experiencing the opposite. I mean, all over the world, we see a kind of reversion to tribalism. People are going back to their tribal roots in order to find a sense of identity. I mean, we see it in Russia, in Yugoslavia, in Canada, in the United States, I mean, in our own country. Why is it that every group now not only is more aware of its own grievances but seems to want its own education? You know, we want an Afro-centric curriculum and a Korean-centric curriculum, and a Greek-centered curriculum. What is it about all this globalization of communication that is making people return to more–to smaller units of identity? It’s a puzzlement.

Well, what do you think the people, society should be doing to try and anticipate these negatives and be able to do something about them?

I think they should–everyone should be sensitive to certain questions. For example, when a new–confronted with a new technology, whether it’s a cellular phone or high definition television or cyberspace or Internet, the question–one question should be: What is the problem to which this technology is a solution? And the second question would be: Whose problem is it actually? And the third question would be: If there is a legitimate problem here that is solved by the technology, what other problems will be created by my using this technology? About six months ago, I bought a new Honda Accord, and the salesman told me that it had cruise control. And I asked him, “What is the problem to which cruise control is the solution?” By the way, there’s an extra charge for cruise control. And he said no one had ever asked him that before but then he said, “Well, it’s the problem of keeping your foot on the gas.” And I said, “Well, I’ve been driving for 35 years. I’ve never found that to be a problem.” I mean, am I using this technology, or is it using me, because in a technological culture, it is very easy to be swept up in the enthusiasm for technology, and of course, all the technophiles around, all the people who adore technology and are promoting it everywhere you turn.

Well, Neil Postman, thank you for all of your cautions.

Link: The Disconnectionists

“Unplugging” from the Internet isn’t about restoring the self so much as it is about stifling the desire for autonomy that technology can inspire.

Once upon a pre-digital era, there existed a golden age of personal authenticity, a time before social-media profiles when we were more true to ourselves, when the sense of who we are was held firmly together by geographic space, physical reality, the visceral actuality of flesh. Without Klout-like metrics quantifying our worth, identity did not have to be oriented toward seeming successful or scheming for attention.

According to this popular fairytale, the Internet arrived and real conversation, interaction, identity slowly came to be displaced by the allure of the virtual — the simulated second life that uproots and disembodies the authentic self in favor of digital status-posturing, empty interaction, and addictive connection. This is supposedly the world we live in now, as a recent spate of popular books, essays, wellness guides, and viral content suggest. Yet they have hope: By casting off the virtual and re-embracing the tangible through disconnecting and undertaking a purifying “digital detox,” one can reconnect with the real, the meaningful — one’s true self that rejects social media’s seductive velvet cage.

That retelling may be a bit hyperbolic, but the cultural preoccupation is inescapable. How and when one looks at a glowing screen has generated its own pervasive popular discourse, with buzzwords like digital detox, disconnection, and unplugging to address profound concerns over who is still human, who is having true experiences, what is even “real” at all. A few examples: In 2013, Paul Miller of tech-news website The Verge and Baratunde Thurston, a Fast Company columnist, undertook highly publicized breaks from the Web that they described in intimate detail (and ultimately posted on the Web). Videos like “I Forgot My Phone” that depict smartphone users as mindless zombies missing out on reality have gone viral, and countless editorial writers feel compelled to moralize broadly about the minutiae of when one checks their phone. But what they are saying may matter less than the fact that they feel required to say it. As Diane Lewis states in an essay for Flow, an online journal about new media,

The question of who adjudicates the distinction between fantasy and reality, and how, is perhaps at the crux of moral panics over immoderate media consumption.

It is worth asking why these self-appointed judges have emerged, why this moral preoccupation with immoderate digital connection is so popular, and how this mode of connection came to demand such assessment and confession, at such great length and detail. This concern-and-confess genre frames digital connection as something personally debasing and socially unnatural, despite the rapidity with which it has been adopted. It’s depicted as a dangerous desire, an unhealthy pleasure, an addictive toxin to be regulated and medicated. That we’d be concerned with how to best use (or not use) a phone or a social service or any new technological development is of course to be expected, but the way the concern with digital connection has manifested itself in such profoundly heavy-handed ways suggests that, in the aggregate, something more significant is happening, to make so many of us feel as though our integrity as humans has suddenly been placed at risk.

+++

The conflict between the self as social performance and the self as authentic expression of one’s inner truth has roots much deeper than social media. It has been a concern of much theorizing about modernity and, if you agree with these theories, a mostly unspoken preoccupation throughout modern culture.

Whether it’s Max Weber on rationalization, Walter Benjamin on aura, Jacques Ellul on technique, Jean Baudrillard on simulations, or Zygmunt Bauman and the Frankfurt School on modernity and the Enlightenment, there has been a long tradition of social theory linking the consequences of altering the “natural” world in the name of convenience, efficiency, comfort, and safety to draining reality of its truth or essence. We are increasingly asked to make various “bargains with modernity” (to use Anthony Giddens’s phrase) when encountering and depending on technologies we can’t fully comprehend. The globalization of countless cultural dispositions has replaced the pre-modern experience of cultural order with an anomic, driftless lack of understanding, as described by such classical sociologists as Émile Durkheim and Georg Simmel and in more contemporary accounts by David Riesman (The Lonely Crowd), Robert Putnam (Bowling Alone), and Sherry Turkle (Alone Together).

I drop all these names merely to suggest the depth of modern concern over technology replacing the real with something unnatural, the death of absolute truth, of God. This is especially the case in identity theory, much of which is founded on the tension between seeing the self as having some essential soul-like essence versus its being a product of social construction and scripted performance. From Martin Heidegger’s “they-self,” Charles Horton Cooley’s “looking glass self,” George Herbert Mead’s discussion of the “I” and the “me,”  Erving Goffman’s dramaturgical framework of self-presentation on the “front stage,” Michel Foucault’s “arts of existence,” to Judith Butler’s discussion of identity “performativity,” theories of the self and identity have long recognized the tension between the real and the pose. While so often attributed to social media, such status-posturing performance — “success theater” — is fundamental to the existence of identity.

These theories also share an understanding that people in Western society are generally uncomfortable admitting that who they are might be partly, or perhaps deeply, structured and performed. To be a “poser” is an insult; instead common wisdom is “be true to yourself,” which assumes there is a truth of your self. Digital-austerity discourse has tapped into this deep, subconscious modern tension, and brings to it the false hope that unplugging can bring catharsis.

The disconnectionists see the Internet as having normalized, perhaps even enforced, an unprecedented repression of the authentic self in favor of calculated avatar performance. If we could only pull ourselves away from screens and stop trading the real for the simulated, we would reconnect with our deeper truth. In describing his year away from the Internet, Paul Miller writes,

‘Real life,’ perhaps, was waiting for me on the other side of the web browser … It seemed then, in those first few months, that my hypothesis was right. The internet had held me back from my true self, the better Paul. I had pulled the plug and found the light.

Baratunde Thurston writes,

my first week sans social media was deeply, happily, and personally social […] I bought a new pair of glasses and shared my new face with the real people I spent time with.

Such rhetoric is common. Op-eds, magazine articles, news programs, and everyday discussion frame logging off as reclaiming real social interaction with your real self and other real people. The R in IRL. When the digital is misunderstood as exclusively “virtual,” then pushing back against the ubiquity of connection feels like a courageous re-embarking into the wilderness of reality. When identity performance can be regarded as a by-product of social media, then we have a new solution to the old problem of authenticity: just quit. Unplug — your humanity is at stake! Click-bait and self-congratulation in one logical flaw.

The degree to which inauthenticity seems a new, technological problem is the degree to which I can sell you an easy solution. Reducing the complexity of authenticity to something as simple as one’s degree of digital connection affords a solution the self-help industry can sell. Researcher Laura Portwood-Stacer describes this as that old “neoliberal responsibilization we’ve seen in so many other areas of ‘ethical consumption,’ ” turning social problems into personal ones with market solutions and fancy packaging.

Social media surely change identity performance. For one, they make the process more explicit. The fate of having to live “onstage,” aware of being an object in others’ eyes rather than a special snowflake of spontaneous, uncalculated bursts of essential essence, is more obvious than ever — even perhaps for those already highly conscious of such objectification. But that shouldn’t blind us to the fact that identity theater is older than Zuckerberg and doesn’t end when you log off. The most obvious problem with grasping at authenticity is that you’ll never catch it, which makes the social media confessional both inevitable and its own kind of predictable performance.

To his credit, Miller came to recognize by the end of his year away from the Internet that digital abstinence made him no more real than he always had been. Despite his great ascetic effort, he could not reach escape velocity from the Internet. Instead he found an “inextricable link” between life online and off, between flesh and data, imploding these digital dualisms into a new starting point that recognizes one is never entirely connected or disconnected but deeply both. Calling the digital performed and virtual to shore up the perceived reality of what is “offline” is one more strategy to renew the reification of old social categories like the self, gender, sexuality, race and other fictions made concrete. The more we argue that digital connection threatens the self, the more durable the concept of the self becomes.

+++

The obsession with authenticity has at its root a desire to delineate the “normal” and enforce a form of “healthy” founded in supposed truth. As such, it should be no surprise that digital-austerity discourse grows a thin layer of medical pathologization. That is, digital connection has become an illness. Not only has the American Psychiatric Association looked into making “Internet-use disorder” a DSM-official condition, but more influentially, the disconnectionists have framed unplugging as a health issue, touting the so-called digital detox. For example, so far in 2013, The Huffington Post has run 25 articles tagged with “digital detox,” including “The Amazing Discovery I Made When My Phone Died,” “How a Weekly Digital Detox Changed My Life,” “Why We’re So Hooked on Technology (And How to Unplug).” A Los Angeles Times article explored whether the presence of digital devices “contaminates the purity” of Burning Man. Digital detox has even been added to the Oxford Dictionary Online. Most famous, due to significant press coverage, is Camp Grounded, which bills itself as a “digital detox tech-free personal wellness retreat.” Atlantic senior editor Alexis Madrigal has called it “a pure distillation of post-modern technoanxiety.” On its grounds the camp bans not just electronic devices but also real names, real ages, and any talk about one’s work. Instead, the camp has laughing contests.

The wellness framework inherently pathologizes digital connection as contamination, something one must confess, carefully manage, or purify away entirely. Remembering Michel Foucault’s point that diagnosing what is ill is always equally about enforcing what is healthy, we might ask what new flavor of normal is being constructed by designating certain kinds of digital connection as a sickness. Similar to madness, delinquency, sexuality, or any of the other areas whose pathologizing toward normalization Foucault traced, digitality — what is “online,” and how should one appropriately engage that distinction — has become a productive concept around which to organize the control and management of new desires and pleasures. The desire to be heard, seen, informed via digital connection in all its pleasurable and distressing, dangerous and exciting ways comes to be framed as unhealthy, requiring internal and external policing. Both the real/virtual and toxic/healthy dichotomies of digital austerity discourse point toward a new type of organization and regulation of pleasure, a new imposition of personal techno-responsibility, especially on those who lack autonomy over how and when to use technology. It’s no accident that the focus in the viral “I Forgot My Phone” video wasn’t on the many people distracted by seductive digital information but the woman who forgets her phone, who is “free” to experience life — the healthy one is the object of control, not the zombies bitten by digitality.

The smartphone is a machine, but it is still deeply part of a network of blood; an embodied, intimate, fleshy portal that penetrates into one’s mind, into endless information, into other people. These stimulation machines produce a dense nexus of desires that is inherently threatening. Desire and pleasure always contain some possibility (a possibility — it’s by no means automatic or even likely) of disrupting the status quo. So there is always much at stake in their control, in attempts to funnel this desire away from progressive ends and toward reinforcing the values that support what already exists. Silicon Valley has made the term “disruption” a joke, but there is little disagreement that the eruption of digitality does create new possibilities, for better or worse. Touting the virtue of austerity puts digital desire to work strictly in maintaining traditional understandings of what is natural, human, real, healthy, normal. The disconnectionists establish a new set of taboos as a way to garner distinction at the expense of others, setting their authentic resistance against others’ unhealthy and inauthentic being.

This explains the abundance of confessions about social media compulsion that intimately detail when and how one connects. Desire can only be regulated if it is spoken about. To neutralize a desire, it must be made into a moral problem we are constantly aware of: Is it okay to look at a screen here? For how long? How bright can it be? How often can I look? Our orientation to digital connection needs to become a minor personal obsession. The true narcissism of social media isn’t self-love but instead our collective preoccupation with regulating these rituals of connectivity. Digital austerity is a police officer downloaded into our heads, making us always self-aware of our personal relationship to digital desire.

Of course, digital devices shouldn’t be excused from the moral order — nothing should or could be. But too often discussions about technology use are conducted in bad faith, particularly when the detoxers and disconnectionists and digital-etiquette-police seem more interested in discussing the trivial differences of when and how one looks at the screen rather than the larger moral quandaries of what one is doing with the screen. But the disconnectionists’ selfie-help has little to do with technology and more to do with enforcing a traditional vision of the natural, healthy, and normal. Disconnect. Take breaks. Unplug all you want. You’ll have different experiences and enjoy them, but you won’t be any more healthy or real.