Sunshine Recorder

Link: The Cultural History of Pain

Speculation about the degree to which human beings and animals experienced pain has a long history.

On 16 April 1872, a woman signing herself “An Earnest Englishwoman” published a letter in the Times. It was entitled “Are Women Animals?”.

She was clearly very angry. Her fury had been fuelled by recent court cases in which a man who had “coolly knocked out” the eye of his mistress and another man who had killed his wife were imprisoned for just a few months each. In contrast, a man who had stolen a watch was punished severely, sentenced to not only seven years’ penal servitude, but also 40 lashes of the “cat”. She noted that although some people might believe that a watch was an “object of greater value than the eye of a mistress or the life of a wife”, she was asking readers to remember that “the inanimate watch does not suffer”. It must cause acute agony for any “living creature, endowed with nerves and muscles, to be blinded or crushed to death”.

Indeed, she continued, she had “read of heavier sentences being inflicted for cruelty towards that – may I venture to say? – lower creation”. She pleaded for women to be subsumed under legislation forbidding cruelty to animals, because that would improve their position in law.

Speculation about the degree to which human beings and animals experienced pain has a long history, but “An Earnest Englishwoman” was writing at a very important time in these debates. Charles Darwin’s Descent of Man had been published the year before her letter, and his Expression of the Emotions in Man and Animals appeared in 1872. Both Darwin and “An Earnest Englishwoman” were addressing a central question that had intrigued theologians, scientists, philosophers, psychologists and other social commentators for centuries: how can we know how other people feel?

The reason this question was so important was that many people didn’t believe that all human beings (let alone non-human animals) were equally capable of suffering. Scientists and philosophers pointed to the existence of a hierarchy of sentience. Belief in a great “Chain of Being”, according to which everything in the universe was ranked from the highest to the lowest, is a fundamental tenet of western philosophy. One aspect of this Chain of Being involved the perception of sensation. There was a parallel great Chain of Feeling, which placed male Europeans at one end and slaves and animals at the other.

Of course, “An Earnest Englishwoman” was using satire to argue for greater rights for women. She was not accusing men of failing to acknowledge that women were capable of experiencing pain. Indeed, that much-maligned group of Victorian women – hysterics – was believed to be exquisitely sensitive to noxious stimuli. Rather, she was drawing attention to the way a lack of respect for the suffering of some people had a profound impact on their status in society. If the suffering of women were treated as seriously as the suffering of animals, she insisted, women’s lives would be better.

Although she does not discuss it in her short letter, the relationship between social status and perceptions of sentience was much more fraught for other groups within British and American societies. In particular, people who had been placed at the “lower” end of the Chain of Feeling paid an extremely high price for prejudices about their “inability” to feel. In many white middle-class and upper-class circles, slaves and “savages”, for instance, were routinely depicted as possessing a limited capacity to experience pain, a biological “fact” that conveniently diminished any culpability among their so-called superiors for acts of abuse inflicted on them. Although the author of Practical Rules for the Management and Medical Treatment of Negro Slaves, in the Sugar Colonies (1811) conceded that “the knife of the anatomist … has never been able to detect” anatomical differences between slaves and their white masters, he nevertheless contended that slaves were better “able to endure, with few expressions of pain, the accidents of nature”. This was providential indeed, given that they were subjected to so many “accidents of nature” while labouring on sugar-cane plantations.

Such beliefs were an important factor in imperial conquests. With voyeuristic curiosity, travellers and explorers often commented on what they regarded as exotic responses to pain by indigenous peoples. In Australia, newly arrived colonisers breathlessly maintained that Native Australians’ “endurance of pain” was “something marvellous”. Others used the theme as an excuse for mockery. For instance, the ability of New Zealand Maoris to bear pain was ascribed to their “vanity”. They were said to be so enamoured with European shoes that “when one of them was happy enough to become the possessor of a pair, and found that they were too small, he would not hesitate to chop off a toe or two, stanch the bleeding by covering the stump with a little hemp, and then force the feet [sic] into the boots”.

But what was it about the non-European body that allegedly rendered it less susceptible to painful stimuli? Racial sciences placed great emphasis on the development and complexity of the brain and nerves. As the author of Pain and Sympathy (1907) concluded, attempting to explain why the “savage” could “bear physical torture without shrinking”: the “higher the life, the keener is the sense of pain”.

There was also speculation that the civilising process itself had rendered European peoples more sensitive to pain. The celebrated American neurologist Silas Weir Mitchell stated in 1892 that in the “process of being civilised we have won … intensified capacity to suffer”. After all, “the savage does not feel pain as we do: nor as we examine the descending scale of life do animals seem to have the acuteness of pain-sense at which we have arrived”.

Some speculated whether the availability of anaesthetics and analgesics had an effect on people’s ability (as well as willingness) to cope with acute affliction. Writing in the 1930s, the distinguished pain surgeon René Leriche argued fervently that Europeans had become more sensitive to pain. Unlike earlier in the century, he claimed, modern patients “would not have allowed us to cut even a centimetre … without administering an anaesthetic”. This was not due to any decline of moral fibre, Leriche added: rather, it was a sign of a “nervous system differently developed, and more sensitive”.

Other physicians and scientists of the 19th and early 20th centuries wanted to complicate the picture by making a distinction between pain perception and pain reaction. But this distinction was used to denigrate “outsider” groups even further. Their alleged insensitivity to pain was proof of their humble status – yet when they did exhibit pain reactions, their sensitivity was called “exaggerated” or “hysterical” and therefore seen as more evidence of their inferiority. Such confused judgements surfaced even in clinical literature that purported to repudiate value judgements. For instance, John Finney was the first president of the American College of Surgeons. In his influential book The Significance and Effect of Pain (1914), he amiably claimed:

It does not always follow that because a patient bears what appears to be a great amount of pain with remarkable fortitude, that that individual is more deserving of credit or shows greater self-control than the one who does not; for it is a well-established fact that pain is not felt to the same degree by all individuals alike.

However, in the very same section, Finney made pejorative statements about people with a low pain threshold (they possessed a “yellow streak”, he said) and insisted that patients capable of bearing pain showed “wonderful fortitude”.

In other words, civilised, white, professional men might be exquisitely sensitive to pain but, through acts of willpower, they were capable of masking their reaction. In contrast, Finney said, the dark-skinned and the uneducated might bear “a great amount of pain with remarkable fortitude” but they did not necessarily deserve credit for it.

It was acknowledged that feeling pain was influenced by emotional and psychological states. The influence of “mental factors” on the perception of pain had been observed for centuries, especially in the context of religious torture. Agitation, ecstasy and ideological fervour were known to diminish (or even eliminate) suffering.

This peculiar aspect of pain had been explored most thoroughly in war. Military lore held that the “high excitement” of combat lessened the pain of being wounded. Even Lucretius described how when

the scythed chariots, reeking with indiscriminate slaughter, suddenly chop off the limbs … such is the quickness of the injury and the eagerness of the man’s mind that he cannot feel the pain; and because his mind is given over to the zest of battle, maimed though he be, he plunges afresh into the fray and the slaughter.

Time and again, military observers have noted how, in the heat of battle, wounded men might not feel even severe wounds. These anecdotal observations were confirmed by a systematic study carried out during the Second World War. The American physician Henry K Beecher served in combat zones on the Venafro and Cassino fronts in Italy. He was struck by how there was no necessary correlation between the seriousness of any specific wound and the men’s expressions of suffering: perhaps, he concluded, the strong emotions aroused in combat were responsible for the absence of acute pain – or the pain might also be alleviated by the knowledge that wartime wounding would release a soldier from an exceedingly dangerous environment.

Beecher’s findings were profoundly influential. As the pain researchers Harold Wolff and Stewart Wolf found in the 1950s, most people perceived pain at roughly similar intensities, but their threshold for reaction varied widely: it “depends in part upon what the sensation means to the individual in the light of his past experiences”.

Away from the battlefield, debates about the relative sensitivity of various people were not merely academic. The seriousness of suffering was calibrated according to such characterisations. Sympathy was rationed unevenly.

Myths about the lower susceptibility of certain patients to painful stimuli justified physicians prescribing fewer and less effective analgesics and anaesthetics. This was demonstrated by the historian Martin Pernick in his work on mid-19th-century hospitals. In A Calculus of Suffering (1985), Pernick showed that one-third of all major limb amputations at the Pennsylvania Hospital between 1853 and 1862 had been done without any anaesthetic, even though it was available. Distinguished surgeons such as Frank Hamilton carried out more than one-sixth of all non-military amputations on fully conscious patients.

This is not simply peculiar to earlier centuries. For instance, the belief that infants were not especially liable to experiencing pain (or that indications of suffering were merely reflexes) was prominent for much of the 20th century and had profound effects on their treatment. Painful procedures were routinely carried out with little, if any, anaesthetic or analgesic. Max Thorek, the author of Modern Surgical Technique (1938), claimed that “often no anaesthetic is required”, when operating on young infants: indeed, “a sucker consisting of a sponge dipped in some sugar water will often suffice to calm the baby”.

As “An Earnest Englishwoman” recognised, beliefs about sentience were linked to ideas of who was considered fully human. Slaves, minority groups, the poor and others in society could also be dispossessed politically, economically and socially on the grounds that they did not feel as much as others. The “Earnest Englishwoman’s” appeal – which drew from a tradition of respect and consideration that lays emphasis on the capacity to suffer – is one that has been echoed by the oppressed and their supporters throughout the centuries.

Link: Nostalgia

Adaptation and elaboration from Svetlana Boym, The Future of Nostalgia, Basic Books, New York 2001.

The word “nostalgia” comes from two Greek roots: νόστος, nóstos (“return home”) and ἄλγος, álgos (“longing”). I would define it as a longing for a home that no longer exists or has never existed. Nostalgia is a sentiment of loss and displacement, but it is also a romance with one’s own phantasy. Nostalgic love can only survive in a long-distance relationship. A cinematic image of nostalgia is a double exposure, or a superimposition of two images—of home and abroad, of past and present, of dream and everyday life. The moment we try to force it into a single image, it breaks the frame or burns the surface.

In spite of its Greek roots, the word “nostalgia” did not originate in ancient Greece. “Nostalgia” is only pseudo-Greek, or nostalgically Greek. The word was coined by the ambitious Swiss student Johannes Hofer in his medical dissertation in 1688. (Hofer also suggested monomania and philopatridomania to describe the same symptoms; luckily, the latter failed to enter common parlance.) It would not occur to us to demand a prescription for nostalgia. Yet in the 17th century, nostalgia was considered to be a curable disease, akin to a severe common cold. Swiss doctors believed that opium, leeches, and a journey to the Swiss Alps would take care of nostalgic symptoms. By the end of the 18th century, doctors discovered that a return home did not always cure the nostalgics—sometimes it killed them (especially when patriotic doctors misdiagnosed tuberculosis as nostalgia). Just as today genetic researchers hope to identify genes coding for medical conditions, social behavior, and even sexual orientation, so the doctors in the 18th and 19th centuries looked for a single cause, for one “pathological bone.” Yet they failed to find the locus of nostalgia in their patient’s mind or body. One doctor claimed that nostalgia was a “hypochondria of the heart,” which thrives on its symptoms. From a treatable sickness, nostalgia turned into an incurable disease. A provincial ailment, a maladie du pays, turned into a disease of the modern age, a mal du siècle.

The nostalgia that interests me here is not merely an individual sickness but a symptom of our age, a historical emotion. Hence I will make three crucial points. First, nostalgia in my diagnosis is not “antimodern.” It is not necessarily opposed to modernity but coeval with it. Nostalgia and progress are like Jekyll and Hyde: doubles and mirror images of one another. Nostalgia is not merely an expression of local longing, but the result of a new understanding of time and space that made the division into “local” and “universal” possible.

Secondly, nostalgia appears to be a longing for a place but is actually a yearning for a different time—the time of our childhood, the slower rhythms of our dreams. In a broader sense, nostalgia is a rebellion against the modern idea of time, the time of history and progress. The nostalgic desires to obliterate history and turn it into private or collective mythology, to revisit time as space, refusing to surrender to the irreversibility of time that plagues the human condition. Hence the “past of nostalgia,” to paraphrase Faulkner, is not “even the past.” It could merely be another time, or slower time. Time out of time, not encumbered by appointment books.

Thirdly, nostalgia, in my view, is not always retrospective; it can be prospective as well. The fantasies of the past determined by the needs of the present have a direct impact on the realities of the future. Considering the future makes us take responsibility for our nostalgic tales. Unlike melancholia, which confines itself to the planes of individual consciousness, nostalgia is about the relationship between individual biography and the biography of groups or nations, between personal and collective memory. While futuristic utopias might be out of fashion, nostalgia itself has a utopian dimension, only it is no longer directed toward the future. Sometimes it is not directed toward the past either, but rather sideways. The nostalgic feels stifled within the conventional confines of time and space.

In fact, there is a tradition of critical reflection on the modern condition that incorporates nostalgia. It can be called “off-modern.” The adverb “off” confuses our sense of direction; it makes us explore side-shadows and back alleys rather than the straight road of progress; it allows us to take a detour from the deterministic narrative of 20th‑century history. Off-modernism offered a critique of both the modern fascination with newness, and the no less modern reinvention of tradition. In the off-modern tradition, reflection and longing, estrangement and affection go together.

Modern nostalgia is paradoxical in the sense that the universality of longing can make us more empathetic toward fellow humans, yet the moment we try to repair “longing” with a particular “belonging”—the apprehension of loss with a rediscovery of identity and especially of a national community and a unique and pure homeland—we often part ways and put an end to mutual understanding. Álgos (longing) is what we share, yet nóstos (the return home) is what divides us. It is the promise to rebuild the ideal home that lies at the core of many powerful ideologies of today, tempting us to relinquish critical thinking for emotional bonding. The danger of nostalgia is that it tends to confuse the actual home with an imaginary one. In extreme cases, it can create a phantom homeland, for the sake of which one is ready to die or kill. Unelected nostalgia breeds monsters. Yet the sentiment itself, the mourning of displacement and temporal irreversibility, is at the very core of the modern condition.

Outbreaks of nostalgia often follow revolutions: the French Revolution of 1789, the Russian revolution, and the recent “velvet” revolutions in Eastern Europe were accompanied by political and cultural manifestations of longing. In France it is not only the ancien régime that produced the revolution, but in some respect the revolution produced the ancien régime, giving it a shape, a sense of closure, and a gilded aura. Similarly, the revolutionary epoch of perestroika and the end of the Soviet Union produced an image of the last Soviet decades as the time of stagnation or, alternatively, as a Soviet Golden Age of stability, national strength, and “normalcy.” Yet the nostalgia that I explore here is not always for the ancien régime, stable superpower, or the fallen empire, but also for the unrealized dreams of the past and visions of the future that became obsolete. The history of nostalgia might allow us to look back at modern history as a search not only for newness and technological progress, but also for unrealized possibilities, unpredictable turns and crossroads.

The most common currency of the globalism exported all over the world is money and popular culture. Nostalgia too is a feature of global culture, but it demands a different currency. After all, the key words defining globalism—“progress,” “modernity,” and “virtual reality”—were invented by poets and philosophers: “progress” was coined by Immanuel Kant; the noun “modernity” is a creation of Charles Baudelaire; and “virtual reality” was first imagined by Henri Bergson, not Bill Gates. Only in Bergson’s definition, “virtual reality” referred to planes of consciousness, potential dimensions of time and creativity that are distinctly and inimitably human. As far as nostalgia is concerned, having failed to uncover its exact locus, 18th‑century doctors recommended seeking help from poets and philosophers. Nostalgia speaks in riddles and puzzles, trespassing across the boundaries between disciplines and national territories. So one has to face it in order not to become its next victim—or the next victimizer.

Instead of a magic cure for nostalgia, I will offer a tentative typology and distinguish between two main types of nostalgia: the restorative and the reflective. Restorative nostalgia stresses nóstos (home) and attempts a transhistorical reconstruction of the lost home. Reflective nostalgia thrives in álgos, the longing itself, and delays the homecoming—wistfully, ironically, desperately. These distinctions are not absolute binaries, and one can surely make a more refined mapping of the gray areas on the outskirts of imaginary homelands. Restorative nostalgia does not think of itself as nostalgia, but rather as truth and tradition. Reflective nostalgia dwells on the ambivalences of human longing and belonging and does not shy away from the contradictions of modernity. Restorative nostalgia protects the absolute truth, while reflective nostalgia calls it into doubt.

Restorative nostalgia is at the core of recent national and religious revivals. It knows two main plots—the return to origins and the conspiracy. Reflective nostalgia does not follow a single plot but explores ways of inhabiting many places at once and imagining different time zones. It loves details, not symbols. At best, it can present an ethical and creative challenge, not merely a pretext for midnight melancholies. If restorative nostalgia ends up reconstructing emblems and rituals of home and homeland in an attempt to conquer and spatialize time, reflective nostalgia cherishes shattered fragments of memory and temporalizes space. Restorative nostalgia takes itself dead seriously. Reflective nostalgia, on the other hand, can be ironic and humorous. It reveals that longing and critical thinking are not opposed to one another, just as affective memories do not absolve one from compassion, judgment, or critical reflection.

The 20th century began with a futuristic utopia and ended with nostalgia. The optimistic belief in the future has become outmoded while nostalgia, for better or for worse, never went out of fashion, remaining uncannily contemporary. Contrary to what the great actress Simone Signoret—who entitled her autobiography Nostalgia Is Not What It Used to Be—thought, the structure of nostalgia is in many respects what it used to be, in spite of changing fashions and advances in digital technology.

In the end, the only antidote for the dictatorship of nostalgia might be nostalgic dissidence. Nostalgia can be a poetic creation, an individual mechanism of survival, a countercultural practice, a poison, and a cure. It is up to us to take responsibility for our nostalgia and not let others “prefabricate” it for us. The prepackaged “usable past” may be of no use to us if we want to cocreate our future. Perhaps dreams of imagined homelands cannot and should not come to life. Sometimes it is preferable (at least in the view of this nostalgic author) to leave dreams alone, let them be no more and no less than dreams, not guidelines for the future.

While restorative nostalgia returns and rebuilds one’s homeland with paranoid determination, reflective nostalgia fears return with the same passion. Home, after all, is not a gated community. Paradise on earth might turn out to be another Potemkin village with no exit. The imperative of a contemporary nostalgic: to be homesick and to be sick of being at home—occasionally at the same time.

Link: Technology and Consumership

Today’s media, combined with the latest portable devices, have pushed serious public discourse into the background and hauled triviality to the fore, according to media theorist Arthur W Hunt. And the Jeffersonian notion of citizenship has given way to modern consumership.

Almantas Samalavicius: In your recently published book Surviving Technopolis, you discuss a number of important and overlapping issues that threaten the future of societies. One of the central themes you explore is the rise, dominance and consequences of visual imagery in public discourse, which you say undermines a more literate culture of the past. This tendency has been outlined and questioned by a large and growing number of social thinkers (Marshall McLuhan, Walter Ong, Jacques Ellul, Ivan Illich, Neil Postman and others). What do you see as most culturally threatening in this shift to visual imagery?

Arthur W. Hunt III: The shift is technological and moral. The two are related, as Ellul has pointed out. Computer-based digital images stem from an evolution of other technologies beginning with telegraphy and photography, both appearing in the middle of the nineteenth century. Telegraphy trivialized information by allowing it to come to us from anywhere and in greater volumes. Photography de-contextualized information by giving us an abundance of pictures disassociated from the objects from which they came. Cinema magnified Aristotle’s notion of spectacle, which he claimed to be the least artistic element in Poetics. Spectacle in modern film tends to diminish all other elements of drama (plot, character, dialogue and so on) in favour of the exploding Capitol building. Radio put the voice of both the President and the Lone Ranger into our living rooms. Television was the natural and powerful usurper of radio and quickly became the nucleus of the home, a station occupied by the hearth for thousands of years. Then the television split in two, three or four ways so that every house member had a set in his or her bedroom. What followed was the personal computer at both home and at work. Today we have portable computers in which we watch shows, play games, email each other and gaze at ourselves like we used to look at Hollywood stars. To a large extent, these technologies are simply extensions of our technological society. They act as Sirens of distraction. They push serious public discourse into the background and pull triviality to the foreground. They move us away from the Jeffersonian notion of citizenship, replacing it with modern capitalism’s ethic of materialistic desire or “consumership”. The great danger of all this, of course, is that we neglect the polis and, instead, waste our time with bread and circuses. Accompanying this neglect is the creation of people who spend years in school yet remain illiterate, at least by the standards we used to hold out for a literate person. The trivialization spreads out into other institutions, as Postman has argued, to schools, churches and politics. This may be an American phenomenon, but many countries look to America’s institutions for guidance.

AS: Philosopher and historian Ivan Illich – one of the most radical critics of modernity and its mythology – has emphasized the conceptual difference between tools, on one hand, and technology on the other, implying that the dominance and overuse of technology is socially and culturally debilitating. Economist E.F. Schumacher urged us to rediscover the beauty of smallness and the use of more humane, “intermediate technologies”. However, a chorus of voices seems to sink in the ocean of popular technological optimism and a stubborn self-generating belief in the power of progress. Your critique contains no call to go back to the Middle Ages. Nor do you suggest that we give anything away to technological advances. Rather, you offer a sound and balanced argument about the misuses of technology and the mindscape that sacrifices tradition and human relationships on the altar of progress. Do you see any possibility of developing a more balanced approach to the role of technology in our culture? Obviously, many are aware, even if cynically, that technological progress has its downsides, but what of its upsides?

AWH: Short of a nuclear holocaust, we will not be going back to the Middle Ages any time soon. Electricity and automobiles are here to stay. The idea is not to be anti-technology. Neil Postman once said to be anti-technology is like being anti-food. Technologies are extensions of our bodies, and therefore scale, ecological impact and human flourishing become the yardstick for technological wisdom. The conventional wisdom of modern progress favours bigger, faster, newer and more. Large corporations see their purpose on earth as maximizing profits. Their goal is to get us addicted to their addictions. We can no longer afford this kind of wisdom, which is not wisdom at all, but foolishness. We need to bolster a conversation about the human benefits of smaller, slower, older and less. Europeans often understand this better than Americans, that is, they are more conscious of preserving living spaces that are functional, aesthetically pleasing and that foster human interaction. E.F. Schumacher gave us some useful phraseology to promote an economy of human scale: “small is beautiful,” “technologies with a human face” and “homecomers.” He pointed out that “labour-saving machinery” is a paradoxical term, not only because it makes us unemployed, but also because it diminishes the value of work. Our goal should be to move toward a “third-way” economic model, one of self-sufficient regions, local economies of scale, thriving community life, cooperatives, family owned farms and shops, economic integration between the countryside and the nearby city, and a general revival of craftsmanship. Green technologies – solar and wind power for example – actually can help us achieve this third way, which is a kind of micro-capitalism.

AS: Technologies developed by humans (e.g. television) continue to shape and sustain a culture of consumerism, which has now become a global phenomenon. As you insightfully observe in one of your essays, McLuhan, who was often misinterpreted and misunderstood as a social theorist and hailed by the television media he explored in great depth, was fully aware of its ill effects on the human personality and therefore limited his children’s TV viewing. Jerry Mander has argued for the elimination of television altogether; nevertheless, this medium is alive and kicking and continues to promote an ideology of consumption and, what is perhaps most alarming, successfully conditions children to become voracious consumers in a society where the roles of parents become more and more institutionally limited. Do you have any hopes for this situation? Can one expect that people will develop a more critical attitude toward these instruments, which shape them as consumers? Does social criticism of these trends play any role in an environment where the media and the virtual worlds of the entertainment industry have become so powerful?

AWH: Modern habits of consumption have created what Benjamin Barber calls an “ethos of infantilization”, where children are psychologically manipulated into early adulthood and adults are conditioned to remain in a perpetual state of adolescence. Postman suggested essentially the same thing when he wrote The Disappearance of Childhood. There have been many books written that address the problems of electronic media in stunting a child’s mental, physical and spiritual development. One of the better recent ones is Richard Louv’s Last Child in the Woods. Another one is Anthony Esolen’s Ten Ways to Destroy the Imagination of Your Child. We have plenty of books, but we don’t have enough people reading them or putting them into practice. Raising a child today is a daunting business, and maybe this is why more people are refusing to do it. No wonder Joel Bakan, a law professor at the University of British Columbia, wrote a New York Times op-ed complaining, “There is reason to believe that childhood itself is now in crisis.” The other day I was listening to the American television program 60 Minutes. The reporter was interviewing the Australian actress Cate Blanchett. I almost fell out of my chair when she starkly told the reporter, “We don’t outsource our children.” What she meant was, she does not let someone else raise her children. I think she was on to something. In most families today, both parents work outside the home. This is a fairly recent development if you consider the entire span of human history. Industrialism brought an end to the family as an economic unit. First, the father went off to work in the factory. Then, the mother entered the workforce during the last century. Well, the children could not stay home alone, so they were outsourced to various surrogate institutions. What was once provided by the home economy (oikos) – education, health care, child rearing and care of the elderly – came to be provided by the state. The rest of our needs – food, clothing, shelter and entertainment – came to be provided by the corporations. A third-way economic ordering would seek to revive the old notion of oikos so that the home can once again be a legitimate economic, educational and care-providing unit – not just a place to watch TV and sleep. In other words, the home would once again become a centre for production, not just consumption. If this ever happened, one or both parents would be at home and little Johnny and sister Jane would work and play alongside their parents.

AS: I was intrigued by your insight into forms of totalitarianism depicted by George Orwell and Aldous Huxley. Though most authors who discussed totalitarianism during the last half of the century were overtaken by the Orwellian vision and praised it as most enlightening, the alternative Huxleyan vision of a self-inflicted, joyful and entertaining totalitarian society was far less scrutinized. Do you think we are entering into a culture where “totalitarianism with a happy face”, as you call it, prevails? If so, what consequences do you foresee?

AWH: It is interesting to note that Orwell thought Huxley’s Brave New World was implausible because he maintained that hedonistic societies do not last long, and that they are too boring. However, both authors were addressing what many other intellectuals were debating during the 1930s: what would be the social implications of Darwin and Freud? What ideology would eclipse Christianity? Would the new social sciences be embraced with as much exuberance as the hard sciences? What would happen if managerial science were infused into all aspects of life? What should we make of wartime propaganda? What would be the long-term effects of modern advertising? What would happen to the traditional family? How could class divisions be resolved? How would new technologies shape the future?

I happen to believe there are actually more similarities between Orwell’s 1984 and Huxley’s Brave New World than there are differences. Both novels have as their backstory the dilemma of living with weapons of mass destruction. The novel 1984 imagines what would happen if Hitler succeeded. In Brave New World, the world is at a crossroads. What is it to be, the annihilation of the human race or world peace through sociological control? In the end, the world chooses a highly efficient authoritarian state, which keeps the masses pacified by maintaining a culture of consumption and pleasure. In both novels, the past is wiped away from public memory. In Orwell’s novel, whoever “controls the past controls the future.” In Huxley’s novel, the past has been declared barbaric. All books published before A.F. 150 (that is, 150 years after 1908 CE, the year the first Model T rolled off the assembly line) are suppressed. Mustapha Mond, the Resident Controller in Brave New World, declares the wisdom of Ford: “History is bunk.” In both novels, the traditional family has been radically altered. Orwell draws from the Hitler Youth and the Soviet Young Pioneers to give us a society where the child’s loyalty to the state far outweighs any loyalty to parents. Huxley gives us a novel where the biological family does not even exist. Any familial affection is looked down upon. Everybody belongs to everybody, sexually and otherwise. Both novels give us worlds where rational thought is suppressed so that “war is peace”, “freedom is slavery” and “ignorance is strength” (1984). In Brave New World, when Lenina is challenged by Marx to think for herself, all she can say is “I don’t understand.” The heroes in both novels are malcontents who want to escape this irrationality but end up excluded from society as misfits. Both novels perceive humans as religious beings where the state recognizes this truth but channels these inclinations toward patriotic devotion. In 1984, Big Brother is worshipped. In Brave New World, the Christian cross has been cut off at the top to form the letter “T” for Technology. When engaged in the Orgy-Porgy, everyone in the room chants, “Ford, Ford, Ford.” In both novels an elite ruling class controls the populace by means of sophisticated technologies. Both novels show us surveillance states where the people are constantly monitored. Sound familiar? Certainly, as Postman tells us in his foreword to Amusing Ourselves to Death, Huxley’s vision eerily captures our culture of consumption. But how long would it take for a society to move from a happy-faced totalitarianism to one that has a mask of tragedy?

AS: Your comments on the necessity of the third way in our societies subjected to and affected by economic globalization seem to resonate with the ideas of many social thinkers I interviewed for this series. Many outstanding social critics and thinkers seem to agree that the notions of communism and capitalism have become stale and meaningless; further development of these paradigms leads us nowhere. One of your essays focuses on the old concept of “shire” and household economics. Do you believe in what Mumford called “the useful past”? And do you expect the growing movement that might be referred to as “new economics” to enter the mainstream of our economic thinking, eventually leading to changes in our social habits?

AWH: If the third way economic model ever took hold, I suppose it could happen in several ways. We will start with the most desirable way, and then move to less desirable. The most peaceful way for this to happen is for people to come to some kind of realization that the global economy is not benefiting them and start desiring something else. People will see that their personal wages have been stagnant for too long, that they are working too hard with nothing to show for it, that something has to be done about the black hole of debt, and that they feel like pawns in an incomprehensible game of chess. Politicians will hear their cries and institute policies that would allow for local economies, communities and families to flourish. This scenario is less likely to happen, because the multinationals that help fund the campaigns of politicians will not allow it. I am primarily thinking of the American reality in my claim here. Unless corporations have a change of mind, something akin to a religious conversion, we will not see them open their hearts and give away their power.

A more likely scenario is that a grassroots movement led by creative innovators begins to experiment with new forms of community that serve to repair the moral and aesthetic imagination distorted by modern society. Philosopher Alasdair MacIntyre calls this the “Benedict Option” in his book After Virtue. Morris Berman’s The Twilight of American Culture essentially calls for the same solution. Inspired by the monasteries that preserved western culture in Europe during the Dark Ages, these communities would serve as models for others who are dissatisfied with the broken dreams associated with modern life. These would not be utopian communities, but humble efforts of trial and error, and hopefully diverse according to the outlook of those who live in them. The last scenario would be to have some great crisis occur – political, economic, or natural in origin – that would thrust upon us the necessity of reordering our institutions. My father, who is in his nineties, often reminisces to me about the Great Depression. Although it was a miserable time, he speaks of it as the happiest time in his life. His best stories are about neighbours who loved and cared for each other, garden plots and favourite fishing holes. For any third way to work, a memory of the past will become very useful even if it sounds like literature. From a practical point of view, however, the kinds of knowledge that we will have to remember will include how to build a solid house, how to plant a vegetable garden, how to butcher a hog and how to craft a piece of furniture. In rural Tennessee where I live, there are people still around who know how to do these things, but they are a dying breed.

AS: The long (almost half-century) period of the Cold War has resulted in many social effects. The horrors of Communist regimes and the futility of state-planned economics, as well as the treason of western intellectuals who remained blind to the practices of Communist powers and espoused ideas of idealized Communism, have aided the ideology of capitalism and consumerism. Capitalism came to be associated with ideas of freedom, free enterprise, freedom to choose and so on. How is this legacy burdening us in the current climate of economic globalization? Do you think recent crises and new social movements have the potential to shape a more critical view (and revision) of capitalism and especially its most ugly neo-liberal shape?

AWH: Here in America liberals want to hold on to their utopian visions of progress amidst the growing evidence that global capitalism is not delivering on its promises. Conservatives are very reluctant to criticize the downsides of capitalism, yet they are not really that different in their own visions of progress in comparison to liberals. It was amusing to hear the American politician Sarah Palin describe Pope Francis’ recent declarations against the “globalization of indifference” as being “a little liberal.” The Pope is liberal? While Democrats look to big government to save them, Republicans look to big business. Don’t they realize that with modern capitalism, big government and big business are joined at the hip? The British historian Hilaire Belloc recognized this over a century ago, when he wrote about the “servile state,” a condition where an unfree majority of non-owners work for the pleasure of a free minority of owners. But getting to your question, I do think more people are beginning to wake up to the problems associated with modern consumerist capitalism. A good example of this is a recent critique of capitalism written by Daniel M. Bell, Jr. entitled The Economy of Desire: Christianity and Capitalism in a Postmodern World. Here is a religious conservative who is saying the great tempter of our age is none other than Walmart. The absurdist philosopher and Nobel Prize winner Albert Camus once said the real passion of the twentieth century was not freedom, but servitude. Jacques Ellul, Camus’s contemporary, would have agreed with that assessment. Both believed that the United States and the Soviet Union, despite their Cold War differences, had one thing in common – the two powers had surrendered to the sovereignty of technology. Camus’ absurdism took a hard turn toward nihilism, while Ellul turned out to be a kind of cultural Jeremiah. It is interesting to me that when I talk to some people about third way ideas, which actually is an old way of thinking about economy, they tell me it can’t be done, that we are now beyond all that, and that our economic trajectory is unstoppable or inevitable. This retort, I think, reveals how little freedom our system possesses. So, I can’t have a family farm? My small business can’t compete with the big guys? My wife has to work outside the home and I have to outsource the raising of my children? Who would have thought capitalism would lack this much freedom?

AS: And finally are you an optimist? Jacques Ellul seems to have been very pessimistic about us escaping from the iron cage of technological society. Do you think we can still break free?

AWH: I am both optimistic and pessimistic. In America, our rural areas are becoming increasingly depopulated. I see this as an opportunity for resettling the land – those large swaths of fields and forests that encompass about three quarters of our landmass. That is a very nice drawing board if we can figure out how to get back to it. I am also optimistic about the fact that more people are waking up to our troubling times. Other American writers that I would classify as third way proponents include Wendell Berry, Kirkpatrick Sale, Rod Dreher, Mark T. Mitchell, Bill Kauffman, Joseph Pearce and Allan Carlson. There is also a current within the American and British literary tradition, which has served as a critique of modernity. G.K. Chesterton, J.R.R. Tolkien, Dorothy Day and Allen Tate represent this sensibility, which is really a Catholic sensibility, although one does not have to be Catholic to have it. I am amazed at the popularity of novels about Amish people among American evangelical women. Even my wife reads them, and we are Presbyterians! In this country, the local food movement, the homeschool movement and the simplicity movement all seem to be pointing toward a kind of breaking away. You do not have to be Amish to break away from the cage of technological society; you only have to be deliberate and courageous. If we ever break out of the cage in the West, there will be two types of people who will lead such a movement. The first are religious people, both Catholic and Protestant, who will want to create a counter-environment for themselves and their children. The second are the old-school humanists, people who have a sense of history, an appreciation of the cultural achievements of the past, and the ability to see what is coming down the road. If Christians and humanists do nothing, and let modernity roll over them, I am afraid we face what C.S. Lewis called “the abolition of man”. Lewis believed our greatest danger was to have a technological elite – what he called The Conditioners – exert power over the vast majority so that our humanity is squeezed out of us. Of course all of this would be done in the name of progress, and most of us would willingly comply. The Conditioners are not acting on behalf of the public good or any other such ideal, rather what they want are guns, gold, and girls – power, profits and pleasure. The tragedy of all this, as Lewis pointed out, is that if they destroy us, they will destroy themselves, and in the end Nature will have the last laugh.

Link: Neil Postman: Informing Ourselves to Death

The following speech was given at a meeting of the German Informatics Society (Gesellschaft fuer Informatik) on October 11, 1990 in Stuttgart, Germany.

The great English playwright and social philosopher George Bernard Shaw once remarked that all professions are conspiracies against the common folk. He meant that those who belong to elite trades—physicians, lawyers, teachers, and scientists—protect their special status by creating vocabularies that are incomprehensible to the general public.  This process prevents outsiders from understanding what the profession is doing and why—and protects the insiders from close examination and criticism. Professions, in other words, build forbidding walls of technical gobbledegook over which the prying and alien eye cannot see.

Unlike George Bernard Shaw, I raise no complaint against this, for I consider myself a professional teacher and appreciate technical gobbledegook as much as anyone. But I do not object if occasionally someone who does not know the secrets of my trade is allowed entry to the inner halls to express an untutored point of view. Such a person may sometimes give a refreshing opinion or, even better, see something in a way that the professionals have overlooked.

I believe I have been invited to speak at this conference for just such a purpose. I do not know very much more about computer technology than the average person—which isn’t very much. I have little understanding of what excites a computer programmer or scientist, and in examining the descriptions of the presentations at this conference, I found each one more mysterious than the next. So, I clearly qualify as an outsider.

But I think that what you want here is not merely an outsider but an outsider who has a point of view that might be useful to the insiders. And that is why I accepted the invitation to speak. I believe I know something about what technologies do to culture, and I know even more about what technologies undo in a culture. In fact, I might say, at the start, that what a technology undoes is a subject that computer experts apparently know very little about. I have heard many experts in computer technology speak about the advantages that computers will bring. With one exception - namely, Joseph Weizenbaum—I have never heard anyone speak seriously and comprehensively about the disadvantages of computer technology, which strikes me as odd, and makes me wonder if the profession is hiding something important. That is to say, what seems to be lacking among computer experts is a sense of technological modesty.

After all, anyone who has studied the history of technology knows that technological change is always a Faustian bargain: Technology giveth and technology taketh away, and not always in equal measure. A new technology sometimes creates more than it destroys. Sometimes, it destroys more than it creates.  But it is never one-sided.

The invention of the printing press is an excellent example.  Printing fostered the modern idea of individuality but it destroyed the medieval sense of community and social integration. Printing created prose but made poetry into an exotic and elitist form of expression. Printing made modern science possible but transformed religious sensibility into an exercise in superstition. Printing assisted in the growth of the nation-state but, in so doing, made patriotism into a sordid if not a murderous emotion.

In the case of computer technology, there can be no disputing that the computer has increased the power of large-scale organizations like military establishments or airline companies or banks or tax collecting agencies. And it is equally clear that the computer is now indispensable to high-level researchers in physics and other natural sciences. But to what extent has computer technology been an advantage to the masses of people? To steel workers, vegetable store owners, teachers, automobile mechanics, musicians, bakers, brick layers, dentists and most of the rest into whose lives the computer now intrudes? These people have had their private matters made more accessible to powerful institutions. They are more easily tracked and controlled; they are subjected to more examinations, and are increasingly mystified by the decisions made about them. They are more often reduced to mere numerical objects. They are being buried by junk mail. They are easy targets for advertising agencies and political organizations. The schools teach their children to operate computerized systems instead of teaching things that are more valuable to children. In a word, almost nothing happens to the losers that they need, which is why they are losers.

It is to be expected that the winners—for example, most of the speakers at this conference—will encourage the losers to be enthusiastic about computer technology. That is the way of winners, and so they sometimes tell the losers that with personal computers the average person can balance a checkbook more neatly, keep better track of recipes, and make more logical shopping lists. They also tell them that they can vote at home, shop at home, get all the information they wish at home, and thus make community life unnecessary. They tell them that their lives will be conducted more efficiently, discreetly neglecting to say from whose point of view or what might be the costs of such efficiency.

Should the losers grow skeptical, the winners dazzle them with the wondrous feats of computers, many of which have only marginal relevance to the quality of the losers’ lives but which are nonetheless impressive. Eventually, the losers succumb, in part because they believe that the specialized knowledge of the masters of a computer technology is a form of wisdom. The masters, of course, come to believe this as well.  The result is that certain questions do not arise, such as, to whom will the computer give greater power and freedom, and whose power and freedom will be reduced?

Now, I have perhaps made all of this sound like a well-planned conspiracy, as if the winners know all too well what is being won and what lost. But this is not quite how it happens, for the winners do not always know what they are doing, and where it will all lead. The Benedictine monks who invented the mechanical clock in the 12th and 13th centuries believed that such a clock would provide a precise regularity to the seven periods of devotion they were required to observe during the course of the day.  As a matter of fact, it did. But what the monks did not realize is that the clock is not merely a means of keeping track of the hours but also of synchronizing and controlling the actions of men. And so, by the middle of the 14th century, the clock had moved outside the walls of the monastery, and brought a new and precise regularity to the life of the workman and the merchant. The mechanical clock made possible the idea of regular production, regular working hours, and a standardized product. Without the clock, capitalism would have been quite impossible. And so, here is a great paradox: the clock was invented by men who wanted to devote themselves more rigorously to God; and it ended as the technology of greatest use to men who wished to devote themselves to the accumulation of money. Technology always has unforeseen consequences, and it is not always clear, at the beginning, who or what will win, and who or what will lose.

I might add, by way of another historical example, that Johann Gutenberg was by all accounts a devoted Christian who would have been horrified to hear Martin Luther, the accursed heretic, declare that printing is “God’s highest act of grace, whereby the business of the Gospel is driven forward.” Gutenberg thought his invention would advance the cause of the Holy Roman See, whereas in fact, it turned out to bring a revolution which destroyed the monopoly of the Church.

We may well ask ourselves, then, is there something that the masters of computer technology think they are doing for us which they and we may have reason to regret? I believe there is, and it is suggested by the title of my talk, “Informing Ourselves to Death”. In the time remaining, I will try to explain what is dangerous about the computer, and why. And I trust you will be open enough to consider what I have to say. Now, I think I can begin to get at this by telling you of a small experiment I have been conducting, on and off, for the past several years. There are some people who describe the experiment as an exercise in deceit and exploitation but I will rely on your sense of humor to pull me through.

Here’s how it works: It is best done in the morning when I see a colleague who appears not to be in possession of a copy of The New York Times. “Did you read The Times this morning?” I ask. If the colleague says yes, there is no experiment that day. But if the answer is no, the experiment can proceed. “You ought to look at Page 23,” I say. “There’s a fascinating article about a study done at Harvard University.”  “Really? What’s it about?” is the usual reply. My choices at this point are limited only by my imagination. But I might say something like this: “Well, they did this study to find out what foods are best to eat for losing weight, and it turns out that a normal diet supplemented by chocolate eclairs, eaten six times a day, is the best approach. It seems that there’s some special nutrient in the eclairs—encomial dioxin—that actually uses up calories at an incredible rate.”

Another possibility, which I like to use with colleagues who are known to be health conscious is this one: “I think you’ll want to know about this,” I say. “The neuro-physiologists at the University of Stuttgart have uncovered a connection between jogging and reduced intelligence. They tested more than 1200 people over a period of five years, and found that as the number of hours people jogged increased, there was a corresponding decrease in their intelligence. They don’t know exactly why but there it is.”

I’m sure, by now, you understand what my role is in the experiment: to report something that is quite ridiculous—one might say, beyond belief. Let me tell you, then, some of my results: Unless this is the second or third time I’ve tried this on the same person, most people will believe or at least not disbelieve what I have told them. Sometimes they say: “Really? Is that possible?” Sometimes they do a double-take, and reply, “Where’d you say that study was done?” And sometimes they say, “You know, I’ve heard something like that.”

Now, there are several conclusions that might be drawn from these results, one of which was expressed by H. L. Mencken fifty years ago when he said, there is no idea so stupid that you can’t find a professor who will believe it. This is more of an accusation than an explanation but in any case I have tried this experiment on non-professors and get roughly the same results. Another possible conclusion is one expressed by George Orwell—also about 50 years ago—when he remarked that the average person today is about as naive as was the average person in the Middle Ages. In the Middle Ages people believed in the authority of their religion, no matter what. Today, we believe in the authority of our science, no matter what.

But I think there is still another and more important conclusion to be drawn, related to Orwell’s point but rather off at a right angle to it. I am referring to the fact that the world in which we live is very nearly incomprehensible to most of us. There is almost no fact—whether actual or imagined—that will surprise us for very long, since we have no comprehensive and consistent picture of the world which would make the fact appear as an unacceptable contradiction. We believe because there is no reason not to believe. No social, political, historical, metaphysical, logical or spiritual reason. We live in a world that, for the most part, makes no sense to us. Not even technical sense. I don’t mean to try my experiment on this audience, especially after having told you about it, but if I informed you that the seats you are presently occupying were actually made by a special process which uses the skin of a Bismark herring, on what grounds would you dispute me? For all you know—indeed, for all I know—the skin of a Bismark herring could have made the seats on which you sit. And if I could get an industrial chemist to confirm this fact by describing some incomprehensible process by which it was done, you would probably tell someone tomorrow that you spent the evening sitting on a Bismark herring.

Perhaps I can get a bit closer to the point I wish to make with an analogy: If you opened a brand-new deck of cards, and started turning the cards over, one by one, you would have a pretty good idea of what their order is. After you had gone from the ace of spades through the nine of spades, you would expect a ten of spades to come up next. And if a three of diamonds showed up instead, you would be surprised and wonder what kind of deck of cards this is. But if I gave you a deck that had been shuffled twenty times, and then asked you to turn the cards over, you would not expect any card in particular; a three of diamonds would be just as likely as a ten of spades. Having no basis for assuming a given order, you would have no reason to react with disbelief or even surprise to whatever card turns up.

The point is that, in a world without spiritual or intellectual order, nothing is unbelievable; nothing is predictable, and therefore, nothing comes as a particular surprise.

In fact, George Orwell was more than a little unfair to the average person in the Middle Ages. The belief system of the Middle Ages was rather like my brand-new deck of cards. There existed an ordered, comprehensible world-view, beginning with the idea that all knowledge and goodness come from God. What the priests had to say about the world was derived from the logic of their theology. There was nothing arbitrary about the things people were asked to believe, including the fact that the world itself was created at 9 AM on October 23 in the year 4004 B. C. That could be explained, and was, quite lucidly, to the satisfaction of anyone. So could the fact that 10,000 angels could dance on the head of a pin. It made quite good sense, if you believed that the Bible is the revealed word of God and that the universe is populated with angels. The medieval world was, to be sure, mysterious and filled with wonder, but it was not without a sense of order. Ordinary men and women might not clearly grasp how the harsh realities of their lives fit into the grand and benevolent design, but they had no doubt that there was such a design, and their priests were well able, by deduction from a handful of principles, to make it, if not rational, at least coherent.

The situation we are presently in is much different. And I should say, sadder and more confusing and certainly more mysterious. It is rather like the shuffled deck of cards I referred to. There is no consistent, integrated conception of the world which serves as the foundation on which our edifice of belief rests. And therefore, in a sense, we are more naive than those of the Middle Ages, and more frightened, for we can be made to believe almost anything. The skin of a Bismark herring makes about as much sense as a vinyl alloy or encomial dioxin.

Now, in a way, none of this is our fault. If I may turn the wisdom of Cassius on its head: the fault is not in ourselves but almost literally in the stars. When Galileo turned his telescope toward the heavens, and allowed Kepler to look as well, they found no enchantment or authorization in the stars, only geometric patterns and equations. God, it seemed, was less of a moral philosopher than a master mathematician. This discovery helped to give impetus to the development of physics but did nothing but harm to theology. Before Galileo and Kepler, it was possible to believe that the Earth was the stable center of the universe, and that God took a special interest in our affairs. Afterward, the Earth became a lonely wanderer in an obscure galaxy in a hidden corner of the universe, and we were left to wonder if God had any interest in us at all. The ordered, comprehensible world of the Middle Ages began to unravel because people no longer saw in the stars the face of a friend.

And something else, which once was our friend, turned against us, as well. I refer to information. There was a time when information was a resource that helped human beings to solve specific and urgent problems of their environment. It is true enough that in the Middle Ages, there was a scarcity of information but its very scarcity made it both important and usable. This began to change, as everyone knows, in the late 15th century when a goldsmith named Gutenberg, from Mainz, converted an old wine press into a printing machine, and in so doing, created what we now call an information explosion. Forty years after the invention of the press, there were printing machines in 110 cities in six different countries; 50 years after, more than eight million books had been printed, almost all of them filled with information that had previously not been available to the average person. Nothing could be more misleading than the idea that computer technology introduced the age of information. The printing press began that age, and we have not been free of it since.

But what started out as a liberating stream has turned into a deluge of chaos. If I may take my own country as an example, here is what we are faced with: In America, there are 260,000 billboards; 11,520 newspapers; 11,556 periodicals; 27,000 video outlets for renting tapes; 362 million tv sets; and over 400 million radios. There are 40,000 new book titles published every year (300,000 world-wide) and every day in America 41 million photographs are taken, and just for the record, over 60 billion pieces of advertising junk mail come into our mail boxes every year. Everything from telegraphy and photography in the 19th century to the silicon chip in the twentieth has amplified the din of information, until matters have reached such proportions today that for the average person, information no longer has any relation to the solution of problems.

The tie between information and action has been severed. Information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one’s status. It comes indiscriminately, directed at no one in particular, disconnected from usefulness; we are glutted with information, drowning in information, have no control over it, don’t know what to do with it.

And there are two reasons we do not know what to do with it. First, as I have said, we no longer have a coherent conception of ourselves, and our universe, and our relation to one another and our world. We no longer know, as the Middle Ages did, where we come from, and where we are going, or why. That is, we don’t know what information is relevant, and what information is irrelevant to our lives. Second, we have directed all of our energies and intelligence to inventing machinery that does nothing but increase the supply of information. As a consequence, our defenses against information glut have broken down; our information immune system is inoperable. We don’t know how to filter it out; we don’t know how to reduce it; we don’t know how to use it. We suffer from a kind of cultural AIDS.

Link: "We Need to Talk About TED"

This is my rant against TED, placebo politics, “innovation,” middlebrow megachurch infotainment, etc., given at TEDx San Diego at their invitation (thank you to Jack Abbott and Felena Hanson). It’s very difficult to do anything interesting within the format, and even this seems like far too much of a ‘TED talk’, especially to me. In California R&D World, TED (and TED-ism) is unfortunately a key forum for how people communicate with one another. It’s weird, inadequate and symptomatic, to be sure, but it is one of ‘our’ key public squares, however degraded and captured. Obviously any sane intellectual wouldn’t go near it. Perhaps that’s why I was (am) curious about what (if any) reverberation my very minor heresy might have: probably nothing, and at worst an alibi and vaccine for TED to ward off the malaise that stalks them? We’ll have to see. The text of the talk is below, and was also published as an Op-Ed by The Guardian.

In our culture, talking about the future is sometimes a polite way of saying things about the present that would otherwise be rude or risky.

But have you ever wondered why so little of the future promised in TED talks actually happens? So much potential and enthusiasm, and so little actual change. Are the ideas wrong? Or is the idea about what ideas can do all by themselves wrong?

I write about entanglements of technology and culture, how technologies enable the making of certain worlds, and at the same time how culture structures how those technologies will evolve, this way or that. It’s where philosophy and design intersect.

So the conceptualization of possibilities is something that I take very seriously. That’s why I, and many people, think it’s way past time to take a step back and ask some serious questions about the intellectual viability of things like TED.

So my TED talk is not about my work or my new book—the usual spiel—but about TED itself, what it is and why it doesn’t work.

The first reason is over-simplification.

To be clear, I think that having smart people who do very smart things explain what they are doing in a way that everyone can understand is a good thing. But TED goes way beyond that.

Let me tell you a story. I was at a presentation that a friend, an astrophysicist, gave to a potential donor. I thought the presentation was lucid and compelling (and I’m a Professor of Visual Arts here at UC San Diego, so at the end of the day, I really know nothing about astrophysics). After the talk the donor said to him, “you know what, I’m gonna pass because I just don’t feel inspired… you should be more like Malcolm Gladwell.”

At this point I kind of lost it. Can you imagine?

Think about it: an actual scientist who produces actual knowledge should be more like a journalist who recycles fake insights! This is beyond popularization. This is taking something with value and substance  and coring it out so that it can be swallowed without chewing. This is not the solution to our most frightening problems—rather this is one of our most frightening problems.

So I ask the question: does TED epitomize a situation in which a scientist (or an artist or philosopher or activist or whoever) is told that their work is not worthy of support, because the public doesn’t feel good listening to them?

I submit that Astrophysics run on the model of American Idol is a recipe for civilizational disaster.

What is TED?

So what is TED exactly?

Perhaps it’s the proposition that if we talk about world-changing ideas enough, then the world will change.  But this is not true, and that’s the second problem.

TED of course stands for Technology, Entertainment, Design, and I’ll talk a bit about all three. I think TED actually stands for: middlebrow megachurch infotainment.

The key rhetorical device for TED talks is a combination of epiphany and personal testimony (an “epiphimony” if you like) through which the speaker shares a personal journey of insight and realization, its triumphs and tribulations.

What is it that the TED audience hopes to get from this? A vicarious insight, a fleeting moment of wonder, an inkling that maybe it’s all going to work out after all? A spiritual buzz?

I’m sorry but this fails to meet the challenges that we are supposedly here to confront. These are complicated and difficult and are not given to tidy just-so solutions. They don’t care about anyone’s experience of optimism. Given the stakes, making our best and brightest waste their time, and the audience’s time, dancing like infomercial hosts is too high a price. It is cynical.

Also, it just doesn’t work.

Recently there was a bit of a dust-up when TED Global sent out a note to TEDx organizers asking them not to book speakers whose work spans the paranormal, the conspiratorial, New Age “quantum neuroenergy,” etc.: what is called Woo. Instead of these placebos, TEDx should curate talks that are imaginative but grounded in reality. In fairness, they took some heat, so their gesture should be acknowledged. A lot of people take TED very seriously, and might lend credence to specious ideas if stamped with TED credentials. “No” to placebo science and medicine.

But…the corollaries of placebo science and placebo medicine are placebo politics and placebo innovation. On this point, TED has a long way to go.

Perhaps the pinnacle of placebo politics and innovation was featured at TEDx San Diego in 2011. You’re familiar, I assume, with Kony2012, the social media campaign to stop war crimes in Central Africa? So what happened here? Evangelical surfer Bro goes to help kids in Africa. He makes a campy video explaining genocide to the cast of Glee. The world finds his public epiphany to be shallow to the point of self-delusion. The complex geopolitics of Central Africa are left undisturbed. Kony’s still there. The end.

You see, when inspiration becomes manipulation, inspiration becomes obfuscation. If you are not cynical you should be skeptical. You should be as skeptical of placebo politics as you are of placebo medicine.

T and Technology

T - E - D. I’ll go through them each quickly.

So first Technology…

We hear that not only is change accelerating but that the pace of change is accelerating as well.

While this is true of computational carrying-capacity at a planetary level, at the same time—and in fact the two are connected—we are also in a moment of cultural de-acceleration.

We invest our energy in futuristic information technologies, including our cars, but drive them home to kitsch architecture copied from the 18th century. The future on offer is one in which everything changes, so long as everything stays the same. We’ll have Google Glass, but still also business casual.

This timidity is our path to the future? No, this is incredibly conservative, and there is no reason to think that more Gigaflops will inoculate us.

Because, if a problem is in fact endemic to a system, then the exponential effects of Moore’s Law also serve to amplify what’s broken. It is more computation along the wrong curve, and I don’t think this is necessarily a triumph of reason.

Part of my work explores deep technocultural shifts, from post-humanism to the post-anthropocene, but TED’s version has too much faith in technology, and not nearly enough commitment to technology. It is placebo technoradicalism, toying with risk so as to re-affirm the comfortable.

So our machines get smarter and we get stupider. But it doesn’t have to be like that. Both can be much more intelligent. Another futurism is possible.

E and Economics

A better ‘E’ in TED would stand for Economics, and the need for, yes, imagining and designing different systems of valuation, exchange, accounting of transaction externalities, financing of coordinated planning, etc. Because States plus Markets, States versus Markets, these are insufficient models, and our conversation is stuck in Cold War gear.

Worse is when economics is debated like metaphysics, as if the reality of a system is merely a bad example of the ideal.

Communism in theory is an egalitarian utopia.

Actually existing Communism meant ecological devastation, government spying, crappy cars and gulags.

Capitalism in theory is rocket ships, nanomedicine, and Bono saving Africa.

Actually existing Capitalism means Walmart jobs, McMansions, people living in the sewers under Las Vegas, Ryan Seacrest…plus —ecological devastation, government spying, crappy public transportation and for-profit prisons.

Our options for change range from basically what we have plus a little more Hayek, to what we have plus a little more Keynes. Why?

The most  recent centuries have seen extraordinary accomplishments in improving quality of life. The paradox is that the system we have now —whatever you want to call it— is in the short term what makes the amazing new technologies possible, but in the long run it is also what suppresses their full flowering.  Another economic architecture is prerequisite.

D and Design

Instead of our designers prototyping the same “change agent for good” projects over and over again, and then wondering why they don’t get implemented at scale, perhaps we should resolve that design is not some magic answer. Design matters a lot, but for very different reasons. It’s easy to get enthusiastic about design because, like talking about the future, it is more polite than referring to white elephants in the room.

Such as…

Phones, drones and genomes, that’s what we do here in San Diego and La Jolla. In addition to the other  insanely great things these technologies do, they are the basis of NSA spying, flying robots killing people, and the wholesale privatization of  biological life itself. That’s also what we do.

The potential of these technologies is both wonderful and horrifying at the same time, and to make them serve good futures, design as “innovation” just isn’t a strong enough idea by itself. We need to talk more about design as “immunization,” actively preventing certain potential “innovations” that we do not want from happening.

And so…

As for one simple takeaway… I don’t have one simple takeaway, one magic idea. That’s kind of the point. I will say that if and when the key problems facing our species were to be solved, then perhaps many of us in this room would be out of work (and perhaps in jail).

But it’s not as though there is a shortage of topics for serious discussion. We need a deeper conversation about the difference between digital cosmopolitanism and Cloud Feudalism (and toward that, a queer history of computer science and Alan Turing’s birthday as holiday!)

I would like new maps of the world, ones not based on settler colonialism, legacy genomes and bronze age myths, but instead on something more… scalable.

TED today is not that.

Problems are not “puzzles” to be solved. That metaphor assumes that all the necessary pieces are already on the table, they just need to be re-arranged and re-programmed. It’s not true.

“Innovation” defined as moving the pieces around and adding more processing power is not some Big Idea that will disrupt a broken status quo: that precisely is the broken status quo.

One TED speaker said recently, “If you remove this boundary, …the only boundary left is our imagination.” Wrong.

If we really want transformation, we have to slog through the hard stuff (history, economics, philosophy, art, ambiguities, contradictions).  Bracketing it off to the side to focus just on technology, or just on innovation, actually prevents transformation.

Instead of dumbing-down the future, we need to raise the level of general understanding to the level of complexity of the systems in which we are embedded and which are embedded in us. This is not about “personal stories of inspiration,” it’s about the difficult and uncertain work of de-mystification and re-conceptualization: the hard stuff that really changes how we think. More Copernicus, less Tony Robbins.

At a societal level, the bottom line is that if we invest in things that make us feel good but which don’t work, and don’t invest in things that don’t make us feel good but which may solve problems, then our fate is that it will just get harder to feel good about not solving problems.

In this case the placebo is worse than ineffective, it’s harmful. It diverts your interest, enthusiasm and outrage until it’s absorbed into this black hole of affectation.

Keep calm and carry on “innovating”… is that the real message of TED? To me that’s not inspirational, it’s cynical.

In the U.S. the right-wing has certain media channels that allow it to bracket reality… other constituencies have TED.  

Link: An interview with Adam Curtis, producer of the BBC documentaries The Power of Nightmares and The Century of the Self

Adam Curtis remains at the forefront of documentary filmmaking. He began in the early 80s, but his first major breakthrough came in 1992 with Pandora’s Box, a film which warned of the dangers of technocratic politics and saw him pick up the first of his six career BAFTAs.

Holed up in a BBC basement, Curtis brings together disparate subjects and uses archival footage to chart political history. His love of music is playfully interwoven into the narrative, whilst his unique, deadpan voice discusses the failures of political systems and ideologies.

In his 2004 film, The Power of Nightmares, his most remarkable piece of work to date, Curtis debunked the myth that al-Qaeda was an organised global network posing an apocalyptic threat to the West, which, in a post-9/11 context that saw governments and mass media exaggerating al-Qaeda’s size and influence, was a bold message. Time, of course, has been incredibly kind to his analysis.

After a six-month chase attempting to secure an interview, I finally came into contact with him at the Latitude Festival where he was discussing static culture, his latest area of fascination. After forcing a written invitation into his hand, I met him not long afterwards at the British Library in central London. He turned out to be engaging and personable, veering frantically from one topic to another, remaining insightful and charming throughout.

What follows is an extract from a long conversation regarding his work, politics, journalism and our willing acceptance of the computer systems that guide our choices.

[…] So this idea that computer systems are dictating too much to us, which is reducing our imagination to see a future … how are we going to break that?

I have a theory that people might get fed up with computers, quite simply. I think the interesting thing about the Edward Snowden case is it makes you realise how much the cloud thing on the Internet is a surveillance system. I don’t mean it is a conspiracy. It’s sort of like you are part of something you might not necessarily want to be part of. And I just wonder whether, in fact – the Internet won’t go away – but its magic will disappear. Our delight in screens that we can go like that with [AC scrolls with fingers] will disappear. It will become a functional local library, coupled with sort of weird people chatting online, and the stuff that you don’t know is true or not, and another culture will arise separately from it, which might go back a bit to books and newspapers. I still think newspapers might come back if they could do some good journalism. I mean the reason we don’t read newspapers these days is because the journalism is so boring.

I’ve heard you lament the fact that the financial crash hasn’t been presented to us in understandable terms by the media …

I think this is a really interesting thing. So much of the way the present world is managed is through – not even systems – it’s organizations, which are boring. They don’t have any stories to tell. Economics, for example, which is central to our life at the moment … I just drift off when people talk about collateralised debt obligations, and I am not alone. It’s impossible to illustrate on television, it’s impossible to tell a story about it, because basically it’s just someone doing keystrokes somewhere in Canary Wharf in relation to a server in … I dunno … Denver, and something happens, and that’s it. I use the phrase, ‘They are unstoryfiable’. Journalism cannot really describe it any longer, so it falls back onto its old myths of dark enemies out there. Whether those dark enemies are Al-Qaeda, Soviets, or criminal masterminds who are grooming children for white slavery. All of which may or may not be true, but it’s what they fall back on and don’t report. I mean, the Guardian made a noble attempt to describe that company, Serco, which no-one has ever heard of, but which is an incredibly powerful outsourcer of government things, and it’s been doing some not very good things recently, but it’s incredibly boring and that’s the problem. Journalism is a trick to find a way of making the boring interesting, and as yet it hasn’t found a way of doing it.

Journalism isn’t describing to us the world as it is, which we know is there, but we want someone to make sense of it for us. We want someone to explain to us about what’s going on with the banks, but in ways we can get emotionally. We want someone to describe to us who these strange people are like G4S, who constantly turn up doing odd things like at the Olympics and then disappear again. We want people to notice that.  Just like we want music that will actually take us out of ourselves and make us feel not alone and emotionally part of something. Both music and journalism are totally failing to do that at the moment. And it’s a moment in history when they haven’t caught up, maybe something else will catch up and describe it to us.

Will journalism catch up?

Yeah, of course it will, what else is there? I mean I don’t buy this internet … the internet is just a new system of delivery, it’s not a new content thing. Of course journalism will catch up, it’s just no one has found it yet. It’s a way of connecting with you and me emotionally.

So, what are we waiting for? Are we waiting for a particular journalist with an idea?

Yeah. Or a group of journalists who will find a way of connecting with us. It happened back in the 60s with what was called “New Journalism” because they had the funny idea that you spend time with someone and you write about what was in that person’s head, and then you described it like a novelist. And that connected with the new sensibility. 

Well, the new sensibility at the moment is a sense of isolation and a sense of, “What the hell is this all for?” and a sense of uncertainty and anxiety. That’s what is around at the moment. No one has captured that yet in a way that makes you feel connected to what they’re saying. Instead what we have are these people who play on the anxiety which is not right, you know: “All the world’s going to die … Al-Qaeda is going to kill you with an atomic weapon coming up the Thames on a boat.” They are taking serious issues but amplifying them to try and scare you to get your attention, but in fact, what they should be doing is trying to connect with you emotionally and actually describe the world and help you understand it more. Then it excites you and frightens you; I’m not pleading for a boring journalism, I’m pleading for a better journalism. And I think the same is true of music, which takes you out of yourself.

What about The Power of Nightmares? The central theme of that is that Al-Qaeda and terrorism isn’t as apocalyptic as some suggested. I think time has been kind to that message. At first people were probably thinking you were …

Exaggerating?

Yes, but I think that film stands up.

I would argue that what I said back then absolutely stands up, despite all the horrors that have happened. What I was saying has absolutely been proved by the facts. There is no organised network; there is a serious, dangerous and very nasty threat from small groups of disaffected Islamists who have no real form of connection with each other and are inspired by a corroded and corrupted idea, and they are actually on the decline. That doesn’t mean it’s not a serious threat.

Also, a lot of my colleagues – on the basis of absolutely no evidence – created a complete fiction of this apocalyptic, organized network and they should be ashamed of doing it.

What do you think about the rise of – it’s not really a rise – the presence of the EDL and this anti-Muslim narrative that stemmed from a lot of what you were trying to push back against?

It’s not that strong. It’s stronger in France than it is here, and also again, so much of that is disaffection with unemployment and uncertainty. I mean the real problem of our time is the uncertainty that people feel, and no politicians are really dealing with it, so of course they take it out on easy targets like that. UKIP, I don’t think is a significant force, I really don’t. The really interesting thing of our time is not what we had back with Al-Qaeda, which is journalists trying to tell us all these fears. It’s just the general sort of emptiness and unknowingness, politicians not having the faintest clue what’s going on. It’s a sense of drift that no one has really got hold of now.

Going back to music and journalism, we don’t have the sense that anyone is reporting to us, or communicating to us, what is really going on in the world at the moment. We have got this idea that we have screens around us all the time and we see everything and we somehow know everything that is happening in the world because it is reported to us 24 hours a day but actually we also have a sense that we haven’t got the faintest idea of what’s going on. Things just come and go like that, and no journalism is making sense of it. It reports it to us, but it doesn’t make sense of it. Music and culture is absolutely failing to create a framework of sensibility for us to understand it. It’s just rehashing stuff from … I don’t know … Marcel Duchamp in 1919. 

Again it’s in a static way, because no one knows what’s going on. The fears have diminished because that was a reaction. Now we’re in this “I don’t know what’s going on” so let’s just go listen to Coldplay… [laughs] Not that there is anything wrong with Coldplay.

So, when the financial crash happened, I expected more socialist ideas to start penetrating the narrative, and I don’t really think that that has happened. Why do you think that is?

That’s one of the great shocking things of the last decade … I mean, it’s astonishing. The failure of the left to engage with what happened after 2008 is just mind-boggling. They should be absolutely ashamed of themselves. It’s amazing, they just go around mouthing stuff with absolutely no way of explaining what’s going on in a way that doesn’t sound again a bit like Savages. They are mouthing the sort of stuff that was said in the 1980s about Margaret Thatcher. 

We are in a genuinely new world at the moment and no one knows its dimensions and they have to come up with something. The Occupy movement absolutely astonished me. They had a brilliant slogan, the 99 and 1 per cent – that was the first time I thought someone’s got it, but then they completely blew it. I went to their meetings and they have been completely captivated by this pseudo-managerial theory of a new kind of democracy where there are no leaders and everyone sits around gesticulating if they disagree. It was one of the most absurd ideas in modern politics.

If you are dealing with questions of power you have to understand power, and you can’t pretend it doesn’t exist, either on your side or their side. The point about managerialism is it pretends power doesn’t exist; it’s a way of keeping you in your place. For them to buy into that was one of the most cosmically stupid ideas I have ever heard in my life. If you want to change the world you have to deal with questions of power: the power of the ones who don’t want you to change it, and the power of those assembled on your side who do want to change it. Humans are humans, and power is a really complicated thing and you can’t ignore it and by ignoring it they let everything go, so now there is a vacuum, an absolute vacuum. We have alternative comedians telling us everything is shit … well that’s nothing! I know that.

In The Century of the Self you discuss this idea that politicians interview the public through focus groups and then use the results to dictate policy. That seems the wrong way round to me.

If you like this, then you’ll like that. It’s the same thing. It’s what’s called a market idea of democracy, and the market idea of democracy says that real democracy is not about taking people somewhere else: it’s about finding out what they want and giving it to them. But in market terms, that’s absolutely right. I don’t have any problems with the free market, it’s fine, that’s what it does. It’s extremely appropriate in finding out what goods you want and giving it to you and also knowing what you might like and giving it to you. When it is then transferred into politics, that’s when the problem happens. When it is then transferred into culture and journalism, everything just becomes reinforcing. It becomes like a feedback loop. So in the BBC we do this, we know what journalism works for people and we give more of that and it becomes … it creates that very static world but that’s not necessarily the fault of the system.

There are other ways of doing journalism, it’s just that journalists don’t know how to do it any longer because they haven’t really got the new apparatus to understand and describe the world to us. So they rely on just going to ask you. I know this myself; a lot of journalists I know in television and in print go on about, “Oh if only we didn’t have this terrible system where we are forced to do these focus groups and stuff we could do much better journalism”, then you say to them, “Well, what sort of journalism would you do?” And they come out with the same old stuff: that bankers are bad, spies are terrible, and you think actually maybe this is all a bit of a smokescreen to disguise the fact that you’ve sort of run out of puff yourself and everyone is waiting. I have this terrible feeling that we are all waiting for something new, some new view of the world to come along and that maybe we are sort of at the end of our own cold war at the moment.

All the institutions are declining. Universities are declining, spies are completely useless, and banks were our last shot at giving us cheap money and keeping things going when industry collapsed. It’s all a little bit like these giant institutions are all declining, a bit like the eighties, and we are waiting for something new to come along and culture is letting us down. I mean everyone is obsessed by culture at the moment and it’s supposed to be radical. I moved into this world a bit with the Massive Attack thing and they all think they are so radical. They are not radical at all; they play back to us old ideas all the time. I mean all the so-called radical art that was around in the last two Manchester festivals I’ve been at could have been done in 1919 by Marcel Duchamp. That’s not to say it’s bad, but to pretend that it is somehow a new radical vision of the world is wrong and it’s reinforcing what’s been around since the early days of modernism. Some of it is very good – Savages are very good – but it’s been around. It’s enjoyable and it’s fun, but this idea that somehow art can point the way to the future is not what seems to be happening to me at the moment. Art is stuck in the past, just like music is stuck in the past, and journalism is stuck in the past. Something will happen; it’s quite an exciting idea, really.

One doesn’t know what it would be, and it may be right at the margins, it may have nothing to do with journalism. I’m making this up because my dates are so bad, but if you were around in the 1860s and you have these people wandering around going, “We have this idea of history, that it is like a science, and that you can analyze it and logically that means that the class structure will happen like this and we will have Marxists …” You’d think they were nutty, that they were geeks. They were probably the geeks of their time, they were right at the fringe. I think maybe we are far too much of the establishment. All these radicals – including myself – we think we are somewhere radical but actually we are deeply, deeply, deeply conservative at the moment. And what has a veneer of radicalism is actually possibly the most conservative force at the moment. By that I mean radical culture, art, music and a lot of radical journalism and radical politics – whilst none of it is bad – its mechanisms and ways of seeing the world are borrowed from the past and it’s stuck in the past. It’s stuck with a nostalgia for a radicalism of the past and that’s not the radicalism that’s necessary.

Yes there is a lot of poverty around, yes there are a lot of people being thrown out of work – I know all that – but the really big thing that is in the back of most people’s minds at the moment is a sense of total uncertainty, loneliness, isolation and not knowing where they’re going or what they’re doing. A sense of unconnectedness. And if you really want to change the world and make it better for those who are out of work and who are poor you have got to get the bigger group on your side and the way you get that bigger group on your side is by connecting with those uncertainties in the back of their minds, the loneliness, the uncertainty and the sense of isolation that is really big at the moment. And no one is doing that, no one has got a music, no one has got a journalism, a politics, a culture that heartfully connects with it. People are yearning for it; I know it, I feel it. I like the culture, I like reading some good journalism, I like going to see bands but none of it goes, “Yes, that’s it, that gets me.” That’s what I think, and we are just waiting for it. It’s quite exciting because you know it can’t go on like this. Something is going to come along.

I found it really interesting in The Century of the Self, this idea that New Labour were seen as visionary, but they were just charlatans in a way, weren’t they? They stole a lot of ideas from the Democrats in America. Peter Mandelson, for example …

I wouldn’t say they were charlatans, I would say they were opportunists. They were technocrats. Basically they were technocrats who stole an ideological cloak of Labour, and draped it over what was really … They are managerial technocrats, because that’s really all focus groups are. It’s a managerial idea. It’s going, well listen, we’ll just ask them what they want and give it to them and that will make them happy and the key thing is to go and identify the swing voter, that’s the key technocrat thing. They went and identified who were the swing voters; they’re the ones who never make up their minds. Ask them what they want, give it to them, and bingo – you’ll get the swing voters on your side. Which means that a great deal of your future is decided by indecisive people in Uttoxeter.

Philip Gould: do you think he was the thinking behind the New Labour movement?

Yeah, he was clever; he was the technocrat. Because Gould spotted early on the whole idea of focus groups, and how you could extend them. What went wrong with New Labour, which I think is quite interesting, is that Blair got fed up with focus groups and started to do something off his own back, which was Iraq. It was almost like he got fed up and felt imprisoned by them. No one has ever explained to me why Blair went to war in Iraq. My own personal theory is that he got so fed up with having to focus group everything that he just thought, “Oh sod it, I’m going to do something off my own back” and then he discovered he could. Because the really interesting thing about that time – it’s really odd – you have this obsession with focus group politics, which is that you have to ask people what they want otherwise they will turn against you and you will lose power. Yet at the same time, you can decide to invade Iraq, two million people can come out onto the streets of London, you go, “Fuck off!” and they go “Alright” and you go home. I mean where is the power in society?

It’s the same with the economic thing, isn’t it? We are told this is what’s happened so we have to accept X, Y, Z cuts in this area.

But that’s because the left hasn’t come up with an alternative theory. In a way you can’t really blame people for going, “Ok” because the job of the so-called left is to come up with an explanation that makes me think “Oh yeah, I get it and that’s wrong. I must do something about it. I get it, they have simplified it down to me, and I get it”. But if you start talking to me about austerity versus collateralised debt obligations and was the austerity to do with the banks being bailed out or because Gordon Brown spent too much money on hospitals? I just drift away. I go and watch The Departed on Channel Four and think about zombies.

Why do you think people at ground level seem to have more anger and ire towards what they perceive as feckless welfare claimants at the bottom, than they do where the real problem exists, at the top? Why do you think there is such a disconnect?

Because it’s a very easy thing to do and it’s a traditional thing on the right to do, to blame others for stealing from you. All the left has got to do is find an equally simple way of explaining what is going on at the top and re-divert your attention and anger to that, but they are not doing it. I have no idea why they’re not doing it. I’m not a politician; I’m a journalist. It’s not my job to do it and especially with the BBC it’s not my job to do it, but I am absolutely astonished that they’re not doing it. They really should hang their heads in shame, because it means they are not up to their jobs. If the right can do the divide and rule thing which you have just described of getting lower middle class people to get pissed off with the working class claimants, I’m afraid the left’s job is to take that anger and uncertainty which the right are accessing and redirect it to better and more purposeful – from their point of view – targets. And they are not doing it, they are just not.

The right seem to set the terms of the debate and the left operate within it, don’t they?

Yes, you have to set your own terms and that’s all it is.

What’s next for you, then?

I think I’m going to do a history of entertainment, and the relationship between entertainment and power. I am subtitling it the rise of the media industrial complex, from gangsters and Jimmy Savile in Leeds in the 1950s, to YouTube and Google in the present day, via Rupert Murdoch. Entertainment and Power: The Rise of the Media Industrial Complex.

There you go.

Link: A Modest Utopia: Sixty Years of Dissent

Few small magazines remain so for long. A handful get larger over time; most die at a fairly young age. One exception is Dissent, the independent left-wing quarterly that was founded in the dark days of the McCarthy era by the literary critic Irving Howe and the sociologist Lewis Coser, which will celebrate its sixtieth anniversary later this week. For decades, Dissent’s subscription list has hovered around the mid-four figures, never going much higher or lower; today, it has just over ten thousand followers on Twitter, its editors never pay themselves a penny, and its writers don’t make a whole lot more. Creatures that function at a consistently low metabolic rate are prone to being picked off by predators or to simply ceasing to move. And yet, Dissent has survived its founding editors, eleven Presidencies, the rise and fall of neo-conservatism, Ramparts, The Public Interest, Talk, and George. The reasons for this longevity are more interesting than sheer persistence.

Howe and Coser belonged to the anti-Stalinist left, which by the early fifties placed them in one of the tiniest and most precarious minorities of all. They started Dissent to fight battles on all sides—against McCarthyism, against Communism, against the drift toward quiescence and conformity among other intellectuals. As to what the magazine was for, in the second issue, in the unpromising spring of 1954, the editors wrote, adapting Tolstoy, “Socialism is the name of our desire.” They started with just enough money to put out four issues.

There’s something absurdly, quixotically ambitious about launching a socialist magazine in America at any moment, but it’s hard to think of a less favorable time than sixty years ago, with Eisenhower in the White House, Joe McCarthy at the height of his demagogic power, and Stalin’s corpse barely cold. Yet Dissent outlasted predictions of its early demise and attracted writers and thinkers well beyond its tiny size—Norman Mailer (“The White Negro,” on the phenomenon of the hipster, created a minor sensation in 1957), the activist and writer Michael Harrington, the social critic Paul Goodman, the art historian Meyer Schapiro, the political philosopher Michael Walzer. Dissent’s socialism was never a doctrinaire program. It was closer to a spirit of criticism, a vision of a more just society, an openness to movements of democratic change, a refusal to accept the given on its own terms. Howe liked to use the word “utopia,” advisedly. He didn’t mean the paradise of dogmatists, but something much more modest and humane—a yearning for what is not but may be. “Whether a real option or a mere fantasy,” he wrote in his autobiography, “this utopia is as needed by mankind as bread and shelter.”

I began writing for Dissent in my twenties, when Howe accepted an essay I’d sent in cold, about the West African country where I served with the Peace Corps. “Letter from Togo” didn’t exactly fit with the magazine’s longstanding concerns, but I will never forget the thrill of reading Howe’s typewritten letter of acceptance, which informed me that the piece would have to be cut, and enclosed his phone number. He was an immensely accomplished, formidable, busy man—the model of a public intellectual in a way that, by the nineteen-eighties, hardly existed any longer—but he told me to call him collect. There was something of the magazine’s egalitarian spirit in this, and Howe became the closest thing to a mentor that I ever had, with most of the relationship taking place in my own mind, and outlasting his too-early death in 1993, at seventy-two.

I wrote for Dissent for the next fifteen years, and eventually joined the editorial board. Living in Boston at the time, I would take the train down to New York for quarterly meetings, where two generations of leftists—the Old and the New—sat around a large conference table at the New School and argued about democratic socialism, American and world politics, and the latest issue of the magazine, some of them mumbling, others ringingly clear, a conversation that appeared to have been going on since well before my birth. They all seemed to have known each other for decades, knew one another’s biases and weaknesses, could predict every political position and rhetorical move, and put up with one another in the way of a family whose common past and shared bonds outweigh every annoyance. I never felt younger than in my late thirties on those Manhattan Saturdays at Dissent editorial meetings. I wondered how long the magazine would last.

In the early aughts, at board meetings and holiday parties, I started noticing the presence of people even younger than me. There was a new generation at Dissent! This astonished me. It was strange enough that someone my age had found his way to a publication born out of the Shachtmanite schisms of the early fifties—here were men and women in their twenties, funny, lively people, good writers, interested in the labor movement, but also in anti-corporate feminism, TV culture, Green politics, and Central American teen-agers. With a new editor, the historian Michael Kazin, they would breathe new life into Dissent long after the deaths of its first editors and writers, who had always labored under heavier historical burdens.

Perhaps it should not seem strange that this tiny left-wing quarterly should be celebrating its sixtieth anniversary on Thursday night. Unlike less skeptical publications, Dissent never expected the socialist millennium to arrive in a blinding flash of light, so it had the stamina to outlast several generations of mirages and disillusionments. Unlike less committed publications, Dissent never lost sight of the vision of a better world, so it kept at the steady work long after others would have turned elsewhere with the shifting winds, or quit altogether.

Every generation produces young people with the kind of idealistic, undogmatic politics that animates Dissent. For this reason, its importance seems to me incommensurate with its size and even its readership. A modest utopia is still as necessary as bread and shelter, even today. Maybe today more than ever.

Link: In the Dark

Looking back at The X-Files on its 20th anniversary.

What I remember first about that year is the darkness of the nights. We would pile into a car and if we all had late enough curfews we would drive out of town, past the last light, on some country road we didn’t know the name of, fields and stars as far as we could see. When there was a lightning storm on the plains we’d drive toward it, watching moon-colored omens craze across the sky; otherwise light was what instinct led us to avoid. My friend had an ancient and indestructible Oldsmobile the color of a polluted lake and we would drive it as fast as we could down unlit alleyways and crash into other people’s trash cans. We would buy grape Slushes at Sonic and sneak into the park off Canterbury Avenue, across from the golf course, where we’d sit among the trees and tell each other stupid and wonderful things. We had been there as children hardly any time ago; now, in the dark, it was transformed. If you’ve ever been 17, and especially if you’ve ever been 17 in a small town, you’ve had your own year of dark nights. But when you are 17, and especially when you are 17 in a small town, you believe that there is opening before you a mysterious and uncharted realm that exists for you alone. You and your friends are conspirators in a shadow country.

I didn’t watch The X-Files, which premiered that fall, 20 years ago now, on September 10, 1993. I was wasting time at an advanced enough level not to need help from television. But The X-Files was there, in the background, for that year and for several years after it. In my memory of that time it seems to be running, muted, on every TV in every room I enter after dark. We are huddled around a phone trying to figure out whether there are such things as girls we might plausibly call, and in the other room we see the back of my friend’s mother’s head and Mulder’s and Scully’s faces staring out at us. Years later, when I watched the show in sequence, I never minded the incoherence of the main story line, which infuriated longtime fans, because I was already used to imagining the series as a montage of empty atmosphere, and in fact I had fallen half in love with it as such. The show’s cinematography, lush by today’s standards and astonishing in 1993, looked shadowed and moody, and because Scully’s expression was a striking combination of horror and numbness and bravery and trauma, none of which we had experienced and all of which we wanted to pretend we had experienced, nothing could have seemed more natural than that the show would move along the margins of our secret world. Although if you had asked me whether we were the border surrounding it or it was the border surrounding us, I would not have known the answer.

The names alone were thrilling — fittingly, since as I later learned no show was ever less eager to violate its characters’ anonymity. Mulder and Scully: Somehow they were both left-field and all-American, weird and out of time and stylish. They could have been in Bringing Up Baby or they could have been rock stars or they could have been murder victims in a film noir. (That year, I went to every old movie that played in our town’s converted vaudeville theater.) And they were deep, they were haunted with overtones. Mulder with its echoes of mull (to ponder) and molder (to decay, to turn to dust), and Scully with its obvious skull. Not watching the show, I still knew its major gimmicks, that the heroes were FBI agents investigating the paranormal, that Mulder was the intuitive one who believed in telepathy and aliens and Scully was the skeptical one who didn’t, and it resonated because something like that conflict was at work in our lives, too. If we made fun of The X-Files for the simplicity of its contrast between “belief” and “science,” it was because our own experience was just that simple, and because unlike Mulder and Scully we had no language wherein to discuss it.

I took pride in being furiously rational. At the same time I often felt that my sanity was a mirage and that with one second’s concentration I could dispel it forever, like smoke. Mulder and Scully argued about whether the craft that went down in the dark woods of Wisconsin was a UFO, while we drove at midnight to the old Robin Hood Flour plant by the train tracks, a looming tower of rusted cylinders deserted before we were born, and argued about whether to break in. Mulder and Scully uncovered monsters in the timberlands of Oregon and Virginia and Maine, while in Oklahoma we told stories about the murderous spirit who haunted the reservation in the form of a beautiful woman. Known as Deer Woman, she could run alongside vehicles on the highway and if you caught a flicker of movement and looked over into the next lane and made eye contact she would steal your soul, which sounds comical until the moment when you are painfully young and driving at night down a road with no other cars.

We had been lied to so often that we spent half our time seeing through lies, but inexplicable things still happened. We had been told not to understand things we understood, and at the same time we knew that there was more to the world than anyone was willing to tell us. The truth was somewhere, and perhaps it was mundane and perhaps it was magical, but then when you are 17 in a small town, magic is never very far out of reach.

And then there was the big thing, the one that was omnipresent in our town, the one The X-Files groped toward but never quite knew what to do with. When our Life Sciences class arrived at the unit on evolution, our teacher, who was also and primarily the wrestling coach, made it clear that he was continuing under protest. He held a piece of chalk as he said this, and stabbed mildly at the air with it.

We were all free, he said, “to not necessarily buy into what’s in the book.” Half the class nodded and looked grim. I remember with a vividness that makes my stomach drop chasing after a girl to a raft retreat in some hills three hours from town, my first experience with real evangelicals. My friend and I lost our way looking for the cabin and arrived in the middle of the night. There had already been some kind of bonfire and a sing-along with the youth leader’s guitar and now the teenagers were all spread out in the dark summer air, under humid masses of trees, communing with the Spirit. They each held an arm up, unsteady antennas. There was excitement when we appeared because an angel had come down to dance with one of the girls and we were the first audience for the story. No, they insisted, you couldn’t see the angel, but you could tell it was there, she wasn’t just reaching her arms out, she was holding on to something. I went rigid with contempt, at which point the youth leader, whose name was R.J., got out his guitar again and tried to win me over by sing-talking about Bono. I spent the night in a rough wooden bunk in a room with five or six earnest boys from farm towns, across the house from the girls, and if this had been an X-Files episode, if the roof had split open and the floodlights of a UFO pounded down on us, I’m not sure whether I or they would have felt more vindicated.

Of course what I didn’t know then was that The X-Files rigged its own central question, that the dichotomy of science vs. belief never resolved in favor of the former. Scully was always wrong, always, and most episodes let you know she was wrong before she even appeared onscreen, before she had a chance to speak. The liver-eating immortal bile-mutant would slither through the air conditioner shaft toward its victim in a shot whose objectivity was not tainted by the presence of a perspective character, and then we’d cut to FBI headquarters, where Mulder, in his Ambien-furred morning voice, was saying, “Hey, Scully, what if the killer’s some kind of bile-mutant,” and Scully would look stricken and respond with a theory about swamp gas or atmospheric contaminants, a theory so self-evidently lame that the viewer was not even expected to remember it. Scully’s wrongness and the show’s determination to see the paranormal everywhere unwittingly reversed the whole polarity of the series: It became clear before long that what Scully meant by “science” was not “the scientific method” or “testing hypotheses based on observable evidence” — an approach that would lead you to believe in ghosts by about the 30th time you saw one — but simply “the canon of currently accepted scientific knowledge,” which bizarrely became the show’s most tenuous article of faith.

But as I said, I found that out only later, after I’d left my hometown for good. And by that time I was already discovering how wrong I often was, too.

You could argue, and I would almost agree with you, that beneath all the obvious post-Watergate, post-JFK assassination government-conspiracy machinery, the real subject of The X-Files' stylized paranoia was the American city's anxiety toward small towns. The show out-noired noir by recognizing that the most extreme context for modern alienation was not the mean streets of the detective story but a white-collar bureaucracy that extended infinitely above the main protagonists — literally into space — and that threatened to control them without their knowing how or why. But Mulder and Scully spent most of their working hours, especially in the stand-alone “monster of the week” episodes that made up the bulk of the series, pursuing mysteries in Lake Okobogee, Iowa (where Ruby Morris was abducted by aliens in “Conduit”), or Delta Glen, Wisconsin (where the agents investigated a cult in “Red Museum”), or Miller's Grove, Massachusetts (where cockroaches attacked humans in “War of the Coprophages”). The strangeness and isolation of small towns was a theme the series returned to again and again, enough that Darin Morgan, the show's cleverest writer,2 could already subvert the concept by the second season, when, in “Humbug,” he sent Mulder and Scully to a town populated by circus freaks whose behavior was surprisingly normal.

In this show about not knowing, then, the agents confronted two distinct sets of frightening unknowns. On one side was the shadow government represented by the Cigarette-Smoking Man. On the other was the evil that lurked beneath the surface of every American hamlet. Often, Mulder and Scully’s role was simply to act as interpreters between their own antagonists, rendering chaotic eruptions of small-town horror comprehensible to men in marble corridors in D.C. Think of all the shots of the heroes in their oversize ’90s glasses laboring at their field reports, or again of all the shots of them cruising through a hostile rural enclave in businesslike topcoats and a sensible rented Buick.

The X-Files was probably the first great TV show to be galvanized by the Internet and the last great TV show to depict a world in which the Internet played no part. Its fan culture found a home online early in the series’ run, but though the role of computers became both more central and more realistic as the show progressed,3 it was possible at least through the fifth season or so to see the Web as a distraction, something with no important bearing on anyone’s life. Remember when you could turn it on and off? We often credit the Internet with the disintegration of the old American monoculture, because it liberated us to be absorbed by our own interests, to spend our time downloading obscure anime, say, rather than caring about Madonna or ABC. But the Internet also created a new type of monoculture: It made every place accessible to every other place. We could no longer assume that the peculiarities of our own environments were private. Our hometown murders might appear on CNN.com. The world of small-town X-Files episodes is still that older world of extreme locality, where everyone in town grows up knowing that the rules here are different and we handle it ourselves. Children vanish or trees kill people or bright lights appear in the sky, but there is no higher authority to appeal to and it has nothing to do with what goes on 10 miles down the road. In my hometown we knew that the spillway by the lake was where you painted a memorial if your friend was killed in a drunk-driving crash. It’s the same thing. Here is here. And this, it goes without saying, is just the opposite of the here-is-everywhere world inhabited by the conspiracy, which is global in scale, utterly connected, and ruled by pseudonymous men whose flat-affect, no-eye-contact meetings were almost the personification of a chat window.4

The small-town grotesques in the series lived with secrets. The Syndicate curated them. Almost more than belief and science, the sustaining tension in The X-Files is between two manifestations of the American psyche, one fading and the other just taking form, as they encounter one another for the first time and recoil in horror.

Link: A Nation’s State

Japan’s tormented relationship with its modernity

Tokyo these days looks like Asia’s oldest metropolis—at least to those accustomed to the shinier buildings, grander avenues, and the more garish newness of Shanghai. Compared to the upstart countries of Asia today, much of Japan presents a spectacle of aged modernity: brown plains marked by a clutter of small houses, and crisscrossed by giant power pylons. Even the wild beauty of the country’s coastal areas is now touched, after the nuclear catastrophe at Fukushima, with menace. And it is with some shock that you recall that Japan was where once the future lay, before its bubble burst in the early 1990s, and the country, pushed inward by adversity, became a strange absence in our lives.

While Japan languished within a low-growth economy, its poor cousin of the 20th century, China, unexpectedly became Asia’s pre-eminent economic power, and its old domineering mentor, the United States, suffered a severe economic and geopolitical diminishment. Now, insecurity caused by the rise of China, and America’s growing inwardness, is driving neo-nationalists in Japan to risky geopolitical and economic experiments. China, in turn, seems fully committed to anti-Japanese nationalism—violent demonstrations, often abetted by the communist regime, erupted in 2005, 2010 and 2012.

Japan’s new prime minister, Abe Shinzo, has been promoting an ambitious plan of national renaissance, which looks, particularly to the country’s alarmed neighbours, like revanchism. Though well below the highs of the early 1990s, the stock market has responded keenly to ‘Abenomics’, his strategy to kick-start the Japanese economy, which combines devaluation of the yen with increased public spending on infrastructure, and aggressive quantitative easing. Emboldened by early success, the prime minister, a conservative nationalist, assures his rapidly ageing electorate that “Japan is back.”

A range of opinions, from Joseph Stiglitz to the Financial Times and The Economist, seems to agree. Certainly, after several faces blurred by revolving doors, Japan has a leader who commands international name recognition. Interviewed gingerly in Foreign Affairs about his bold policies, Abe has been hectically touring one Asian country after another, drumming up business for Japan—and support against China—in countries as different as Mongolia and Myanmar. Undaunted by the disaster in 2011 at Fukushima, where an earthquake and tsunami devastated a nuclear plant, Abe signed a $22 billion deal in May to build nuclear plants in seismically active Turkey. India’s poor safety record has not slowed down his pursuit of a lucrative nuclear deal with the Manmohan Singh government. Reports in early August of fresh radiation leaks from Fukushima, the world’s worst nuclear crisis since Chernobyl, did not seem to have dented his confidence; if anything, it was boosted further this September by Tokyo’s successful bid for the 2020 Olympics.

Actually, Abe’s promise of Japan’s return has begun to seem a bit minatory. It is also less than clear whether, as Abe desires, Japan will ever abandon its status as a great pacifist power, assume leadership of an anti-Chinese coalition, or become economically regnant again. In any case, it seems unlikely, to take a matter closer to my heart, that Sony and Panasonic could ever regain the lead in consumer electronics that they have lost to Apple and Samsung. It was, after all, Japan’s consumer products, its simple conveniences, gizmos and gadgets that once tantalised many of us in India. After the war, Japanese consumers had moved quickly from cherishing the “three treasures” of domestic living in the 1950s—black-and-white television, fridge and washing machine—to coveting higher things in the 1970s: an air-conditioner, a car and colour TV. The rest of us were decades away from attaining this holy trinity of consumer capitalism. In the 1970s and 1980s, Japan had been, for many middle-class Indians, synonymous with Canon cameras, JVC stereos, and Sony televisions, which the luckier among us hauled back, in flimsy cardboard boxes secured with nylon strings, from trips to duty-free utopias in the Persian Gulf. Suzuki was the respectable other half of Maruti, rescuing the “People’s Car” from the fantasies of the maladroit dynast Sanjay Gandhi; Japanese expertise with Hero Honda motorcycles also helped fuel social mobility in the 1980s among the lower middle class.

Japan’s economic heft was then provoking fresh hallucinations of the Yellow Peril in the United States, and Japan itself was about to reach the limits of its peculiar model of cartelised capitalism. But Japanese philanthropy in Bihar’s antique Buddhist heart, and tourists garlanded with expensive cameras in Varanasi and Agra, spoke of the lone Asian nation that had miraculously conquered poverty, and achieved high literacy and long life expectancy.

VISITING JAPAN THIS YEAR, however, I felt pulled back in time. I had over-prepared, in a way, for this trip, reading widely, and seeking out authorities on the country, for several years. Still, I was surprised and often baffled by its isolationism, over-regulated economic regime, monopolies and inefficiencies—visitors will find it easier, for instance, to procure a data connection on their smartphone in Laos than in Japan, and a SIM card for voice calls is simply unobtainable.

The Japanese were still rich. But why did their houses look so flimsy, their supermarkets so poorly stocked, and their public architecture so unprepossessing? As early as the 1920s, Japan was introduced to the material culture of capitalism, and its attendant phenomena: the consumption of cars, radio, films, magazines, the rise of the nuclear family, and the commercially motivated exaltation of youth and romantic love, and Western mores; it was also then that a popular culture grew around the new urban middle class, featuring the ubiquitous so-called salaryman (sarariman) and the hard-working white-collar women—moga, or modern girls, who were, in the overheated Japanese male imagination, as prone to retail kisses as Western clothes.

But Japan’s modernity, famously encrypted in neon after the war, seemed to have visibly stalled in the 1970s and 1980s. In Tokyo, the decades had petrified into buildings of startling ugliness and vulgarity, given an interesting weirdness only by the heavy fluorescence of the evening, and by young men and women with stylised haircuts. The uniformly gray commuters added to the strange impression of sameness and exclusivity in a city that remains defiantly non-multicultural in the age of globalisation, where very few people speak English or look foreign; the Pakistani I met running an Indian restaurant looked subdued by his alienness as much as by his subterfuge.

There were many signs of a still impressive physical and social infrastructure, such as the serenely swift Shinkansen railway, matched by the quick courtesy and cheerful goodwill of ordinary Japanese. The temples, shrines and gardens of Kyoto and Kanazawa rapidly eradicated all fear of disappointment; and even the careful partitions of bento boxes spoke of an unatrophied aesthetic sensibility. But there was no avoiding the sense of a long malaise, the product of two previous ‘lost’ decades, during which pachinko, a form of pinball, had become one of Japan’s biggest industries.

To be in Japan was to see how the intimations of decay had deepened despite its flourishing soft-power exports worldwide, of manga and anime, and the insistent chirpiness of Pokemon and Hello Kitty. The human toll of the slow economic implosion showed in the statistics about suicide (one every 15 minutes), child abuse (a fourfold increase since 1999) and rising domestic violence, and in the stories in the press about empty rooms where salaried employees with no work were asked to spend their day until they resigned. An estimated one million Japanese people almost never leave the house. Many of those that bother probably do so in order to indulge in the otaku subcultures of obsessively idle young men.

The political consequences of the long economic winter were manifest, I found, in the aggressive self-pity and sanctimoniousness of the neo-nationalists I met. They reminded me of the retailers of Hindutva in the 1980s: the same revisionist energies, invocations of the “national spirit”, claims to extended victimhood and the attempt to mask, with a bogus cultural unity, the inconvenient facts of poverty, inequality, environmental degradation, and social discrimination. The aged were everywhere, as befitting a country with a swiftly declining population, the packed subways and tiny restaurants as though designed for their small frames. The youth, deprived of the stable jobs their parents had, and languishing in cafés with their smartphones or po-facedly working up a racket at pachinko parlours, reminded one that Japan had pioneered what the art historian TJ Clark calls “the essence of modernity”, which, “from the scripture-reading spice merchant to the Harvard iPod banker sweating in the gym, is a new kind of isolate obedient ‘individual’ with technical support to match.”

Link: The Shadow of Ikea-ification Falls On Us All

A recent exhibition, The Whole Earth: California and the Disappearance of the Outside, interrogated what pale blue fragments lie in the wake of the whole earth’s broken promise. Review by Hannah Black

Images of the moon landing in 1969 show American astronauts taking giant steps on new territory. The footage is at once haunting and banal, US colonial and purportedly global, an impossible multiple charge that must be one of the reasons for the psychically protective conspiracy theory that it was all a studio-set fake. On the way back from a visit to Diedrich Diedrichsen and Anselm Franke’s encyclopaedic exhibition The Whole Earth: California and the Disappearance of the Outside at the Haus der Kulturen der Welt in Berlin, I saw a Red Bull ad featuring contemporary space icon Felix Baumgartner, suspended above the earth, about to begin his pointless descent. Where the astronauts ascended, a triumph of western capitalist ingenuity, Baumgartner falls straight down, a single vulnerable body, the melancholic Fordist opening credits of Mad Men played out on a cosmic plane. The main pleasure of watching his fall was not the triumph of science, but its possible failure: the chance that something might have gone fatally wrong.

The Whole Earth, which closed at the beginning of July, was the second big show of HKW’s Anthropocene series, a high concept programme situating the present conjuncture in the long history of the earth: the contemporary moment as the final domination of second nature over first. Scientist Paul Crutzen proposed the term ‘anthropocene’ to indicate a geological age of the human. HKW’s use of the concept exudes institutional confidence, fusing an invocation of contemporaneity, the long sweep of geological time, and the bonus that it excludes absolutely nothing. More problematically, the idea of the anthropocene, like much eco-discourse, is full of hidden fissures: there is, in reality, no unified humanity that confronts nature as one equally guilty and equally implicated global subject. The neutral scientific phrase ‘human behaviour’ stands in for the rapacity of capitalism, naturalising exploitation and ignoring how the causes and effects of ecological change are split along the faultlines of race, gender and class.

Diedrichsen and Franke’s show is a corrective intervention into this analysis-free analysis, critiquing its implied Eden (the intact earth) via an anti-history of pop culture. The show’s premise is the famous NASA image of the globe released in 1968, the blue and white sphere from which Neil Armstrong and colleagues departed and towards which Baumgartner plummets, a telos – from corporate ascent to individual descent – that forms a similar arc to that described by the show’s main argument. The curators interrogate the ‘blue planet’ image as a false holism, an ideological insistence on an indivisible global mankind, and a narcissistic involution of perspective. Like Adam Curtis in his series All Watched Over By Machines of Loving Grace (excerpted in the show, which, like Curtis, draws extensively on Fred Turner’s book From Counterculture to Cyberculture), the curators pin some of the blame on Stewart Brand, the cyberneticist hippie who campaigned for the image’s release and displayed it on the cover of his famous Whole Earth Catalog. This was a guide to ‘sustainable living’, including a mixture of instructions about farming, crafts and so on, and advertisements for the necessary equipment. Both the terminology and the ethos exemplified by the Catalog – ‘sustainability’, ‘innovation’ and ‘creativity’ – have been enthusiastically taken up by current forms of capitalism. The now familiar argument that California hippie culture was not co-opted by neoliberalism, but rather was neoliberalism’s crucible, is evident here, but Diedrichsen and Franke grapple with this ossified counterculture in great detail, exploring its ambiguities and gaps as well as marking its seamless transition into the corporate.

The curators also emphasise the Catalog’s credentials as a kind of precursor to the internet, following the development of the California tendency through into the development of the personal computer and the internet. Brand’s faith in the ‘whole earth’ ideology bears the mark of the hippie obsession with connection, a thread that runs through the show, taking in the Californian turn towards Eastern mysticism as well as technologies of connection such as the personal computer and the internet, which Brand helped to facilitate and popularise. As Diedrichsen and Franke make clear, the hippie discourse of communality lacks any real analysis of capitalism, and uncritically supports togetherness as always already a good thing. Yes, we are all connected, not by cosmic vibrations but by value in motion, and some of us might want less rather than more of this unwilled connection. Among other uneasy connections, the show traces the multiple readings of outer space as nationalist expansion, frontier, global unity and an Afro-futurist repudiation of a racist earth. ‘We have returned to claim the pyramids’, as George Clinton announces in Parliament’s song ‘Mothership Connection (Starchild)’; less lyrically and from an earthbound perspective, Richard Pryor’s roughly contemporaneous joke expresses a similar antagonism: ‘Let’s help those white motherfuckers get to the moon, so they leave us alone.’

Diedrichsen and Franke’s argument unfolds from the centre of the space, where a series of panels discusses the ‘whole earth’ in terms of ’60s anti-state protests, including the Chicago uprisings of 1968 when protesters chanted the slogan ‘the whole world is watching’ at violent police. The slogan is juxtaposed with the Grateful Dead’s description of the lives they soundtracked, ‘You are the eyes of the world.’ This moral surveillance, exemplified in the ‘whole earth’ perspective, converges with the surveillance technologies deployed by the state. Stewart Brand thought that images of the whole prompted eco-awareness by making it clear that the earth is not an infinitely resourced flat plane but instead a compact sphere; at the same time as the earth was apparently offered to all, its borders were sealed shut: planet as panopticon. What is already paradise becomes already prison, in the same moment. Right behind this section of the show, another super-compressed history rifles through images of trashed Hiroshima and briefly describes how Nazi technological innovations were later deployed by the US. The NASA earth image supersedes the mushroom cloud; an image of total destruction is supplanted by a holistic image of total creation. In this reading, the globe picture contains and represses the mushroom cloud, itself an impossible metonym supporting something essentially unfigurable. ‘No image is capable of representing the evil of the Shoah,’ emphasise Diedrichsen and Franke. And yet the (American) protest generation of the 1960s arose from the ashes of the war brandishing NASA’s blue ball as the sigil of an intact world, already complicit with a violent and deliberate forgetfulness that would flower into (among other things) the Cold War. The detail that the first images of space were produced by Nazi V2 rockets is here just a grim flourish.

This breathlessly dense argument is supported with video, text, images and so on, arranged on cheap looking display modules consisting of black tubing, card and trailing wires. The flatpack aesthetic echoes another of the show’s many contentions, that models of ‘self-esteem’ invented and developed alongside systems theory present a monadic and endlessly perfectible persona occupying its own mini-universe: ‘the actualised, emancipated, admired, narcissistically spiced up self has become,’ says a tranche of typically voluble text, ‘an ingredient in many of the intangible products and forms that…play an important role in the post-industrial economy.’ The shadow of Ikea-ification falls on us all. Here it is materially evident not only in the bolted together furniture but in the vast array of ideas and works on show; visits to the show eat up hours as you trail around what feels like an endless industrial hangar, throwing what you can into the trolley: mushroom clouds, cybernetics, dolphins, Larry David, Jefferson Airplane, Parliament, Bob Marley, desert, ocean, outer space… and more… . Perhaps the only reason this show isn’t just a book is that if it were a book it would have to be the internet.

Link: Learning How to Live

Link: The Resentment Machine

The immiseration of the digital creative class.

The popular adoption of the internet has brought with it great changes. One of the peculiar aspects of this particular revolution is that it has been historicized in real time—reported accurately, greatly exaggerated, or outright invented, often by those who have embraced the technology most fully. As impressive as the various changes wrought by the exponential growth of internet users were, they never seemed quite impressive enough for those who trumpeted them.

In a strange type of autoethnography, those most taken with the internet of the late 1990s and early 2000s spent a considerable amount of their time online talking about what it meant that they were online. In straightforwardly self-aggrandizing narratives, the most dedicated and involved internet users began crafting a pocket mythology of the new reality. Rather than regarding themselves as tech consumers, the most dedicated internet users spoke instead of revolution. Vast, life-altering consequences were predicted for these rising technologies. In much the same way as those speaking about the importance of New York City are often actually speaking about the importance of themselves, so those who crafted the oral history of the internet were often really talking about their own revolutionary potential. Not that this was without benefits; self-obsession became a vehicle for an intricate literature on emergent online technology.

Yet for all the endless consideration of the rise of the digitally connected human species, one of the most important aspects of internet culture has gone largely unnoticed. The internet has provided tremendous functionality, for facilitating commerce, communication, research, entertainment, and more. Yet for a comparatively small but influential group of its most dedicated users, its most important feature, the killer app, is its power as an all-purpose sorting mechanism, one that separates the worthy from the unworthy—and in doing so, gives some meager semblance of purpose to generations whose lives are largely defined by purposelessness. For the postcollegiate, culturally savvy tastemakers who exert such disproportionate influence over online experience, the internet is above and beyond all else a resentment machine.

The modern American “meritocracy,” the education/employment vehicle, prepares thousands of upwardly mobile young strivers for everything but the life they will actually encounter. The endlessly grinding wheel of American “success” indoctrinates young people with a competitive vision that most of them never escape. The numbing and frenetic socioacademic sorting mechanism compels most of the best and the brightest adolescents in our middle and upper class to compete for various laurels from puberty to adulthood. School elections, high school and college athletics, honors societies, finals clubs, dining clubs, the subtler (but no less real) social competitions—all make competition the natural habitus of American youth. Every aspect of young adult life is transformed into a status game, as academics, athletics, music and the arts, travel, hobbies, and philanthropy are all reduced to fodder for college applications.

This instrumentalizing of all of the best things in life teaches teenagers the unmistakable lesson that nothing is to be enjoyed, nothing experienced purely, but rather that each and every part of human life is ultimately subservient to what is less human. Competition exists as a vehicle to provide the goods, material or immaterial, that make life enjoyable. The context of endless competition makes that means into an end itself. The eventual eats the immediate. No achievement, no effort, no relationship can exist as an end in itself. Each must be ground into chum to attract those who confer status and success—elite colleges and their representatives, employers.

As has been documented endlessly, this process starts earlier and earlier in life, with elite preschools now requiring that students pass tests and get references before they can read or write. Many have lamented the rise of competition and gatekeeping in young children. Little attention has been paid to what comes after the competitions end.

It is, of course, possible to keep running on the wheel indefinitely. There are those professions (think: finance) that extend the status contests of childhood and adolescence into the gray years, and to one degree or another, most people play some version of this game for most of their lives. But for a large chunk of the striving class, this kind of naked careerism and straightforward neediness won’t do. Though they have been raised to compete and endlessly conditioned to measure themselves against their peers, they have done so in an environment that denies this reality while it creates it. Many were raised by self-consciously creative parents who wished for children who were similarly creative, in ethos if not in practice. These parents operated in a context that told them to nurture unique and beautiful butterflies while simultaneously reminding them, in that incessant subconscious way that is the secret strength of capitalism, that their job as parents is to raise their children to win. The conversion of the hippies into the yuppies has been documented endlessly by pop sociologists like David Brooks. What made this transformation palatable to many of those being transformed was the way in which materialist striving was wedded to the hippie’s interest in culture, art, and a vague “nonconformist” attitude.

It is no surprise that the urge to rear winners trumps the urge to raise artists. But the nagging drive to preach the value of culture does not go unnoticed. The urge to create, to live with an aesthetic sense, is admirable, and if inculcated genuinely—which is to say, in defiant opposition to the competitive urge rather than as an uneasy partner to it—this romantic artistic vision of life remains the best hope for humanity against the deadening drift of late capitalism. Only to create for the sake of creation, to build something truly your own for no purpose and in reference to the work of no other person—perhaps there’s a chance for grace there.

But in the context of the alternative, a cheery and false vision of the artistic life, self-conscious creativity is sublimated into the competitive project and twisted. Those raised with such contradictory impulses are left unable to contemplate the stocks-and-suspenders lifestyle that is the purest manifestation of the competitive instinct, but they are equally unable to cast off the social-climbing aspirations that this lifestyle represents. Their parentage and their culture teach them at once to hunger for the material goods that are the spoils of a small set of professions and to distrust the culture of those self-same professions. They are trapped between their rejection of the means and an unchosen but deep hunger for the ends.

Momentum can be a cruel thing. High school culminates in college acceptance. This temporary victory can often be hollow, but the fast pace of life quickly leaves no time to reckon with that emptiness. As dehumanizing and vulgar as the high-school glass-bead game is, it certainly provides adolescents with a kind of order. That the system is inherently biased and riotously unfair is ultimately beside the point. From the many explicit ways in which high-school students are ranked emerges a broad consensus: There is an order to life, that order indicates value, and there are winners and losers.

Competition is propulsive and thus results in inertia. College students enjoy a variety of tools to continue to manage the competitive urge. Some find in the exclusive activities, clubs, and societies of elite colleges an acceptable continuation of high-school competition. Others never abandon their zeal for academic excellence and the laurels of high grades and instructor approval. Some pursue medical school, law school, an MBA, or (for the truly damned) a PhD. But most dull the urge by persisting in a four-or-five-year fugue of alcohol, friendship, and rarefied living.

The end of college brings an end to that order, and for many, this is bewildering. Educated but broadly ignorant of suffering, scattershot in their passions, possessed of verbal dexterity but bereft of the experience that might give their words meaning, culturally sensitive 20-somethings wander into a world that is supposed to be made for them, and find it inhospitable. Without the rigid ordering that grades, class rank, leadership, and office provide, the incessant and unnamed urge to compete cannot be easily addressed. Their vague cultural liberalism—a dedication to tolerance and egalitarianism in generally vague and deracinated terms—makes the careers that promise similar sorting unpalatable. The economic resentment and petty greed that they have had bred into them by the sputtering machine of American capitalism makes lower-class life unthinkable.

Driven by the primacy of the competitive urge and convinced that they need far more material goods than they do to live a comfortable life, they seek well-paying jobs. Most of them will find some gainful employment without great difficulty. Perhaps this is changing: As the tires on the Trans Am that is America go bald, their horror at a poor job market reveals their entitlement more than anything. But the numbers indicate that most still find their way into jobs that become careers. Many will have periods of arty unemployed urbanism, but after a while the gremlin begins whispering, “You are a loser,” and suddenly, they’re placing that call to Joel from Sociology 205 who’s got that connection at that office. Often, these office jobs will enjoy the cover of orbiting in some vaguely creative endeavor like advertising. One way or the other, these jobs become careers in the loaded sense. In these careers, they find themselves in precisely the position that they long insisted they would never contemplate.

The competitive urge still pulses. It has to; the culture in which students have been raised has denied them any other framework with which to draw meaning. The world has assimilated the rejection of religion, tradition, and other determinants of virtue that attended the 1960s and wedded it to a vicious contempt for the political commitments that replaced them in that context. Culture preempts the kind of conscious understanding that attends conviction, teaching instead that all traditional designations of meaning are uncool.

If straightforward discussion of virtue and righteousness is socially unpalatable, straightforward political engagement appears worse still. Pushed by an advertising industry that embraces tropes of meaning just long enough to render them meaningless (Budweiser Clydesdales saluting fallen towers) and buffeted by arbiters of hipness that declare any unapologetic embrace of political ideology horribly cliché, a fussy specificity envelops every definition of the self. Conventional accounts of the kids these days tend to revert to tired tropes about disaffection and irony. The reality is sadder: They are not passionless, but many have invested their passion in a shared cultural knowledge that denies the value of any other endeavor worthy of personal investment.

Contemporary strivers lack the tools with which people in the past have differentiated themselves from their peers: They live in a post-virtue, post-religion, post-aristocracy age. They lack the skills or inspiration to create something of genuine worth. They have been conditioned to find all but the most conventional and compromised politics worthy of contempt. They are denied even the cold comfort of identification with career, as they cope with the deadening tedium and meaninglessness of work by calling attention to it over and over again, as if acknowledging it somehow elevates them above it.

Into this vacuum comes a relief that is profoundly rational in context—the self as consumer and critic. Given the emptiness of the material conditions of their lives, the formerly manic competitors must come to invest the cultural goods they consume with great meaning. Meaning must be made somewhere; no one will countenance standing for nothing. So the poor proxy of media and cultural consumption comes to define the individual. In many ways, cultural products such as movies, music, clothes, and media are the perfect vehicle for the endless division of people into strata of knowingness, savvy, and cultural value.

These cultural products have no quantifiable value, yet their relative value is fiercely debated as if some such quantifiable understanding could be reached. They are easily mined for ancillary content, the TV recaps and record reviews and endless fulminating in comments and forums that spread like weeds. (Does anyone who watches Mad Men not blog about it?) They are bound up with celebrity, both real and petty. They can inspire and so trick us into believing that our reactions are similarly worthy of inspiration. And they are complex and varied enough that there is always more to know and more rarefied territory to reach, the better to climb the ladder one rung higher than the person the next desk over.

There is a problem, though. The value-through-what-is-consumed is entirely illusory. There is no there there. This is what you can really learn about a person by understanding his or her cultural consumption, the movies, music, fashion, media, and assorted other socially inflected ephemera: nothing. Absolutely nothing. The internet writ large is desperately invested in the idea that liking, say, The Wire, says something of depth and importance about the liker, and certainly that the preference for this show to CSI tells everything.

Likewise, the internet exists to perpetuate the idea that there is some meaningful difference between fans of this band or that, of Android or Apple, or that there is a Slate lifestyle and a This Recording lifestyle and one for Gawker or The Hairpin or wherever. Not a word of it is true. There are no Apple people. Buying an iPad does nothing to delineate you from anyone else. Nothing separates a Budweiser man from a microbrew guy. That our society insists that there are differences here is only our longest con.

This endless posturing, pregnant with anxiety and roiling with class resentment, ultimately pleases no one. Yet this emptiness doesn’t compel people to turn away from the sorting mechanism. Instead, it draws them further and further in. Faced with the failure of their cultural affinities to define an authentic and fulfilling self, postcollegiate middle-class upwardly-oriented-if-not-upwardly-mobile Americans double down on the importance of these affinities and confront the continued failure with a formless resentment. The bitterness that surrounds these distinctions is a product of their inability to actually make us distinct.

The savviest of the media and culture websites tap into this resentment as directly as they dare. They write endlessly about what is overrated. They assign specific and damning personality traits to the fan bases of unworthy cultural objects. They invite comments that tediously parse microscopic distinctions in cultural consumption. They engage in criticism as a kind of preemptive strike against those who actually create. They glamorize pettiness in aesthetic taste. The few artistic works they lionize are praised to the point of absurdity, as various acolytes try to outdo each other in hyperbole. They relentlessly push the central narrative that their readers crave, that consumption is achievement and that creators are to be distrusted and “put in their place.” They deny the frequently sad but inescapable reality that consumption is not creation and that only the genuinely creative act can reveal the self.

This, then, is the role of the resentment machine: to amplify meaningless differences and assign to them vast importance for the quality of individuals. For those who are writing the most prominent parts of the internet—the bloggers, the trendsetters, the über-Tweeters, the tastemakers, the linkers, the creators of memes and online norms—online life is taking the place of the creation of the self, and doing so poorly.

This all sounds quite critical, I’m sure, but ultimately, this is a critique I include myself in. For this to approach real criticism I would have to offer an alternative to those trapped in the idea of the consumer as self. I haven’t got one. Our system has relentlessly denied the role of any human practice that cannot be monetized. The capitalist apparatus has worked tirelessly to commercialize everything, to reduce every aspect of human life to currency exchange. In such a context, there is little hope for the survival of the fully realized self.

Link: The Melancholy of Subculture Society

If you crack open some of the mustier books about the Internet—you know the ones I’m talking about, the ones which invoke Roland Barthes and discuss the sexual transgressing of MUDs—one of the few still-relevant criticisms is the concern that the Internet, by uniting small groups, will divide larger ones.

Surfing alone

You may remember this as the Bowling Alone thesis applied to the Internet; it got some traction in the late 1990s. The basic idea is: electronic entertainment devices grow in sophistication and inexpensiveness as the years pass, until by the 1980s and 1990s, they have spread across the globe and have devoured multiple generations of children; these devices are more pernicious than traditional geeky fare inasmuch as they are often best pursued solo. Spending months mastering Super Mario Bros., all alone, is a bad way to grow up normal.

And then there were none

The 4 or 5 person Dungeons & Dragons party (with a dungeon master) gives way to the classic arcade with its heated duels and one-upmanship; the arcade gives way to the flickering console in the bedroom with one playing Final Fantasy VII alone. The increased graphical realism, the more ergonomic controllers, the introduction of genuinely challenging AI techniques… Trend after trend was rendering a human opponent unnecessary. And gamer after gamer was now playing alone.

Perhaps, the critic says, the rise of the Internet has ameliorated that distressing trend—the trends favored no connectivity at first, but then there was finally enough surplus computing power and bandwidth for massive connectivity to become the order of the day.

It is much more satisfactory and social to play MMORPGs on your PC than single-player RPGs, much more satisfactory to kill human players in Halo matches than alien AIs. The machines finally connect humans to humans, not human to machine. We’re forced to learn some basic social skills, to maintain some connections. We’re no longer retreating into our little cocoons, interacting with no humans.

Welcome to the N.H.K.!

But, the critic continues, things still are not well. We are still alienated from one another. The rise of the connected machines still facilitates withdrawal and isolation. It presents the specter of the hikikomori, the person who ceases to exist in the physical realm as much as possible. It is a Japanese term, of course. They are 5 years further in our future than we are (or perhaps one should say, were). Gibson writes, back in 2001:

The Japanese seem to the rest of us to live several measurable clicks down the time line. The Japanese are the ultimate Early Adopters, and the sort of fiction I write behooves me to pay serious heed to that. If you believe, as I do, that all cultural change is essentially technologically driven, you pay attention to the Japanese. They’ve been doing it for more than a century now, and they really do have a head start on the rest of us, if only in terms of what we used to call future shock (but which is now simply the one constant in all our lives).

Gibson also discusses the Mobile Girl and text messaging; that culture began really showing up in America around 2005—Sidekicks, Twitter, etc. You can do anything with a cellphone: order food, do your job, read & write novels, maintain a lively social life, engage in social status envy (She has a smaller phone, and a larger collection of collectibles on her cellphone strap! OMG!)… Which is just another way of saying You can do anything without seeing people, just by writing digital messages. (And this in a country with one of the most undigitizable writing systems in existence!)

The hikikomori withdraws from all personal contact. The hikikomori does not hang out at the local pub, swilling down the brewskis as everyone cheers on the home team. The hikikomori is not gossiping at the Rotary club nor with the Lions or mummers or Veterans or Knights. Hikikomoris do none of that. They aren’t working, they aren’t hanging out with friends.

The Paradoxical solitude and omnipotence of the otaku, the new century’s ultimate enthusiast: the glory and terror inherent of the absolute narrowing of personal bandwidth. —William Gibson, Shiny balls of Mud (TATE 2002)

So what are they doing with their 16 waking hours a day?

Opting out

But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing…the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode. We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way up close and personal profiles of professional athletes strain so hard to find evidence of a rounded human life—outside interests and activities, values beyond the sport. We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small…[Tennis player Michael] Joyce is, in other words, a complete man, though in a grotesquely limited way…Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well. — David Foster Wallace, The String Theory (Esquire, July 1996)

They’re not preoccupied with our culture - they’re participating in their own subculture. It’s the natural progression of the otaku. They are fighting on Azeroth, or fiercely pursuing their dojinshi career, or… There are many subcultures linked and united by the Internet, for good and ill. For every charitable or benevolent subculture (e.g. free software) there is one of mixed benefits (World of Warcraft), and one outright harmful (e.g. fans of eating disorders, child pornography).

The point the critic wants to make is that life is short and a zero-sum game. You lose a third of the day to sleep, another third to making a living, and now you’ve little left. To be really productive, you can’t divide your energies across multiple cultures—you can’t be truly successful in mainstream culture, and at the same time be able to devote enough effort in the field of, say, mechanical models, to be called an Otaking. A straddler takes onto his head the overhead of learning and participating in both, and receives no benefits (he will suffer socially in the esteem of the normals, and will be able to achieve little in his hobby due to lack of time and a desire to not go overboard).

The otaku & hikikomori recognizes this dilemma and he chooses—to reject normal life! He rejects life in the larger culture for his subculture. It’s a simple matter of comparative advantage; it’s easier to be a big fish in a small pond than in a large one.

The bigger screen

Have you ever woken up from a dream that was so much more pleasant than real life that you wish you could fall back to sleep and return to the dream?…For some, World of Warcraft is like a dream they don’t have to wake up from - a world better than the real world because their efforts are actually rewarded –[Half Sigma, Status, masturbation, wasted time, and WoW]

EVE Online is unique in gaming in that we have always played on the same massive server in the same online universe since May 2003 when it first went live. We not only understand the harsh penalties for failure, but also how longevity and persistence is rewarded with success. When you have over 60,000 people on weekends dealing, scheming, and shooting each other it attracts a certain type of gamer. It’s not a quick fix kind of game. We enjoy building things that last, be they virtual spaceships or real life friendships that together translate into massive Empires and enduring legacies. Those of us who play understand that one man really can truly make a difference in our world. – Mark Seleene Heard, Vile Rat eulogy, 2012

As ever more opt out, the larger culture is damaged. The culture begins to fragment back into pieces. The disconnect can be profound; an American anime geek has more in common with a Japanese anime geek (who is of a different ethnicity, a different culture, a different religion, a different language…) than he does with an American involved in the evangelical Christian subculture. There is essentially no common ground - our 2 countrymen probably can’t even agree on objective matters like governance or evolution!

With enough of these gaps, where is American or French culture? Such cultural identities take centuries to coalesce—France did not speak French until the 1900s (as The Discovery of France recounts), and Han China is still digesting and assimilating its many minorities and outlying regions. America, of course, had it relatively easy in starting with a small founder population which could just exterminate the natives.

The national identity fragments under the assault of burgeoning subcultures. At last, the critic beholds the natural endpoint of this process: the nation is some lines on a map, some laws you follow. No one particularly cares about it. The geek thinks, Meh: here, Canada, London, Japan, Singapore—as long as FedEx can reach me and there’s a good Internet connection, what’s the difference? (Nor are the technically-inclined alone in this.)

You can test this yourself. Tell yourself—the country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled. If your mental reply goes something like, Why, what’s so special about the USA? It’s not particularly economically or politically free, it’s not the only civilized English-speaking country, it’s not the wealthiest…, then you are headed down the path of opting out.

This is how the paradox works: the Internet breaks the larger culture by letting members flee to smaller subcultures. And the critics think this is bad. They like the broader culture, and deep down, furries and latex fetishists really bother them. They just plain don’t like those weirdo deviants.

But I can get a higher score!

In the future, everyone will be world-famous for 15 minutes.

Let’s look at another angle.

Monoculture

Irony has only emergency use. Carried over time, it is the voice of the trapped who have come to enjoy their cage.

One can’t opt out of culture. There is no view from nowhere. To a great extent, we are our cultural artifacts—our possessions, our complexes of memes, our habits and objects of disgust are all cultural. You are always part of a culture.

Suppose there were only 1 worldwide culture, with no subcultures. The overriding obsession of this culture will be… let’s make it money. People are absolutely obsessed with money—how it is made, acquired, degraded, etc. More importantly, status is defined just by how much you have earned in your life; in practice, tie-breakers include how fast you made it, what circumstances you made it in (everyone admires a person who became a billionaire in a depression more than a good-times billionaire, in the same way we admire the novelist in the freezing garret more than the comfortable academic), and so on.

This isn’t too absurd a scenario: subjects feed on themselves and develop details and complexity as effort is invested in them. Money could well absorb the collective efforts of 7 billion people—already many people act just this way.

But what effect does this have on people? I can tell you: the average person is going to be miserable. If everyone genuinely buys into this culture, then they have to be. Their talents at piano playing, or cooking, or programming, or any form of artistry or scholarly pursuit are denigrated and count for naught. The world has become too big - it did not use to be so big, nor people so powerless over what is going on:

"Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole.

Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its sense does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes….”

You can’t make a mark on it unless there are almost as many ways to make marks as there are persons.

To put it another way: women suffer enough from comparing themselves to media images. If you want a vision of this future, imagine everyone being an anorexic teenager who hates her body—forever.

We all value social esteem. We need to know somebody thinks well of us. We’re tribal monkeys; ostracism means death.

Jaron Lanier: I’d like to hypothesize one civilizing force, which is the perception of multiple overlapping hierarchies of status. I’ve observed this to be helpful in work dealing with rehabilitating gang members in Oakland. When there are multiple overlapping hierarchies of status there is more of a chance of people not fighting their superior within the status chain. And the more severe the imposition of the single hierarchy in people’s lives, the more likely they are to engage in conflict with one another. Part of America’s success is the confusion factor of understanding how to assess somebody’s status.

Steven Pinker: That’s a profound observation. There are studies showing that violence is more common when people are confined to one pecking order, and all of their social worth depends on where they are in that hierarchy, whereas if they belong to multiple overlapping groups, they can always seek affirmations of worth elsewhere. For example, if I do something stupid when I’m driving, and someone gives me the finger and calls me an asshole, it’s not the end of the world: I think to myself, I’m a tenured professor at Harvard. On the other hand, if status among men in the street was my only source of worth in life, I might have road rage and pull out a gun. Modernity comprises a lot of things, and it’s hard to tease them apart. But I suspect that when you’re not confined to a village or a clan, and you can seek your fortunes in a wide world, that is a pacifying force for exactly that reason.

Think of the people you know. How many of them can compete on purely financial grounds? How many can compare to the chimps at the top of the financial heap without feeling like an utter failure, a miserable loser? Not many. I can’t think of anyone I know who wouldn’t be at least a little unhappy. Some of them are pretty well off, but it’s awfully hard to compare with billionaires in their department. There’s no way to prove that this version of subcultures is the right one (perhaps fragmenting the culture fragments the possible status), but when I look at simple models, this version seems plausible to me and to explain some deep trends like monogamy.

Subcultures set you free

If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself. Enjoy your achievements as well as your plans. Keep interested in your own career, however humble; it is a real possession in the changing fortunes of time.

Having a society in which an artist can mingle as a social equal with the billionaire, the Nobel scientist, and the philanthropist is fundamental to our mental health! If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand.

It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and a cat may look at a king.

Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.

Link: How to Be Gay

The first hint of trouble came in an e-mail message. It reached me on Friday, March 17, 2000, at 4:09 p.m. The message was from a guy named Jeff in Erie, Pa., who was otherwise unknown to me.

At first, I couldn’t figure out why Jeff was writing me. He kept referring to some college course, and he seemed to be very exercised over it. He wanted to know what it was really about. He went on sarcastically to suggest that I tell the executive committee of the English department to include in the curriculum, for balance, another course, entitled “How to Be a Heartless Conservative.”

It turned out that Jeff was not alone in his indignation. A dozen e-mail messages, most of them abusive and some of them obscene, followed in quick succession. The subsequent days and weeks brought many more.

Eventually, I realized that earlier on that Friday, the registrar’s office at the University of Michigan at Ann Arbor, where I teach English, had activated its course-information Web site, listing the classes to be offered during the fall term. At virtually the same moment, the Web site of the National Review had run a story called “How to Be Gay 101.” Except for the heading, the story consisted entirely of one page from Michigan’s newly published course listings.

So what was this story that was too good for the National Review, which had evidently been tipped off, to keep under wraps for a single day? It had to do with an undergraduate English course I had just invented called “How to Be Gay: Male Homosexuality and Initiation.”

The course examined how gay men acquire a conscious identity, a common culture, a particular outlook on the world, a distinctive sensibility. It was designed to explore a basic paradox: How do you become who you are? Or, as the course description put it: “Just because you happen to be a gay man doesn’t mean that you don’t have to learn how to become one.”

The course looked specifically at gay men’s appropriation and reuse of works from mainstream culture and their transformation of those works into vehicles of gay sensibility and gay meaning. The ultimate goal of such an inquiry was to shed light on the nature and formation of gay male subjectivity, and to provide a nonpsychological account of it, by approaching homosexuality as a social, not an individual, condition and as a cultural practice rather than a sexual one.

Those who study gay male culture encounter an initial, daunting obstacle: Some people don’t believe there is such a thing. Although the existence of gay male culture is routinely acknowledged as a fact, it is just as routinely denied as a truth.

That gay men have a specific attachment to certain cultural objects and forms is the widespread, unquestioned assumption behind a lot of American popular humor. No one will look at you aghast, or cry out in protest, or stop you in midsentence, if you dare to imply that a guy who worships divas, who loves torch songs or show tunes, who knows all Bette Davis’s best lines by heart, or who attaches supreme importance to fine points of style or interior design might, just possibly, not turn out to be completely straight.

When a satirical student newspaper at the University of Michigan wanted to mock the panic of one alumnus over the election of an openly gay student-body president, it wrote that the new president “has finally succeeded in his quest to turn Michigan’s entire student body homosexual.” Within minutes, the paper wrote, “European techno music began blaring throughout Central and North Campus.” A course in postmodern interior design became mandatory for freshmen, and “94 percent of the school’s curriculum now involves show tunes.”

This is the stuff of popular stereotype.

Perhaps for that very reason, if you assert with a straight face that there is such a thing as gay male culture, people will immediately object, citing a thousand different reasons why such a thing is impossible, or ridiculous, or offensive, and why anyone who says otherwise is deluded, completely out of date, morally suspect, and politically irresponsible. Which probably won’t stop the very people who make those objections from telling you a joke about gay men and show tunes—even with their next breath.

Happily, some large cracks have lately appeared in that fine line between casual acknowledgment and determined denial. At least since the success of such cable-television series as Queer Eye for the Straight Guy and RuPaul’s Drag Race, it has become commonplace to regard male homosexuality as comprising not only a set of specific sexual practices but also an assortment of characteristic social and cultural practices.

This flattering image of gay culture—of gayness as culture—is not entirely new, even if its entry into the stock of received ideas that make up the common sense of straight society is relatively recent. That gay men are particularly responsive to music and the arts was already a theme in the writings of psychiatrists and sexologists at the turn of the 20th century. In 1954 the psychoanalyst Carl Jung noted that gay men “may have good taste and an aesthetic sense.” By the late 1960s, the anthropologist Esther Newton could speak quite casually of “the widespread belief that homosexuals are especially sensitive to matters of aesthetics and refinement.”

Richard Florida, an economist and social theorist (as well as a self-confessed heterosexual), may have given that ancient suspicion a new, empirical foundation. In a series of sociological and statistical studies of what he has called the “creative class,” Florida argues that the presence of gay people in a locality is an excellent predictor of a viable high-tech industry and its potential for growth. The reason is that high-tech jobs nowadays follow the work force, and the new class of “creative” workers is composed of “nerds” and oddballs who gravitate to places with “low entry barriers to human capital.” Gay people, in this context, are the “canaries of the Creative Age.” They can flourish only in a pure atmosphere characterized by a high quotient of “lifestyle amenities,” coolness, “culture and fashion,” “vibrant street life,” and “a cutting-edge music scene.” And so the presence of gay people “in large numbers is an indicator of an underlying culture that’s open-minded and diverse—and thus conducive to creativity.”

All of which provides empirical confirmation, however flimsy, of the notion that homosexuality is not just a sexual orientation but a cultural orientation, a dedicated commitment to certain social or aesthetic values, an entire way of being.

That distinctively gay way of being, moreover, appears to be rooted in a particular queer way of feeling. And that queer way of feeling—that queer subjectivity—expresses itself through a peculiar, dissident way of relating to cultural objects (movies, songs, clothes, books, works of art) and cultural forms in general (art and architecture, opera and musical theater, pop and disco, style and fashion, emotion and language). As a cultural practice, male homosexuality involves a characteristic way of receiving, reinterpreting, and reusing mainstream culture. As a result, certain figures who are already prominent in the mass media become gay icons: They get taken up by gay men with a peculiar intensity that differs from their wider reception in the straight world. (That practice is so marked, and so widely acknowledged, that the National Portrait Gallery in London could organize an entire exhibition around the theme of Gay Icons in 2009.)

What this implies is that it is not enough for a man to be homosexual in order to be gay. Same-sex desire alone does not equal gayness. “Gay” refers not just to something you are, but also to something you do. Which means that you don’t have to be homosexual in order to do it. Gayness is not a state or condition. It’s a mode of perception, an attitude, an ethos: In short, it is a practice.

And if gayness is a practice, it is something you can do well or badly. In order to do it well, you may need to be shown how to do it by someone (gay or straight) who is already good at it and who can initiate you into it—by demonstrating to you, through example, how to practice it and by training you to do it right.

Rather than dismiss out of hand the outrageous idea that there is a right way to be gay, I want to try to understand what it means. For what it registers is a set of intuitions about the relation between sexuality and form. If we could discover in what that relation consists, we would be in a better position to grasp a fundamental element of our existence, which even feminists have been slow to analyze—namely, the sexual politics of cultural form.

Will gay men still have to learn how to be gay when gay liberation has done its work and they no longer feel excluded from heterosexual culture?

When homophobia is finally overcome, when it is a thing of the past, when gay people achieve equal rights, social recognition, and acceptance, when we are fully integrated into straight society—when all that comes to pass, will it spell the end of gay culture, or gay subculture, as we know it?

That is indeed what people like Daniel Harris, the author of The Rise and Fall of Gay Culture, and the journalist Andrew Sullivan, who wrote the much-discussed essay “The End of Gay Culture,” have argued. I dispute their assertions, but perhaps their prognostications are not wrong, only premature. Perhaps the day is coming when more favorable social conditions will vindicate their claims.

People have wondered, after all, whether Ralph Ellison’s Invisible Man, James Baldwin’s Another Country, or Harper Lee’s To Kill a Mockingbird would become incomprehensible or meaningless if there ever came a time when race ceased to be socially marked in American society. Similarly, would the humor of Lenny Bruce or Woody Allen lose its ability to make us laugh when or if Jews become thoroughly assimilated? Isn’t that humor already starting to look a bit archaic?

Gay culture’s apparent decline actually stems from structural causes that have little to do with the growing social acceptance of homosexuality. There has been a huge transformation in the material base of gay life in the United States, and in metropolitan centers elsewhere, during the past three decades. That transformation has had a profound impact on the shape of gay life and culture. It is the result of three large-scale developments: the recapitalization of the inner city and the resulting gentrification of urban neighborhoods; the epidemic of AIDS; and the invention of the Internet.


Orientalism by Edward W. Said

In this highly acclaimed work, Edward Said surveys the history and nature of Western attitudes towards the East, considering Orientalism as a powerful European ideological creation – a way for writers, philosophers and colonial administrators to deal with the ‘otherness’ of Eastern culture, customs and beliefs. He traces this view through the writings of Homer, Nerval and Flaubert, Disraeli and Kipling, whose imaginative depictions have greatly contributed to the West’s romantic and exotic picture of the Orient. In his new preface, Said examines the effect of continuing Western imperialism after recent events in Palestine, Afghanistan and Iraq.

Orientalism by Edward Said is a canonical text of cultural studies in which he challenges the concept of orientalism, or the difference between East and West, as he puts it. He says that with the start of European colonization the Europeans came into contact with the less developed countries of the East. They found their civilization and culture very exotic, and established the science of orientalism: the study of the Orientals, the people of these exotic civilizations.

Edward Said argues that the Europeans divided the world into two parts: the East and the West, or the Occident and the Orient, or the civilized and the uncivilized. This was an entirely artificial boundary, drawn on the basis of the concept of them and us, theirs and ours. The Europeans used orientalism to define themselves: particular attributes were associated with the Orientals, and whatever the Orientals were not, the Occidentals were. The Europeans defined themselves as the superior race and justified their colonization by this concept, claiming it was their duty to civilize the uncivilized world. The main problem, however, arose when the Europeans began generalizing the attributes they associated with the Orientals and portraying these artificial characteristics in the Western world through scientific reports, literary works and other media. This created a certain image of the Orientals in the European mind and infused a bias into European attitudes towards them. The same prejudice ran through the orientalists themselves (the scholars who studied the Orientals), and all their research and reports were shaped by it. The generalized attributes associated with the Orientals can still be seen today: Arabs, for example, are portrayed as uncivilized people, and Islam as a religion of terrorists.

"So far as the United States seems to be concerned, it is only a slight overstatement to say that Muslims and Arabs are essentially seen as either oil suppliers or potential terrorists. Very little of the detail, the human density, the passion of Arab-Moslem life has entered the awareness of even those people whose profession it is to report the Arab world. What we have instead is a series of crude, essentialized caricatures of the Islamic world presented in such a way as to make that world vulnerable to military aggression." —Edward W. Said