Sunshine Recorder

Link: Cigarettes and Climate Change

I am a smoker, and I am in denial. It isn’t that I don’t believe that cigarettes will kill me. I do. It isn’t that I don’t believe that I’m addicted. I know I am. Like most addicts, my denial takes the form of dissonance: I rationalize, I procrastinate, I make token gestures and shop for comparisons. Distraction is easy: I read while I smoke. Anything to avoid looking that monster in the eyes.

These are not novel forms of coping. Among more private kinds of existential crises—the junkie, the smoker, the troubling lump beneath the skin, and the marriage on the brink—denial is rarely outright. You know you have a problem; the trick is in refusing to acknowledge it.

It’s strange, then, that in the case of climate change—a cognitively torturous existential threat exceeding the sum of all our private ones by some incomprehensible order of magnitude—we tell an uncomplicated story about two stark sides. On one hand are the scientists; on the other, the skeptics. The skeptics don’t believe the monster’s there. The scientists (and activists, and journalists) endeavor to persuade them. When this latter side succeeds, the story goes, we will finally take action. In the meantime, we sit and hope that day won’t be too late.

That story isn’t true.

In American political life desire is rarely synonymous with will. If mere consensus made it so, then today we might count single-payer healthcare, the Equal Rights Amendment, and a guaranteed federal minimum wage among our national accomplishments. Each, at one time in our history, had the tacit approval of the majority. The reasons for their failure are complex and varied, but the consistent lesson is that tepid support, no matter how broad, does not change policy. Only the concerted efforts of a well-organized advocacy do. When measures pass, it is because an active constituency has engineered their victory, regardless of how many or how few citizens were basically okay with the idea. So it has been, on the left and on the right, from the American Revolution to the death of campaign finance laws.

An exhaustive conversion of the skeptics is not what stands between the United States and climate change reform. This is a good thing. If it were, then we’d be wiser to surrender now and enjoy the planet while it lasts us. But despite their stubborn numbers and friends in well-financed places, the Ted Cruzes of the world lack the power to block meaningful reform for long. Our inaction these last decades is not a consequence of their resistance, but rather of the absence of sufficient pressure from those of us in the reality-based community, engaged in our more insidious forms of denial.

We are the problem. Those of us who, when confronted with the existential dread posed by global warming, do not deny the presence of the monster, but do everything within our power not to look it in the eyes.

It’s nothing to be ashamed of. Climate change—like addiction, like illness, like trauma and turmoil—is a threat to our sanity; a train of thought so stressful that the psychology of coping can’t help kicking in. It’s just too horrible to focus on for long, and so we do what we have always done in the face of crippling terror. We deny—not by rejecting the threat, but by avoiding it. Sometimes this takes the form of minimizing the threat, hoping that climate change plays out like only the mildest of our models’ projections. Sometimes we try wishful thinking, maintaining undue faith in some miraculous hi-tech solution. Most often, we just settle on escapism: thinking, reading, caring, and arguing about anything else. Anything that feels easier to tackle, anything that won’t kill us if we don’t. We’ll quit smoking next year, next year, next year.

I don’t have a grand solution for this dissonance, much less one for the multitude of international challenges that would face even the most devoted effort to keep climate change at bay. But I do have one small suggestion. Like all addicts in denial, we have our friendly enablers. Chief among these is the political press.

I don’t mean doctrinaire reactionary rags. I mean the mainstream and leftist publications, erstwhile environmentalists who would never dream of engaging in overtly skeptical denial. The Atlantic, The New Republic, and the New York Times all have robust environmental sections. But this, in a way, is the problem; they consign any mention of climate change to a clearly labeled box—which is a great help to those of us who are looking to avoid actively contemplating a terrifying truth. Meanwhile, their other sections, without malice or intention, become complicit in our denial. They publish stories about the future, about technology and medicine and politics, without any mention of a warming globe. “Researchers believe that in a hundred years . . .”; “By mid-century, the electoral map might . . . .” We all know how these stories go. I’ve even written some of them.

This futurism enables our denial. Like a Norman Rockwell painting that invites its audience into a shared fantasy about the past, these stories solicit a shared fantasy of the future — one where interesting possibilities of population, medicine, technology, and politics exist without the horrifying context of civilizational collapse. These stories fail to mention that quantum computing will be more difficult to research without energy. Many populations will be irrevocably impacted by famine. We can expect the long-term voting trends of Florida to change when half of Miami is underwater. And yet we’re only made to think about these things when we choose to read the “Green” sections of our newspapers and magazines.

I’m not suggesting that we cease to write stories about the future. But I am suggesting this: as a matter of political responsibility, magazines and newspapers should adopt a provision of their style guide requiring that any claim which is dependent on the continuity of present civilization be followed by an asterisk. At the bottom of the page, I propose something simple: “Assuming greenhouse gases are controlled,” or “Contingent on a solution to climate change.”

Intervention requires that we close off the escape routes from our dread. We must be made to look the monster in the eyes, and do so every day. It will be unbearable at first; in self-defense, we might even find it obnoxious. But perhaps it would serve to nudge us just enough, to make us think about the problem until we do something about it. Then we could go back to such stories, confident that their contingencies won’t be spoiled by the rising tide.

Link: Imagining the Post-Antibiotics Future

After 85 years, antibiotics are growing impotent. So what will medicine, agriculture and everyday life look like if we lose these drugs entirely?

Predictions that we might sacrifice the antibiotic miracle have been around almost as long as the drugs themselves. Penicillin was first discovered in 1928 and battlefield casualties got the first non-experimental doses in 1943, quickly saving soldiers who had been close to death. But just two years later, the drug’s discoverer Sir Alexander Fleming warned that its benefit might not last. Accepting the 1945 Nobel Prize in Medicine, he said:

“It is not difficult to make microbes resistant to penicillin in the laboratory by exposing them to concentrations not sufficient to kill them… There is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant.”

As a biologist, Fleming knew that evolution was inevitable: sooner or later, bacteria would develop defenses against the compounds the nascent pharmaceutical industry was aiming at them. But what worried him was the possibility that misuse would speed the process up. Every inappropriate prescription and insufficient dose given in medicine would kill weak bacteria but let the strong survive. (As would the micro-dose “growth promoters” given in agriculture, which were invented a few years after Fleming spoke.) Bacteria can produce another generation in as little as twenty minutes; with tens of thousands of generations a year working out survival strategies, the organisms would soon overwhelm the potent new drugs.

Fleming’s prediction was correct. Penicillin-resistant staph emerged in 1940, while the drug was still being given to only a few patients. Tetracycline was introduced in 1950, and tetracycline-resistant Shigella emerged in 1959; erythromycin came on the market in 1953, and erythromycin-resistant strep appeared in 1968. As antibiotics became more affordable and their use increased, bacteria developed defenses more quickly. Methicillin arrived in 1960 and methicillin resistance in 1962; levofloxacin in 1996 and the first resistant cases the same year; linezolid in 2000 and resistance to it in 2001; daptomycin in 2003 and the first signs of resistance in 2004.

With antibiotics losing usefulness so quickly — and thus not making back the estimated $1 billion per drug it costs to create them — the pharmaceutical industry lost enthusiasm for making more. In 2004, there were only five new antibiotics in development, compared to more than 500 chronic-disease drugs for which resistance is not an issue — and which, unlike antibiotics, are taken for years, not days. Since then, resistant bugs have grown more numerous and by sharing DNA with each other, have become even tougher to treat with the few drugs that remain. In 2009, and again this year, researchers in Europe and the United States sounded the alarm over an ominous form of resistance known as CRE, for which only one antibiotic still works.

Health authorities have struggled to convince the public that this is a crisis. In September, Dr. Thomas Frieden, the director of the U.S. Centers for Disease Control and Prevention, issued a blunt warning: “If we’re not careful, we will soon be in a post-antibiotic era. For some patients and some microbes, we are already there.” The chief medical officer of the United Kingdom, Dame Sally Davies — who calls antibiotic resistance as serious a threat as terrorism — recently published a book in which she imagines what might come next. She sketches a world where infection is so dangerous that anyone with even minor symptoms would be locked in confinement until they recover or die. It is a dark vision, meant to disturb. But it may actually underplay what the loss of antibiotics would mean.

In 2009, three New York physicians cared for a sixty-seven-year-old man who had major surgery and then picked up a hospital infection that was “pan-resistant” — that is, responsive to no antibiotics at all. He died fourteen days later. When his doctors related his case in a medical journal months afterward, they still sounded stunned. “It is a rarity for a physician in the developed world to have a patient die of an overwhelming infection for which there are no therapeutic options,” they said, calling the man’s death “the first instance in our clinical experience in which we had no effective treatment to offer.”

They are not the only doctors to endure that lack of options. Dr. Brad Spellberg of UCLA’s David Geffen School of Medicine became so enraged by the ineffectiveness of antibiotics that he wrote a book about it.

“Sitting with a family, trying to explain that you have nothing left to treat their dying relative — that leaves an indelible mark on you,” he says. “This is not cancer; it’s infectious disease, treatable for decades.”

As grim as they are, in-hospital deaths from resistant infections are easy to rationalize: perhaps these people were just old, already ill, different somehow from the rest of us. But deaths like this are changing medicine. To protect their own facilities, hospitals already flag incoming patients who might carry untreatable bacteria. Most of those patients come from nursing homes and “long-term acute care” (an intensive-care alternative where someone who needs a ventilator for weeks or months might stay). So many patients in those institutions carry highly resistant bacteria that hospital workers isolate them when they arrive, and fret about the danger they pose to others. As infections become yet more dangerous, the healthcare industry will be even less willing to take such risks.

Those calculations of risk extend far beyond admitting possibly contaminated patients from a nursing home. Without the protection offered by antibiotics, entire categories of medical practice would be rethought.

Many treatments require suppressing the immune system, to help destroy cancer or to keep a transplanted organ viable. That suppression makes people unusually vulnerable to infection. Antibiotics reduce the threat; without them, chemotherapy or radiation treatment would be as dangerous as the cancers they seek to cure. Dr. Michael Bell, who leads an infection-prevention division at the CDC, told me: “We deal with that risk now by loading people up with broad-spectrum antibiotics, sometimes for weeks at a stretch. But if you can’t do that, the decision to treat somebody takes on a different ethical tone. Similarly with transplantation. And severe burns are hugely susceptible to infection. Burn units would have a very, very difficult task keeping people alive.”

Doctors routinely perform procedures that carry an extraordinary infection risk unless antibiotics are used. Chief among them: any treatment that requires the construction of portals into the bloodstream and gives bacteria a direct route to the heart or brain. That rules out intensive-care medicine, with its ventilators, catheters, and ports—but also something as prosaic as kidney dialysis, which mechanically filters the blood.

Next to go: surgery, especially on sites that harbor large populations of bacteria such as the intestines and the urinary tract. Those bacteria are benign in their regular homes in the body, but introduce them into the blood, as surgery can, and infections are practically guaranteed. And then implantable devices, because bacteria can form sticky films of infection on the devices’ surfaces that can be broken down only by antibiotics.

Dr. Donald Fry, a member of the American College of Surgeons who finished medical school in 1972, says: “In my professional life, it has been breathtaking to watch what can be done with synthetic prosthetic materials: joints, vessels, heart valves. But in these operations, infection is a catastrophe.” British health economists with similar concerns recently calculated the costs of antibiotic resistance. To examine how it would affect surgery, they picked hip replacements, a common procedure in once-athletic Baby Boomers. They estimated that without antibiotics, one out of every six recipients of new hip joints would die.

Antibiotics are administered prophylactically before operations as major as open-heart surgery and as routine as Caesarean sections and prostate biopsies. Without the drugs, the risks posed by those operations, and the likelihood that physicians would perform them, will change.

“In our current malpractice environment, is a doctor going to want to do a bone marrow transplant, knowing there’s a very high rate of infection that you won’t be able to treat?” asks Dr. Louis Rice, chair of the department of medicine at Brown University’s medical school. “Plus, right now healthcare is a reasonably free-market, fee-for-service system; people are interested in doing procedures because they make money. But five or ten years from now, we’ll probably be in an environment where we get a flat sum of money to take care of patients. And we may decide that some of these procedures aren’t worth the risk.”

Link: Four Futures

In his speech to the Occupy Wall Street encampment at Zuccotti Park, Slavoj Žižek lamented that “It’s easy to imagine the end of the world, but we cannot imagine the end of capitalism.” It’s a paraphrase of a remark that Fredric Jameson made some years ago, when the hegemony of neoliberalism still appeared absolute. Yet the very existence of Occupy Wall Street suggests that the end of capitalism has become a bit easier to imagine of late. At first, this imagining took a mostly grim and dystopian form: at the height of the financial crisis, with the global economy seemingly in full collapse, the end of capitalism looked like it might be the beginning of a period of anarchic violence and misery. And still it might, with the Eurozone teetering on the edge of collapse as I write. But more recently, the spread of global protest from Cairo to Madrid to Madison to Wall Street has given the Left some reason to timidly raise its hopes for a better future after capitalism.

One thing we can be certain of is that capitalism will end. Maybe not soon, but probably before too long; humanity has never before managed to craft an eternal social system, after all, and capitalism is a notably more precarious and volatile order than most of those that preceded it. The question, then, is what will come next. Rosa Luxemburg, reacting to the beginnings of World War I, cited a line from Engels: “Bourgeois society stands at the crossroads, either transition to socialism or regression into barbarism.” In that spirit I offer a thought experiment, an attempt to make sense of our possible futures. These are a few of the socialisms we may reach if a resurgent Left is successful, and the barbarisms we may be consigned to if we fail.

Much of the literature on post-capitalist economies is preoccupied with the problem of managing labor in the absence of capitalist bosses. However, I will begin by assuming that problem away, in order to better illuminate other aspects of the issue. This can be done simply by extrapolating capitalism’s tendency toward ever-increasing automation, which makes production ever-more efficient while simultaneously challenging the system’s ability to create jobs, and therefore to sustain demand for what is produced. This theme has been resurgent of late in bourgeois thought: in September 2011, Slate’s Farhad Manjoo wrote a long series on “The Robot Invasion,” and shortly thereafter two MIT economists published Race Against the Machine, an e-book in which they argued that automation was rapidly overtaking many of the areas that until recently served as the capitalist economy’s biggest motors of job creation. From fully automatic car factories to computers that can diagnose medical conditions, robotization is overtaking not only manufacturing, but much of the service sector as well.

Taken to its logical extreme, this dynamic brings us to the point where the economy does not require human labor at all. This does not automatically bring about the end of work or of wage labor, as has been falsely predicted over and over in response to new technological developments. But it does mean that human societies will increasingly face the possibility of freeing people from involuntary labor. Whether we take that opportunity, and how we do so, will depend on two major factors, one material and one social. The first question is resource scarcity: the ability to find cheap sources of energy, to extract or recycle raw materials, and generally to depend on the Earth’s capacity to provide a high material standard of living to all. A society that has both labor-replacing technology and abundant resources can overcome scarcity in a thoroughgoing way that a society with only the first element cannot. The second question is political: what kind of society will we be? One in which all people are treated as free and equal beings, with an equal right to share in society’s wealth? Or a hierarchical order in which an elite dominates and controls the masses and their access to social resources?

There are therefore four logical combinations of the two oppositions, resource abundance vs. scarcity and egalitarianism vs. hierarchy. To put things in somewhat vulgar-Marxist terms, the first axis dictates the economic base of the post-capitalist future, while the second pertains to the socio-political superstructure. Two possible futures are socialisms (only one of which I will actually call by that name) while the other two are contrasting flavors of barbarism.

1. Egalitarianism and abundance: communism

2. Hierarchy and abundance: rentism

3. Egalitarianism and scarcity: socialism

4. Hierarchy and scarcity: exterminism

Read more.

Link: Kurt Vonnegut: Ladies & Gentlemen of A.D. 2088

Back in 1988, as part of an ad campaign to be printed in Time magazine, Volkswagen approached a number of notable thinkers and asked them to write a letter to the future—some words of advice to those living in 2088, to be precise. Many agreed, including novelist Kurt Vonnegut; his letter can be read below.

Ladies & Gentlemen of A.D. 2088:

It has been suggested that you might welcome words of wisdom from the past, and that several of us in the twentieth century should send you some. Do you know this advice from Polonius in Shakespeare’s Hamlet: ‘This above all: to thine own self be true’? Or what about these instructions from St. John the Divine: ‘Fear God, and give glory to Him; for the hour of His judgment has come’? The best advice from my own era for you or for just about anybody anytime, I guess, is a prayer first used by alcoholics who hoped to never take a drink again: ‘God grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference.’

Our century hasn’t been as free with words of wisdom as some others, I think, because we were the first to get reliable information about the human situation: how many of us there were, how much food we could raise or gather, how fast we were reproducing, what made us sick, what made us die, how much damage we were doing to the air and water and topsoil on which most life forms depended, how violent and heartless nature can be, and on and on. Who could wax wise with so much bad news pouring in?

For me, the most paralyzing news was that Nature was no conservationist. It needed no help from us in taking the planet apart and putting it back together some different way, not necessarily improving it from the viewpoint of living things. It set fire to forests with lightning bolts. It paved vast tracts of arable land with lava, which could no more support life than big-city parking lots. It had in the past sent glaciers down from the North Pole to grind up major portions of Asia, Europe, and North America. Nor was there any reason to think that it wouldn’t do that again someday. At this very moment it is turning African farms to deserts, and can be expected to heave up tidal waves or shower down white-hot boulders from outer space at any time. It has not only exterminated exquisitely evolved species in a twinkling, but drained oceans and drowned continents as well. If people think Nature is their friend, then they sure don’t need an enemy.

Yes, and as you people a hundred years from now must know full well, and as your grandchildren will know even better: Nature is ruthless when it comes to matching the quantity of life in any given place at any given time to the quantity of nourishment available. So what have you and Nature done about overpopulation? Back here in 1988, we were seeing ourselves as a new sort of glacier, warm-blooded and clever, unstoppable, about to gobble up everything and then make love—and then double in size again.

On second thought, I am not sure I could bear to hear what you and Nature may have done about too many people for too small a food supply.

And here is a crazy idea I would like to try on you: Is it possible that we aimed rockets with hydrogen bomb warheads at each other, all set to go, in order to take our minds off the deeper problem—how cruelly Nature can be expected to treat us, Nature being Nature, in the by-and-by?

Now that we can discuss the mess we are in with some precision, I hope you have stopped choosing abysmally ignorant optimists for positions of leadership. They were useful only so long as nobody had a clue as to what was really going on—during the past seven million years or so. In my time they have been catastrophic as heads of sophisticated institutions with real work to do.

The sort of leaders we need now are not those who promise ultimate victory over Nature through perseverance in living as we do right now, but those with the courage and intelligence to present to the world what appears to be Nature’s stern but reasonable surrender terms:
  1. Reduce and stabilize your population.
  2. Stop poisoning the air, the water, and the topsoil.
  3. Stop preparing for war and start dealing with your real problems.
  4. Teach your kids, and yourselves, too, while you’re at it, how to inhabit a small planet without helping to kill it.
  5. Stop thinking science can fix anything if you give it a trillion dollars.
  6. Stop thinking your grandchildren will be OK no matter how wasteful or destructive you may be, since they can go to a nice new planet on a spaceship. That is really mean, and stupid.
  7. And so on. Or else.
Am I too pessimistic about life a hundred years from now? Maybe I have spent too much time with scientists and not enough time with speechwriters for politicians. For all I know, even bag ladies and bag gentlemen will have their own personal helicopters or rocket belts in A.D. 2088. Nobody will have to leave home to go to work or school, or even stop watching television. Everybody will sit around all day punching the keys of computer terminals connected to everything there is, and sip orange drink through straws like the astronauts.


Kurt Vonnegut

Link: The Comforts of the Apocalypse

"Call it dystopian narcissism: the conviction that our anxieties are uniquely awful; that the crises of our age will be the ones that finally do civilization in; that we are privileged to witness the beginning of the end."

Nineteen days after the world failed to end, blood stopped flowing to the brain of Harold Camping, prophet of doom. Had he felt his stroke coming as he confidently forecast apocalypse? Maybe not; maybe he had no more foresight into his own demise than the demise of the world. Or maybe he had simply confused the two—after all, he was approaching his 90th birthday, and his own mortality couldn’t have seemed far off when, on national billboards and his own radio network, he set a date (May 21, 2011) for the end of days. For some, it is a short mental step from “my end is imminent” to “the end of everything is imminent.” Call it apocalyptic narcissism.

We flatter ourselves when we imagine a world incapable of lasting without us in it—a world that, having ceased to exist, cannot forget us, discard us, or pave over our graves. Even if the earth no longer sits at the center of creation, we can persuade ourselves that our life spans sit at the center of time, that our age and no other is history’s fulcrum. “We live in the most interesting times in human history … the days of fulfillment,” writes the Rev. E.W. Jackson, Republican candidate for lieutenant governor of Virginia, in words that could have also come from the mouth of Saint Paul or Shabbetai Zevi or Hal Lindsey or any other visionary unable to accept the hard truth of the apocalyptic lottery: We’re virtually guaranteed to witness the end of nothing except our lives, and the present, far from fulfilling anything, is mainly distinguished by being the one piece of time with us in it.

Perhaps you, like me, are a good secularist, and perhaps Camping’s prophecies strike you as a perverse joke. (You may also be relieved to hear his stroke proved nonfatal.) But I find it harder to mock false prophets, because of the very real fear (of death, nothingness, irrelevance) to which their prophecies speak, and because I’m not at all convinced that secular culture is above their form of self-flattery. We’re living through a dystopia boom; secular apocalypses have, in the words of The New York Times, “pretty much owned” best-seller lists and taken on a dominant role in pop culture. These are fictions of infinite extrapolation, stories in which today’s source of anxiety becomes tomorrow’s source of collapse.

Suzanne Collins’s The Hunger Games projects reality television and social stratification into a televised tournament of death. Scott Westerfeld’s Uglies series manages to combine an energy crisis, an omnipresent surveillance state, and caste warfare between “uglies” and surgically enhanced “pretties.” Nor is the literature of collapse confined to the young-adult section. The World Without Us, Alan Weisman’s 2007 best seller, imagines in loving detail the decay of material civilization on an earth from which humans have vanished. Our extinction goes unexplained, but a sense of environmental catastrophe hangs heavy over the book; billing itself as nonfiction, its premise comes straight from dystopian sci-fi.

All of this literature is the product of what the philosopher John Gray has described as “a culture transfixed by the spectacle of its own fragility.” Call it dystopian narcissism: the conviction that our anxieties are uniquely awful; that the crises of our age will be the ones that finally do civilization in; that we are privileged to witness the beginning of the end.

Of course, today’s dystopian writers didn’t invent the ills they decry: Our wounds are real. But there is also a neurotic way of picking at a wound, of catastrophizing, of visualizing the day the wounded limb turns gangrenous and falls off. It’s this hunger for crisis, the need to assign our problems world-transforming import, that separates dystopian narcissism from constructive polemic. And this hunger, too, has its origins in a religious impulse, in particular, the impulse called “typology.”

Typology was originally a method of reading the Old Testament in the light of the New. More broadly, typology speaks to the sense in which past events prefigure the present, or the present finds fulfillment in the future. Ordinary historical thinking tells us to look backward to understand the present; typological thinking tells us to make sense of the present in light of the promised future. The events of past and present are revealed in their true form only when our faith reverses the flow of history. As the saying goes, “in the Old Testament the New Testament is concealed; in the New Testament the Old Testament is revealed.” Against the dictates of common sense, the past is seen to be the future’s blurred, less-authentic “copy.” So Adam is a type of Christ, the Flood is a type of baptism, and the binding of Isaac prefigures the Crucifixion, as Israel prefigures the Church. This meaning lives on in “typing” and “typesetting”; the words you read on a printed page are the ghostly impressions of a real, three-dimensional piece of iron somewhere.

Typology would be a theological relic were it simply a means of reading Scriptures. But as the literary critic Northrop Frye wrote, it is a far-reaching “mode of thought,” built on the “assumption that there is some meaning and point to history, and that sooner or later some event or events will occur which will indicate what that meaning or point is … that despite apparent confusion, even chaos, in human events, nevertheless those events are going somewhere and indicating something.”

Needless to say, this mode of thought is deeply appealing and deeply consoling. The critic Erich Auerbach argued that typological thinking helped set the course of Western literature: The possibility that seemingly trivial events might represent or prefigure the divine invested the struggles of ordinary men and women with new dignity. Think of how a mundane walk down the street can be transformed into a scene of high drama with the addition of earbuds and the soundtrack of your choice. Typological thinking does much the same thing to history, bringing order and import out of randomness.

That is just what happens, on the grandest possible scale, in apocalypse—literally, “the uncovering.” The destruction of history, and the unveiling of its purpose, happens at one stroke. Our culture’s apocalyptic stories, not least the Book of Revelation, resonate in part because they promise uncovered meaning. The madness of Revelation—its seven-headed Beast, its Whore of Babylon, and its celestial wedding feast in the New Jerusalem—has perhaps struck so many millions as a higher sanity because it speaks to the conviction that our own small victories and losses have a meaning that is eternal and profound. “Anyone coming ‘cold’ to the Book of Revelation, without context of any kind, would probably regard it as simply an insane rhapsody,” writes Frye. “And yet, if we were to explore below the repressions in our own minds that keep us ‘normal,’ we might find very similar nightmares of anxiety and triumph.”

The source of these nightmares must be old and deep—too old and deep to be dammed up by mere secularism. To a surprising extent, our secular stories of dystopia and collapse rehearse the old story of apocalypse. We own a slate of anxieties that would have been unimaginable to older generations with fears of their own; but much of our literature of collapse suggests that the future will fear exactly what we fear, only in exaggerated form. In this way, our anxieties are exalted. Yesterday’s fears were foolish—but today’s are existential. And today’s threats are revealed to be not some problems, but the problems. Dystopias can satisfy the typological urge to invest our own slice of history with ultimate meaning: We look back from an imagined future to discover that we are correct in our fears, that our problems are special because they will be the ones to destroy us.

Link: Humanity's Deep Future

When we peer into the fog of the deep future, what do we see – human extinction or a future among the stars?

… As the oldest university in the English-speaking world, Oxford is a strange choice to host a futuristic think tank, a salon where the concepts of science fiction are debated in earnest. The Future of Humanity Institute seems like a better fit for Silicon Valley or Shanghai. During the week that I spent with him, Bostrom and I walked most of Oxford’s small cobblestone grid. On foot, the city unfolds as a blur of yellow sandstone, topped by grey skies and gothic spires, some of which have stood for nearly 1,000 years. There are occasional splashes of green, open gates that peek into lush courtyards, but otherwise the aesthetic is gloomy and ancient. When I asked Bostrom about Oxford’s unique ambience, he shrugged, as though habit had inured him to it. But he did once tell me that the city’s gloom is perfect for thinking dark thoughts over hot tea.

Bostrom isn’t too concerned about extinction risks from nature. Not even cosmic risks worry him much, which is surprising, because our starry universe is a dangerous place. Every 50 years or so, one of the Milky Way’s stars explodes into a supernova, its detonation the latest gong note in the drumbeat of deep time. If one of our local stars were to go supernova, it could irradiate Earth, or blow away its thin, life-sustaining atmosphere. Worse still, a passerby star could swing too close to the Sun, and slingshot its planets into frigid, intergalactic space. Lucky for us, the Sun is well-placed to avoid these catastrophes. Its orbit threads through the sparse galactic suburbs, far from the dense core of the Milky Way, where the air is thick with the shrapnel of exploding stars. None of our neighbours look likely to blow before the Sun swallows Earth in four billion years. And, so far as we can tell, no planet-stripping stars lie in our orbital path. Our solar system sits in an enviable bubble of space and time.

But as the dinosaurs discovered, our solar system has its own dangers, like the giant space rocks that spin all around it, splitting off moons and scarring surfaces with craters. In her youth, Earth suffered a series of brutal bombardments and celestial collisions, but she is safer now. There are far fewer asteroids flying through her orbit than in epochs past. And she has sprouted a radical new form of planetary protection, a species of night watchmen that track asteroids with telescopes.

‘If we detect a large object that’s on a collision course with Earth, we would likely launch an all-out Manhattan project to deflect it,’ Bostrom told me. Nuclear weapons were once our asteroid-deflecting technology of choice, but not anymore. A nuclear detonation might scatter an asteroid into a radioactive rain of gravel, a shotgun blast headed straight for Earth. Fortunately, there are other ideas afoot. Some would orbit dangerous asteroids with small satellites, in order to drag them into friendlier trajectories. Others would paint asteroids white, so the Sun’s photons bounce off them more forcefully, subtly pushing them off course. Who knows what clever tricks of celestial mechanics would emerge if Earth were truly in peril.

Even if we can shield Earth from impacts, we can’t rid her surface of supervolcanoes, the crustal blowholes that seem bent on venting hellfire every 100,000 years. Our species has already survived a close brush with these magma-vomiting monsters. Some 70,000 years ago, the Toba supereruption loosed a small ocean of ash into the atmosphere above Indonesia. The resulting global chill triggered a food chain disruption so violent that it reduced the human population to a few thousand breeding pairs — the Adams and Eves of modern humanity. Today’s hyper-specialised, tech-dependent civilisations might be more vulnerable to catastrophes than the hunter-gatherers who survived Toba. But we moderns are also more populous and geographically diverse. It would take sterner stuff than a supervolcano to wipe us out.

‘There is a concern that civilisations might need a certain amount of easily accessible energy to ramp up,’ Bostrom told me. ‘By racing through Earth’s hydrocarbons, we might be depleting our planet’s civilisation startup-kit. But, even if it took us 100,000 years to bounce back, that would be a brief pause on cosmic time scales.’

It might not take that long. The history of our species demonstrates that small groups of humans can multiply rapidly, spreading over enormous volumes of territory in quick, colonising spasms. There is research suggesting that both the Polynesian archipelago and the New World — each a forbidding frontier in its own way — were settled by fewer than 100 human beings.

The risks that keep Bostrom up at night are those for which there are no geological case studies, and no human track record of survival. These risks arise from human technology, a force capable of introducing entirely new phenomena into the world.

Link: Deception Is Futile When Big Brother's Lie Detector Turns Its Eyes on You

Since September 11, 2001, federal agencies have spent millions of dollars on research designed to detect deceptive behavior in travelers passing through US airports and border crossings in the hope of catching terrorists. Security personnel have been trained—and technology has been devised—to identify, as an air transport trade association representative once put it, “bad people and not just bad objects.” Yet for all this investment and the decades of research that preceded it, researchers continue to struggle with a profound scientific question: How can you tell if someone is lying?

That problem is so complex that no one, including the engineers and psychologists developing machines to do it, can be certain if any technology will work. “It fits with our notion of justice, somehow, that liars can’t really get away with it,” says Maria Hartwig, a social psychologist at John Jay College of Criminal Justice who cowrote a recent report on deceit detection at airports and border crossings. The problem is, as Hartwig explains it, that all the science says people are really good at lying, and it’s incredibly hard to tell when we’re doing it.

In fact, most of us lie constantly—ranging from outright cons to minor fibs told to make life run more smoothly. “Some of the best research I’ve seen says we lie as much as 10 times every 24 hours,” says Phil Houston, a soft-spoken former CIA interrogator who is now CEO of QVerity, a company selling lie-detecting techniques in the business world. “There’s some research on college students that says it may be double and triple that. We lie a ton.” And yet, statistically, people can tell whether someone is telling the truth only around 54 percent of the time, barely better than a coin toss.

For thousands of years, attempts to detect deceit have relied on the notion that liars’ bodies betray them. But even after a century of scientific research, this fundamental assumption has never been definitively proven. “We know very little about deception from either a psychological or physiological view at the basic level,” says Charles Honts, a former Department of Defense polygrapher and now a Boise State University psychologist specializing in the study of deception. “If you look at the lie-detection literature, there’s nothing that ties it together, because there’s no basic theory there. It’s all over the place.”

Despite their fixation on the problem of deceit, government agencies aren’t interested in funding anything so abstract as basic research. “They want to buy hardware,” Honts says. But without an understanding of the mechanics of lying, it seems that any attempt to build a lie-detecting device is doomed to fail. “It’s like trying to build an atomic bomb without knowing the theory of the atom,” Honts says.

Take the polygraph. It functions today on the same principles as when it was conceived in 1921: providing a continuous recording of vital signs, including blood pressure, heart rate, and perspiration. But the validity of the polygraph approach has been questioned almost since its inception. It records the signs of arousal, and while these may be indications that a subject is lying—dissembling can be stressful—they might also be signs of anger, fear, even sexual excitement. “It’s not deception, per se,” says Judee Burgoon, Jay Nunamaker’s research partner at the University of Arizona. “But that little caveat gets lost in the shuffle.”

The US Army founded a polygraph school in 1951, and the government later introduced the machine as an employee-screening tool. Indeed, according to some experts, the polygraph can detect deception more than 90 percent of the time—albeit under very strictly defined criteria. “If you’ve got a single issue, and the person knows whether or not they’ve shot John Doe,” Honts says, “the polygraph is pretty good.” Experienced polygraph examiners like Phil Houston, legendary within the CIA for his successful interrogations, are careful to point out that the device relies on the skill of the examiner to produce accurate results—the right kind of questions, the experience to know when to press harder and when the mere presence of the device can intimidate a suspect into telling the truth. Without that, a polygraph machine is no more of a lie-detector than a rubber truncheon or a pair of pliers.

As a result, although some state courts allow them, polygraph examinations have rarely been admitted as evidence in federal court; they’ve been dogged by high false-positive rates, and notorious spies, including CIA mole Aldrich Ames, have beaten the tests. In 2003 the National Academy of Sciences reported that the evidence of polygraph accuracy was “scanty and scientifically weak” and that, while the device might be used effectively in criminal investigations, as a screening tool it was practically useless. By then, other devices and techniques that had been touted as reliable lie detectors—voice stress analysis, pupillometry, brain scanning—had also either been dismissed as junk science or not fully tested.

But spooks and cops remain desperate for technology that could boost their rate of success even a couple of points above chance. That’s why, in 2006, project managers from the Army’s polygraph school—by then renamed the Defense Academy for Credibility Assessment—approached Nunamaker and Burgoon. The government wanted them to build a new machine, a device that could sniff out liars without touching them and that wouldn’t need a trained human examiner: a polygraph for the 21st century.

Link: What We Should Fear

Each December for the past fifteen years, the literary agent John Brockman has pulled out his Rolodex and asked a legion of top scientists and writers to ponder a single question: What scientific concept would improve everybody’s cognitive tool kit? (Or: What have you changed your mind about?) This year, Brockman’s panelists (myself included) agreed to take on the subject of what we should fear. There’s the fiscal cliff, the continued European economic crisis, the perpetual tensions in the Middle East. But what about the things that may happen in twenty, fifty, or a hundred years? The premise, as the science historian George Dyson put it, is that “people tend to worry too much about things that it doesn’t do any good to worry about, and not to worry enough about things we should be worrying about.” A hundred and fifty contributors wrote essays for the project. The result is a recently published collection, “What *Should* We Be Worried About?” available without charge at John Brockman’s Edge.org.

A few of the essays are too glib; it may sound comforting to say that “the only thing we need to worry about is worry itself” (as several contributors suggested), but anybody who has lived through Chernobyl or Fukushima knows otherwise. Surviving disasters requires contingency plans, and so does avoiding them in the first place. But many of the essays are insightful, and bring attention to a wide range of challenges for which society is not yet adequately prepared.

One set of essays focusses on disasters that could happen now, or in the not-too-distant future. Consider, for example, our ever-growing dependence on the Internet. As the philosopher Daniel Dennett puts it:

We really don’t have to worry much about an impoverished teenager making a nuclear weapon in his slum; it would cost millions of dollars and be hard to do inconspicuously, given the exotic materials required. But such a teenager with a laptop and an Internet connection can explore the world’s electronic weak spots for hours every day, almost undetectably at almost no cost and very slight risk of being caught and punished.

As most Internet experts realize, the Internet is pretty safe from natural disasters because of its redundant infrastructure (meaning that there are many pathways by which any given packet of data can reach its destination) but deeply vulnerable to a wide range of deliberate attacks, either by censoring governments or by rogue hackers. (Writing on the same point, George Dyson makes the excellent suggestion of calling for a kind of emergency backup Internet, “assembled from existing cell phones and laptop computers,” which would allow the transmission of text messages in the event that the Internet itself was brought down.)

We might also worry about demographic shifts. Some are manifest, like the graying of the population (mentioned in Rodney Brooks’s essay) and the decline in the global birth rate (highlighted by Matt Ridley, Laurence Smith, and Kevin Kelly). Others are less obvious. The evolutionary psychologist Robert Kurzban, for example, argues that the rising gender imbalance in China (due to the combination of early-in-pregnancy sex-determination, abortion, the one-child policy, and a preference for boys) is a growing problem that we should all be concerned about. As Kurzban puts it, by some estimates, by 2020 “there will be 30 million more men than women on the mating market in China, leaving perhaps up to 15% of young men without mates.” He also notes that “cross-national research shows a consistent relationship between imbalanced sex ratios and rates of violent crime. The higher the fraction of unmarried men in a population, the greater the frequency of theft, fraud, rape, and murder.” This in turn tends to lead to a lower G.D.P., and, potentially, considerable social unrest that could ripple around the world. (The same of course could happen in any country in which prospective parents systematically impose a preference for boys.)

Another theme throughout the collection is what Stanford psychologist Brian Knutson called “metaworry”: the question of whether we are psychologically and politically constituted to worry about what we most need to worry about.

In my own essay, I suggested that there is good reason to think that we are not inclined that way, both because of an inherent cognitive bias that makes us focus on immediate concerns (like getting our dishwasher fixed) to the diminishment of our attention to long-term issues (like getting enough exercise to maintain our cardiovascular fitness) and because of a chronic bias toward optimism known as a “just-world fallacy” (the comforting but unrealistic idea that moral actions will invariably lead to just rewards). In a similar vein, the anthropologist Mary Catherine Bateson argues that “knowledgeable people expected an eventual collapse of the Shah’s regime in Iran, but did nothing because there was no pending date. In contrast, many prepared for Y2K because the time frame was so specific.” Furthermore, as the historian of ideas Noga Arikha puts it, “our world is geared at keeping up with a furiously paced present with no time for the complex past,” leading to a cognitive bias that she calls “presentism.”

As a result, we often move toward the future with our eyes too tightly focussed on the immediate to care much about what might happen in the coming century or two—despite potentially huge consequences for our descendants. As Knutson says, his metaworry

is that actual threats [to our species] are changing much more rapidly than they have in the ancestral past. Humans have created much of this environment with our mechanisms, computers, and algorithms that induce rapid, “disruptive,” and even global change. Both financial and environmental examples easily spring to mind.… Our worry engines [may] not retune their direction to focus on these rapidly changing threats fast enough to take preventative action.

The cosmologist Max Tegmark wondered what will happen “if computers eventually beat us at all tasks, developing superhuman intelligence?” As Tegmark notes, there is “little doubt that this can happen: our brains are a bunch of particles obeying the laws of physics, and there’s no physical law precluding particles from being arranged in ways that can perform even more advanced computations.” That so-called singularity—machines becoming smarter than people—could be, as he puts it, “the best or worst thing ever to happen to life as we know it, so if there’s even a 1% chance that there’ll be a singularity in our lifetime, I think a reasonable precaution would be to spend at least 1% of our GDP studying the issue and deciding what to do about it.” Yet, “we largely ignore it, and are curiously complacent about life as we know it getting transformed.”

The sci-fi writer Bruce Sterling tells us not to be afraid, because

Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s “minds on nonbiological substrates” that might allegedly have the “computational power of a human brain.” A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there’s no there there.

But Sterling’s optimism has little to do with reality. One leading artificial intelligence researcher recently told me that there was roughly a trillion dollars “to be made as we move from keyword search to genuine [A.I.] question answering based on the web.” Google just hired Ray Kurzweil to ramp up their investment in artificial intelligence, and although nobody has yet built a machine with the computational power of the human brain, at least three separate groups are actively trying, with many parties expecting success sometime in the next century.

Link: "Biological Intelligence is a Fleeting Phase in the Evolution of the Universe"

During an epoch of dramatic climate change 200,000 years ago, Homo sapiens (modern humans) evolved in Africa. Several leading scientists are asking: Is the human species entering a new evolutionary, post-biological inflection point?

Paul Davies, a British-born theoretical physicist, cosmologist, astrobiologist and Director of the Beyond Center for Fundamental Concepts in Science and Co-Director of the Cosmology Initiative at Arizona State University, says in his new book The Eerie Silence that any aliens exploring the universe will be AI-empowered machines. Not only are machines better able to endure extended exposure to the conditions of space, but they have the potential to develop intelligence far beyond the capacity of the human brain.

"I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of the universe," Davies writes. "If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature."

In the current search for advanced extraterrestrial life, SETI experts say the odds favor detecting alien AI rather than biological life, because the time between aliens developing radio technology and artificial intelligence would be brief.

“If we build a machine with the intellectual capability of one human, then within 5 years, its successor is more intelligent than all humanity combined,” says Seth Shostak, SETI chief astronomer. “Once any society invents the technology that could put them in touch with the cosmos, they are at most only a few hundred years away from changing their own paradigm of sentience to artificial intelligence,” he says.

ET machines would be infinitely more intelligent and durable than the biological intelligence that created them. Intelligent machines would be immortal, and would not need to exist in the carbon-friendly “Goldilocks Zones” current SETI searches focus on. An AI could self-direct its own evolution; each “upgrade” would be created with the sum total of its predecessor’s knowledge preloaded.

"I think we could spend at least a few percent of our time… looking in the directions that are maybe not the most attractive in terms of biological intelligence but maybe where sentient machines are hanging out." Shostak thinks SETI ought to consider expanding its search to the energy- and matter-rich neighborhoods of hot stars, black holes and neutron stars. 

Before the year 2020, scientists are expected to launch intelligent space robots that will venture out to explore the universe for us.

"Robotic exploration probably will always be the trail blazer for human exploration of far space," says Wolfgang Fink, physicist and researcher at Caltech. "We haven’t yet landed a human being on Mars but we have a robot there now. In that sense, it’s much easier to send a robotic explorer. When you can take the human out of the loop, that is becoming very exciting."

As the growing global population continues to increase the burden on the Earth’s natural resources, senior curator at the Smithsonian National Air and Space Museum, Roger Launius, thinks that we’ll have to alter human biology to prepare to colonize space. 

Launius looks at the historical debate surrounding human colonization of the solar system. Experiments have shown that certain life forms can survive in space. Recently, British scientists found that bacteria living on rocks taken from Britain’s Beer village were able to survive 553 days in space, on the exterior of the International Space Station (ISS). The microbes returned to Earth alive, proving they could withstand the harsh environment.

Humans, on the other hand, are unable to survive beyond about a minute and a half in space without significant technological assistance. Other than some quick trips to the moon and the ISS, astronauts haven’t spent too much time too far away from Earth. Scientists don’t know enough yet about the dangers of long-distance space travel on human biological systems. A one-way trip to Mars, for example, would take approximately six months. That means astronauts will be in deep space for more than a year with potentially life-threatening consequences.

Launius, who calls himself a cyborg for using medical equipment to enhance his own life, says the difficult question is knowing where to draw the line in transforming human biological systems to adapt to space.

Link: Why We Can't Solve Big Problems

…Apollo was not seen only as a victory for one of two antagonistic ideologies. Rather, the strongest emotion at the time of the moon landings was of wonder at the transcendent power of technology. From his perch in Lausanne, Switzerland, the writer Vladimir Nabokov cabled the New York Times, “Treading the soil of the moon, palpating its pebbles, tasting the panic and splendor of the event, feeling in the pit of one’s stomach the separation from terra—these form the most romantic sensation an explorer has ever known.”

To contemporaries, the Apollo program occurred in the context of a long series of technological triumphs. The first half of the century produced the assembly line and the airplane, penicillin and a vaccine for tuberculosis; in the middle years of the century, polio was on its way to being eradicated; and by 1979 smallpox would be eliminated. More, the progress seemed to possess what Alvin Toffler dubbed an “accelerative thrust” in Future Shock, published in 1970. The adjectival swagger is pardonable: for decades, technology had been increasing the maximum speed of human travel. During most of history, we could go no faster than a horse or a boat with a sail; by the First World War, automobiles and trains could propel us at more than 100 miles an hour. Every decade thereafter, cars and planes sped humans faster. By 1961, a rocket-powered X-15 had been piloted to more than 4,000 miles per hour; in 1969, the crew of Apollo 10 flew at 25,000. Wasn’t it the very time to explore the galaxy—“to blow this great blue, white, green planet or to be blown from it,” as Saul Bellow wrote in Mr. Sammler’s Planet (also published in 1970)?

Since Apollo 17's flight in 1972, no humans have been back to the moon, or gone anywhere beyond low Earth orbit. No one has traveled faster than the crew of Apollo 10. (Since the last flight of the supersonic Concorde in 2003, civilian travel has become slower.) Blithe optimism about technology’s powers has evaporated, too, as big problems that people had imagined technology would solve, such as hunger, poverty, malaria, climate change, cancer, and the diseases of old age, have come to seem intractably hard.

I remember sitting in my family’s living room in Berkeley, California, watching the liftoff of Apollo 17. I was five; my mother admonished me not to stare at the fiery exhaust of the Saturn 5 rocket. I vaguely knew that this was the last of the moon missions—but I was absolutely certain that there would be Mars colonies in my lifetime. What happened? 

That something happened to humanity’s capacity to solve big problems is a commonplace. Recently, however, the complaint has developed a new stridency among Silicon Valley’s investors and entrepreneurs, although it is usually expressed a little differently: people say there is a paucity of real innovations. Instead, they worry, technologists have diverted us and enriched themselves with trivial toys. 

The motto of Founders Fund, a venture capital firm started by Peter Thiel, a cofounder of PayPal, is “We wanted flying cars—instead we got 140 characters.” Founders Fund matters, because it is the investment arm of what is known locally as the “PayPal Mafia,” currently the dominant faction in Silicon Valley, which remains the most important area on the planet for technological innovation. (Other members include Elon Musk, the founder of SpaceX and Tesla Motors; Reid Hoffman, executive chairman of LinkedIn; and Keith Rabois, chief operating officer of the mobile payments company Square.) Thiel is caustic: last year he told the New Yorker that he didn’t consider the iPhone a technological breakthrough. “Compare [it] with the Apollo program,” he said. The Internet is “a net plus—but not a big one.” Twitter gives 500 people “job security for the next decade,” but “what value does it create for the entire economy?” And so on. Max Levchin, another cofounder of PayPal, says, “I feel like we should be aiming higher. The founders of a number of startups I encounter have no real intent of getting anywhere huge … There’s an awful lot of effort being expended that is just never going to result in meaningful, disruptive innovation.”

But Silicon Valley’s explanation of why there are no disruptive innovations is parochial and reductive: the markets—in particular, the incentives that venture capital provides entrepreneurs—are to blame. According to Founders Fund’s manifesto, “What Happened to the Future?,” written by Bruce Gibney, a partner at the firm: “In the late 1990s, venture portfolios began to reflect a different sort of future … Venture investing shifted away from funding transformational companies and toward companies that solved incremental problems or even fake problems … VC has ceased to be the funder of the future, and instead become a funder of features, widgets, irrelevances.” Computers and communications technologies advanced because they were well and properly funded, Gibney argues. But what seemed futuristic at the time of Apollo 11 “remains futuristic, in part because these technologies never received the sustained funding lavished on the electronics industries.”

The argument, of course, is wildly hypocritical. PayPal’s capos made their fortunes in public stock offerings and acquisitions of companies that did more or less trivial things. Levchin’s last startup, Slide, was a Founders Fund investment: it was acquired by Google in 2010 for about $200 million and shuttered earlier this year. It developed Facebook widgets such as SuperPoke and FunWall. 

But the real difficulty with Silicon Valley’s explanation is that it is insufficient to the case. The argument that venture capitalists lost their appetite for risky but potentially important technologies clarifies what’s wrong with venture capital and tells us why half of all funds have provided flat or negative returns for the last decade. It also usefully explains how a collapse in nerve reduced the scope of the companies that got funded: with the exception of Google (which wants to “organize the world’s information and make it universally accessible and useful”), the ambitions of startups founded in the last 15 years do seem derisory compared with those of companies like Intel, Apple, and Microsoft, founded from the 1960s to the late 1970s. (Bill Gates, Microsoft’s founder, promised to “put a computer in every home and on every desktop,” and Apple’s Steve Jobs said he wanted to make the “best computers in the world.”) But the Valley’s explanation conflates all of technology with the technologies that venture capitalists like: traditionally, as Gibney concedes, digital technologies. Even during the years when VCs were most risk-happy, they preferred investments that required little capital and offered an exit within eight to 10 years. The venture capital business has always struggled to invest profitably in technologies, such as biotechnology and energy, whose capital requirements are large and whose development is uncertain and lengthy; and VCs have never funded the development of technologies that are meant to solve big problems and possess no obvious, immediate economic value. The account is a partial explanation that forces us to ask: putting aside the personal-computer revolution, if we once did big things but do so no longer, then what changed?

Silicon Valley’s explanation has this fault, too: it doesn’t tell us what should be done to encourage technologists to solve big problems, beyond asking venture capitalists to make better investments. (Founders Fund promises to “run the experiment” and “invest in smart people solving difficult problems, often difficult scientific or engineering problems.”) Levchin, Thiel, and Garry Kasparov, the former world chess champion, had planned a book, to be titled The Blueprint, that would “explain where the world’s innovation has gone.” Originally intended to be released in March of this year, it has been indefinitely postponed, according to Levchin, because the authors could not agree on a set of prescriptions.

Let’s stipulate that venture-backed entrepreneurialism is essential to the development and commercialization of technological innovations. But it is not sufficient by itself to solve big problems, nor could its relative sickliness by itself undo our capacity for collective action through technology.

Link: Welcome to the Future Nauseous

Both science fiction and futurism seem to miss an important piece of how the future actually turns into the present. They fail to capture the way we don’t seem to notice when the future actually arrives.

Sure, we can all see the small clues all around us: cellphones, laptops, Facebook, Prius cars on the street. Yet, somehow, the future always seems like something that is going to happen rather than something that is happening; future perfect rather than present-continuous. Even the nearest of near-term science fiction seems to evolve at some fixed receding-horizon distance from the present.

There is an unexplained cognitive dissonance between changing-reality-as-experienced and change as imagined, and I don’t mean specifics of failed and successful predictions.

My new explanation is this: we live in a continuous state of manufactured normalcy. There are mechanisms that operate — a mix of natural, emergent and designed — that work to prevent us from realizing that the future is actually happening as we speak.  To really understand the world and how it is evolving, you need to break through this manufactured normalcy field. Unfortunately, that leads, as we will see, to a kind of existential nausea.

The Manufactured Normalcy Field

Life as we live it has this familiar sense of being a static, continuous present. Our ongoing time travel (at a velocity of one second per second) never seems to take us to a foreign place. It is always 4 PM; it is always tea-time.

Of course, a quick look at your own life ten or twenty years back will turn up all sorts of evidence that your life has, in fact, been radically transformed, at both the micro-level and the macro-level. At the micro-level, I now possess a cellphone that works better than Captain Kirk’s communicator, but I don’t feel like I am living in the future I imagined back then, even a tiny bit. For a macro example, back in the eighties, people used to paint scary pictures of the world with a few billion more people and water wars. I think I wrote essays in school about such things.  Yet we’re here now, and I don’t feel all that different, even though the scary predicted things are happening on schedule.  To other people (this is important).

Try and reflect on your life. I guarantee that you won’t be able to feel any big change in your gut, even if you are able to appreciate it intellectually.

The psychology here is actually not that interesting.  A slight generalization of normalcy bias and denial of black-swan futures is sufficient.  What is interesting is how this psychological pre-disposition to believe in an unchanging, normal present doesn’t kill us.

How, as a species, are we able to prepare for, create, and deal with the future, while managing to effectively deny that it is happening at all?

Futurists, artists and edge-culturists like to take credit for this. They like to pretend that they are the lonely, brave guardians of the species who deal with the “real” future and pre-digest it for the rest of us.

But this explanation falls apart with just a little poking. It turns out that the cultural edge is just as frozen in time as the mainstream. It is just frozen in a different part of the time theater, populated by people who seek more stimulation than the mainstream, and draw on imagined futures to feed their cravings rather than inform actual future-manufacturing.

The two beaten-to-death ways of understanding this phenomenon are due to McLuhan (“We look at the present through a rear-view mirror. We march backwards into the future.”) and William Gibson (“The future is already here; it is just unevenly distributed.”)

Both framing perspectives have serious limitations that I will get to. What is missing in both needs a name, so I’ll call the “familiar sense of a static, continuous present” a Manufactured Normalcy Field. For the rest of this post, I’ll refer to this as the Field for short.

So we can divide the future into two useful pieces: things coming at us that have been integrated into the Field, and things that have not. The integration kicks in at some level of ubiquity. Gibson got that part right.

Let’s call the crossing of the Field threshold by a piece of futuristic technology normalization (not to be confused with the postmodernist sense of the term, but related to the mathematical sense). Normalization involves incorporation of a piece of technological novelty into larger conceptual metaphors built out of familiar experiences.

A simple example is commercial air travel.

Link: Warren Ellis: How To See The Future

The concept of calling an event Improving Reality is one of those great science fiction ideas. Twenty-five years ago, you’d have gone right along with the story that, in 2012, people would come to a tech-centric town to talk about how to improve reality. Being able to locally adjust the brightness of the sky. Why wouldn’t you? That’s the stuff of the consensus future, right there. The stories we agree upon. Like how in old science fiction stories Venus was always a “green hell” of alien jungle, and Mars was always an exotic red desert crisscrossed by canals.

In reality, of course, Venus is a high-pressure shithole that we’re technologically a thousand years away from being able to walk on, and there’s bugger all on Mars. Welcome to JG Ballard’s future, fast becoming a consensus of its own, wherein the future is intrinsically banal. It is, essentially, the sensible position to take right now.

A writer called Venkatesh Rao recently used the term “manufactured normalcy” to describe this. The idea is that things are designed to activate a psychological predisposition to believe that we’re in a static and dull continuous present. Atemporality, considered to be the condition of the early 21st century. Of course Venus isn’t a green hell – that would be too interesting, right? Of course things like Google Glass and Google Gloves look like props from ill-received science fiction film and TV from the ’90s and 2000s. Of course getting on a plane to jump halfway across the planet isn’t a wildly different experience from getting on a train from London to Scotland in the 1920s – aside from the radiation and groping.

We hold up iPhones and, if we’re relatively conscious of history, we point out that this is an amazing device that contains a live map of the world and the biggest libraries imaginable and that it’s an absolute paradigm shift in personal communication and empowerment. And then some knob says that it looks like something from Star Trek Next Generation, and then someone else says that it doesn’t even look as cool as Captain Kirk’s communicator in the original and then someone else says no but you can buy a case for it to make it look like one and you’re off to the manufactured normalcy races, where nobody wins because everyone goes to fucking sleep.

And reality does not get improved, does it?

But I’ll suggest to you something. The theories of atemporality and manufactured normalcy and zero history can be short-circuited by just one thing.

Looking around.

Ballardian banality comes from not getting the future that we were promised, or getting it too late to make the promised difference.

This is because we look at the present day through a rear-view mirror. This is something Marshall McLuhan said back in the Sixties, when the world was in the grip of authentic-seeming future narratives. He said, “We look at the present through a rear-view mirror. We march backwards into the future.”

He went on to say this, in 1969, the year of the crewed Moon landing: “Because of the invisibility of any environment during the period of its innovation, man is only consciously aware of the environment that has preceded it; in other words, an environment becomes fully visible only when it has been superseded by a new environment; thus we are always one step behind in our view of the world. The present is always invisible because it’s environmental and saturates the whole field of attention so overwhelmingly; thus everyone is alive in an earlier day.”

Three years earlier, Philip K Dick wrote a book called Now Wait For Last Year.

Let me try this on you:

The Olympus Mons mountain on Mars is so tall and yet so gently sloped that, were you suited and supplied correctly, ascending it would allow you to walk most of the way to space. Mars has a big, puffy atmosphere, taller than ours, but there’s barely anything to it at that level. 30 Pascals of pressure, which is what we get in an industrial vacuum furnace here on Earth. You may as well be in space. Imagine that. Imagine a world where you could quite literally walk to space.

That’s actually got a bit more going for it, as an idea, than exotic red deserts and canals. Imagine living in a Martian culture for a moment, where this thing is a presence in the existence of an entire sentient species. A mountain that you cannot see the top of, because it’s a small world and the summit wraps behind the horizon. Imagine settlements creeping up the side of Olympus Mons. Imagine battles fought over sections of slope. Generations upon generations of explorers dying further and further up its height, technologies iterated and expended upon being able to walk to within leaping distance of orbital space. Manufactured normalcy would suggest that, if we were the Martians, we would find this completely dull within ten years and bitch about not being able to simply fart our way into space.
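Ellis’s walk-to-space arithmetic is easy to sanity-check with a toy isothermal-atmosphere model. The three inputs below (mean Martian surface pressure ~610 Pa, scale height ~11 km, summit elevation ~21 km) are rough published averages supplied here for illustration, so treat the output as an order-of-magnitude figure only:

```python
import math

# Toy isothermal-atmosphere estimate of air pressure at the summit of
# Olympus Mons. The inputs are rough published averages, not mission data.
P0 = 610.0  # Pa: mean Martian surface pressure at the datum
H = 11.0    # km: approximate atmospheric scale height of Mars
z = 21.0    # km: approximate summit elevation of Olympus Mons

# Exponential pressure falloff with altitude: p(z) = P0 * exp(-z / H)
p_summit = P0 * math.exp(-z / H)
print(f"estimated summit pressure: {p_summit:.0f} Pa")
```

The crude model lands in the tens of pascals, the same near-vacuum regime Ellis quotes; real summit pressure also swings with season and dust load, which this sketch ignores.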

Link: "We want to build a Planetary Nervous System"

Interview with Dirk Helbing, Chair of Sociology, in particular of Modeling and Simulation, at ETH Zürich, the Swiss Federal Institute of Technology Zurich (in German: Eidgenössische Technische Hochschule Zürich), Switzerland.

As Scientific Coordinator, you are one of the main driving forces behind FuturICT, one of Europe’s Future and Emerging Technologies (FET) Flagship Pilot research initiatives. This is one of the most ambitious, comprehensive, timely and revolutionary scientific efforts ever attempted. In the context of the communication and socio-economic challenges faced by humanity in the 21st Century, what can you highlight about the visionary goals of this enterprise?

Humanity has invested billions to reveal the forces of nature using large particle colliders. Today, we can send men to the moon, but financial crises, terrorism, global environmental change and other problems suggest that we need to pay more attention to what is happening on Earth.

The majority of pressing problems in our world, such as economic instability, wars and disease spreading, are related to human behaviour and the dense interconnectedness of our society. Human knowledge of how society and the economy work, however, has serious gaps. We also do not truly understand the relationship between society and the globally pervasive technological system connecting us. It is imperative that we prioritise closing these knowledge gaps, to help us mitigate future crises.

To achieve this, we believe that a large-scale scientific effort such as FuturICT is needed to bring the top researchers from the social, computational, natural, and engineering sciences together. By combining expertise from across these fields, we hope to open up the doors to a new understanding of society’s behaviour, and how we can best manage our connected social and technological world in a sustainable manner.

Some of the objectives stated by FuturICT seem to belong to the realm of science fiction for the everyman. What concrete examples resulting from FuturICT can you give of real-life developments to be experienced by the lay user? Broadly, what are the specific societal and economic leverage effects pursued by FuturICT?

This project is indeed unprecedented. But it has already succeeded in uniting researchers across a wide range of disciplines to work together towards this ambitious goal, and therefore has the potential to make advances which have not been possible before.

We are planning to create interconnected Observatories to explore possible crises and opportunities relating to our financial and economic system, the social system, health, environment, and so on. These observatories will, for example, detect advance warning signs by mining large amounts of system-relevant data. They will allow citizens and policy makers to explore policy options and their possible impacts, including undesirable side effects that should be avoided. We believe that, in many cases, systemic crises could be prevented or mitigated if our responses were quicker and we had the right technologies to support the assessment of complex systems with their often counter-intuitive behavior.
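As a concrete (if generic) illustration of what “detecting advance warning signs by mining data” can mean: in complex-systems work, rising lag-1 autocorrelation in a time series is a standard statistical precursor of a critical transition (“critical slowing down”). The sketch below implements that single indicator; it is an illustration of the idea, not FuturICT’s actual pipeline (which the interview does not specify), and the window size and threshold are arbitrary:

```python
# Crude early-warning indicator: flag time windows whose lag-1
# autocorrelation exceeds a threshold, a classic statistical precursor
# of a critical transition in complex-systems research.
def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a sequence (0.0 if the series is flat)."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den if den else 0.0

def warning_flags(series, window=50, ac_threshold=0.8):
    """Return start indices of sliding windows whose indicator fires."""
    flags = []
    for start in range(0, len(series) - window + 1):
        w = series[start:start + window]
        if lag1_autocorr(w) > ac_threshold:
            flags.append(start)
    return flags
```

On a steadily drifting series `warning_flags` fires for most windows; on uncorrelated or alternating noise it stays quiet.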

In the past, we have been quite successful in developing new approaches to avoid crowd disasters, for example. We are confident that we will be able to identify impending conflicts, make short-term forecasts for disease spreading (based on recent discoveries in network theory), contribute to a more resilient financial architecture, and create more sustainable system designs – all for the benefit of the average person.
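The modelling substrate behind such short-term epidemic forecasts can be sketched with the classic SIR compartmental model. The network-theoretic forecasts Helbing alludes to are far richer than this; the parameters below (beta = 0.3, gamma = 0.1, i.e. a basic reproduction number of 3) are illustrative choices, not empirical estimates:

```python
# Minimal SIR (Susceptible-Infected-Recovered) epidemic sketch on
# normalized populations, stepped with explicit Euler integration.
def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """One Euler step of the SIR equations."""
    new_infections = beta * s * i * dt   # S -> I at rate beta * S * I
    new_recoveries = gamma * i * dt      # I -> R at rate gamma * I
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def simulate(days=160, i0=0.001):
    """Run an outbreak from a small seed; return final state and peak prevalence."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
        peak = max(peak, i)
    return s, i, r, peak
```

With a reproduction number of 3 the toy epidemic peaks at roughly 30% prevalence and burns through most of the susceptible pool: the kind of trajectory a forecasting observatory would be trying to anticipate and flatten.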

FuturICT will also create new socio-inspired information and communication systems and participatory platforms. Everybody will be able to benefit from these new technologies (in a similar way as most people are using the internet and mobile phones today). The economic potential of these innovations can be estimated by looking at the huge value of social networking companies.


Link: James Howard Kunstler on Why Technology Won't Save Us

James Howard Kunstler is a novelist and critic who made his name trashing suburbia.  The Geography of Nowhere, published in 1994, is a wildly entertaining rant against strip malls, fast food, and America’s “happy motoring utopia.”  A decade later, he followed up with The Long Emergency, in which he argued persuasively that the decline of cheap oil will bring an end to civilized life as we know it.

In his latest book, Too Much Magic: Wishful Thinking, Technology, and the Fate of the Nation, Kunstler zeroes in on the central narrative of our time: that we are a highly evolved and technologically sophisticated civilization that will use our ingenuity and engineering expertise to come up with a solution to all the problems we face, from the end of cheap oil to the arrival of extreme climate change.  In other words, we’re not going to collapse into the dust bin of history like the Mayans or the Easter Islanders, because we have iPads and antibiotics.

In Kunstler’s view, this is a childish fantasy. “I’m serenely convinced that we are heading into what will amount to a ‘time out’ from technological progress as we know it,” Kunstler, who is 63, told me from his home in upstate New York. “A lot of these intoxications and deliriums and beliefs about technology are going to run into a wall of serious disappointment.” In short, Kunstler believes we are living on borrowed time – our banking and political systems are corrupt, our fossil fuel reserves are dwindling, the seas are rising – but we’re still partying like it’s 1959.  “Reality itself is very uncomfortable with fraud and untruths. Sooner or later, accounts really do have to be settled.”

…What, specifically, are those problems?
Peak oil and the exhaustion of material resources, climate change, the failure of the banking system, and political turmoil.

That’s quite a list!  In The Long Emergency, you argued that the end of cheap oil basically meant the end of modern life as we know it.  And yet, paradoxically, in the last few years there has been a boom in unconventional oil and gas.  How has that changed your views about the consequences of peak oil?
There is a stupendous volume of propaganda, and wishful thinking, that we can replace cheap oil from the Middle East with unconventional oil and unconventional gas – namely shale gas and shale oil.  I think the whole game really founders on money issues and capital issues, and this is very poorly understood by the public – including by people who ought to know better, like the mainstream media.  We’ve been seeing headlines lately suggesting that America will soon be energy independent.  Or that somehow America has magically become a net oil exporter.  This is nonsense.  The bottom line is, once you are trying to replace a shortage of easy-to-get conventional oil with unconventional, expensive oil, you’re stuck in a trap.  There is a paradox there: you really need a cheap oil economy to support an expensive oil economy.  Without that underlying cheap oil economy, we’re probably not going to get much of that expensive oil that’s in difficult to get places, or that requires some extreme and complex production method for getting it out of the ground.

People in the oil and gas industry argue that technological innovations like horizontal drilling are opening up new reserves all the time.
We’re not paying attention to what is turning out to be the biggest shortage of all, which is the shortage of capital, based on the impairments of capital that are now underway in our disabled banking system.  What this all boils down to is that the money is not going to be there to do the things we’d hoped we’d be able to do.  There is only so far that wishing, and a strong will, will get you.  Ultimately, you do need to have some kind of accumulated wealth to accomplish these things, and that’s what capital is.  For several hundred years, we’ve had a pretty good system of accumulating it, accounting for it, storing it, and allocating it for useful purposes.  That’s what capitalism is about.  Capitalism – contrary to a lot of bullshit – is not a belief system.  It is not a religion.  It is simply a set of laws governing the behavior of surplus wealth.  What we have done in the last 25 years is introduce so many layers of untruth and accounting fraud that it’s no longer possible for money to truly represent the reality of accumulated wealth.  These lies are so deeply impairing the banking system, and all of the mechanisms that go with it, that we’re going to end up in a crisis of capital, even before we end up in a crisis of energy.

You write a lot about the failure of political leadership.  In particular, you go after President Obama for not appointing tough regulators to oversee Wall Street.
Obama has turned out to be fairly clueless.  He seems like a decent chap (and by the way, I voted for him).  I think that Obama’s failure to reestablish the rule of law in money matters is the most damaging thing that he’s done – and perhaps the most damaging thing that has happened in American politics in my lifetime.  Because once the rule of law is absent in money matters, then anything really goes in politics.  Any untruth is admissible.  Any distortion of reality is OK.  It is a profoundly dangerous place for a culture to go.  And there is no sign, as we enter the election of 2012, that he has any plans to rectify that.  We’re in a situation now where the rule of law is simply AWOL in American economic matters.

Our Underground Future

Buried nuclear plants? Subterranean stadiums? The next great frontier may just lie beneath our feet.

Though the basic idea has existed for decades, new engineering techniques and an increasing interest in sustainable urban growth have created fresh momentum for what once seemed like a notion out of Jules Verne. And the world has witnessed some striking new achievements. The city of Almere, in the Netherlands, built an underground trash network that uses suction tubes to transport waste out of the city at 70 kilometers per hour, making garbage trucks unnecessary. In Malaysia, a sophisticated new underground highway tunnel doubles as a discharge tunnel for floodwater. In Germany, a former iron mine is being converted into a nuclear waste repository, while scientists around the world explore the possibility of building actual nuclear power plants underground.

Overall, though, the cause of the underground has encountered resistance, in large part because digging large holes and building things inside them tends to be extremely expensive and technically demanding. Boston offers perfect examples of the pluses and minuses of the endeavor: Putting the Post Office Square parking lot underground created a park and a beloved urban amenity, but the much more ambitious Big Dig turned out to be a drawn-out and unspeakably costly piece of urban reengineering.

And perhaps an even greater obstacle is the psychological one. As Ariaratnam put it, “Even in a condo tower, the penthouse on the top floor is the most attractive thing—everyone wants to be higher.” The underground, by contrast, calls to mind darkness, dirt, even danger—and when we imagine what it would look like for civilization to truly colonize it, we think of gophers and mole people. Little wonder that our politicians and urban designers don’t afford the underground anywhere near the level of attention and long-term vision they lavish on the surface. In a world where most people are accustomed to thinking of progress as pointing toward the heavens, it can be hard to retrain the imagination to aim downward.
