Sunshine Recorder

Link: Out of Sight

The Internet delivered on its promise of community for blind people, but accessibility is easy to overlook.

I have been blind since birth. I’m old enough to have completed my early schooling at a time when going to a special school for blind kids was the norm. In New Zealand, where I live, there is only one school for the blind. It was common for children to leave their families when they were five, to spend the majority of the year far from home in a school hostel. Many family relationships were strained as a result. Being exposed to older kids and adults with the same disability as you, however, can supply you with exemplars. It allows the blind to see other blind people being successful in a wide range of careers, raising families and being accepted in their local community. A focal point, such as a school for the blind, helps foster that kind of mentoring.

The Internet has expanded the practical meaning of the word community. New technology platforms are rarely designed to be accessible to people unlike the designers themselves, but that doesn’t stop everyone who can use them from doing so. For blind people, the Internet has allowed an international community to flourish where there wasn’t much of one before, allowing people with shared experiences, interests, and challenges to forge a communion. Just as important, it has allowed blind people to participate in society in ways that have often otherwise been foreclosed by prejudice. Twitter has been at the heart of this, helping bring blind people from many countries and all walks of life together. It represents one of the most empowering aspects of the Internet for people with disabilities — its fundamentally textual nature and robust API, supporting an ecosystem of innovative accessible apps, have made it an equalizer. Behind the keyboard, no one need know you’re blind or have any other disability, unless you choose to let them know.

With the mainstreaming of blind kids now the norm, real-world networking opportunities are less frequent. That’s why the Internet has become such an important tool in the “blind community.” While there’s never been a better time in history to be blind, the best could be yet to come — provided the new shape the Internet takes remains accessible to everyone. In terms of being able to live a quality, independent life without sight, the Internet has been the most dramatic change in the lives of blind people since the invention of Braille. I can still remember having to go into a bank to ask the teller to read my bank balances to me, cringing as she read them in a very loud, slow voice (since clearly a blind person needs to be spoken to slowly).

Because of how scattered the blind community is and how much desire there is for us to share information and experiences, tech-savvy blind people were early Internet adopters. In the 1980s, as a kid with a 2400-baud modem, I’d make expensive international calls from New Zealand to a bulletin-board system in Pittsburgh that had been established specifically to bring blind people together. My hankering for information, inspiration, and fellowship meant that even as a cash-strapped student, I felt the price of the calls was worth paying.

Blind people from around the world have access to many technologies that get us online. Windows screen readers speak what’s on the screen, and optionally make the same information available tactually via a Braille display. Just as some sighted people consider themselves “visual learners,” so some blind people retain information better when it’s under their fingertips. Yes, contrary to popular belief, Braille is alive and well, having enjoyed a renaissance thanks to refreshable Braille display technology and products like commercial eBooks.

Outside the Windows environment, Apple is the exemplary player. Every Mac and iOS device includes a powerful screen reader called VoiceOver. Before Apple added VoiceOver to the iPhone 3GS in 2009, those of us who are blind saw the emergence of touch screens as a real threat to our hard-won gains. We’d pick up an iPhone, and as far as we were concerned, it was a useless piece of glass. Apple came up with a paradigm that made touch screens useable by the blind, and it was a game changer. Android has a similar product which, we hope, will continue to mature.

All this assistive technology means that the technological life I lead isn’t much different from that of a sighted person. I’m sitting at my desk in my office, writing this article in Microsoft Word. Because I lack the discipline to put my iPhone on “Do Not Disturb”, the iPhone is chiming at me from time to time, and I lean over to check the notification. Like other blind people, I use the Internet to further my personal and professional interests that have nothing to do with blindness.

But social trends haven’t kept up with technological ones. It’s estimated that in the United States, around 70 percent of working-aged blind people are unemployed. And the biggest barrier posed by blindness is not lack of sight – it’s other people’s ignorance. Since sight is such a dominant sense, a lot of potential employers close their eyes and think, “I couldn’t do this job if I couldn’t see, so she surely can’t either”. They forget that blindness is our normality. Deprive yourself of such a significant source of information by putting on a blindfold, and of course you’re going to be disorientated. But that’s not the reality we experience. It’s perfectly possible to function well without sight.

Just as there are societal barriers, we’ve yet to reach an accessible tech utopia – far from it. Blind people are inhibited in our full participation in society because not all online technologies are accessible to screen reading software. Most of this problem is due to poor design, some of it due to the choices made by content creators. Many blind people enjoy using Twitter, because text messages of 140 characters are at its core. If you tell me in a tweet what a delicious dinner you’ve had, I can read that and be envious. If you simply take a picture of your dinner and don’t include any text in the tweet, I’m out of the loop. Some blind people were concerned when reporters appeared to have caught wind of a new feature that allowed full tweets to be embedded in other tweets as images, which would have meant the conversations that thrive on this platform would be out of reach for our screen readers. Twitter, to its credit, reached out to us and made clear this was not the case. But even though it turned out to be a false alarm, the Twitter episode brought home to many of us just how fragile accessibility really is.

My voice is sometimes not heard on popular mainstream sites, due to a technology designed to thwart spam bots. Many fully-sighted people complain about CAPTCHA, the hard-to-read characters you sometimes need to type into a form before you can submit it. Since these characters are graphical, they can stop a blind person in their tracks. Plug-ins can assist in many cases, and sometimes an audio challenge is offered. But the audio doesn’t help people who are deaf as well as blind. It’s encouraging to see an increasing number of sites trying mathematical or simple word puzzles that keep the spammers out but let disabled people in.

Many in the media seem wary of “post-text Internet,” a term popularized by economics blogger Felix Salmon in a post explaining why he was joining a television station, Fusion. “Text has had an amazing run, online, not least because it’s easy and cheap to produce,” he wrote. But for digital storytelling, “the possibilities are much, much greater.” Animation, videos, and images appeal to him as an arsenal of tools for a more “immersive” experience. If writers feel threatened by this new paradigm, he suggests, it’s because they’re unwilling to experiment with new models. But for blind people, the threat could be much more grave.

Some mobile apps and websites, despite offering information of interest, are inaccessible. Usually this is because links and buttons containing images don’t offer alternative textual labels. This is where the worry about being shut out of a “post-text” internet feels most acute. While adding text is an easy way to ensure access for everyone, a wholesale shift in the Internet’s orientation from text to image would further enable designers’ often lax commitment to accessibility.

I feel good about how the fusion of mainstream and assistive technologies has facilitated inclusion, but the pace of technological change is frenetic. Hard-won gains are easily lost. It’s therefore essential that we as a society come down on the side of technologies that allow access for all.

While we must be vigilant, there is cause to be optimistic. Blindness often begins to hit teenagers hard at the time their sighted peers are starting to drive. Certainly, not being able to get into a car and drive is a major annoyance of blindness. As a dad to four kids, I have to plan our outings a lot more carefully, because of the need to rely on public transport. Self-driving car technology has the potential to change the lives of blind people radically. While concerns persist about Google’s less than stellar track record on accessibility, products like Google Glass could potentially be used to provide feedback based on a combination of object/face recognition and crowd-sourcing that could help us navigate unfamiliar surroundings more efficiently. Add to that the ability to fully control currently inaccessible, touch-screen-based appliances, and the “Internet of things” has potential for mitigating the impact of blindness – provided we as a society choose to proceed inclusively.

Not only has the Internet expanded the concept of “community”, it has redefined the ways in which traditional communities engage with one another. I don’t need to go to the supermarket and ask for a shelf-packer to help me shop, I can investigate the overwhelming number of choices of just about any product, and take my pick, totally independently. When I interact with any person or business online, they need not know I’m blind, unless I choose to tell them. To disclose or not to disclose is my choice, in any situation. That’s liberating and empowering.

But to fulfill all the promise of the Internet, we must make sure that, just as someone in a wheelchair can negotiate a curb cut, open a door or use an elevator, the life-changing power of the Internet is available to us all – whether we see it, hear it, or touch it.

Link: The Lights Are On but Nobody’s Home

Who needs the Internet of Things? Not you, but corporations who want to imprison you in their technological ecosystem

Prepare yourself. The Internet of Things is coming, whether we like it or not, apparently. Though if the news coverage — the press releases repurposed as service journalism, the breathless tech-blog posts — is to be believed, it’s what we’ve always wanted, even if we didn’t know it. Smart devices, sensors, cameras, and Internet connectivity will be everywhere, seamlessly and invisibly integrated into our lives, and this will make society more harmonious through the gain of a million small efficiencies. In this vision, the smart city isn’t plagued by deteriorating infrastructure and underfunded social services but is instead augmented with a dizzying collection of systems that ensure that nothing goes wrong. Resources will be apportioned automatically, mechanics and repair people summoned by the system’s own command. We will return to what Lewis Mumford described as a central feature of the Industrial Revolution: “the transfer of order from God to the Machine.” Now, however, the machines will be thinking for themselves, setting society’s order based on the false objectivity of computation.

According to one industry survey, 73 percent of Americans have not heard of the Internet of Things. Another consultancy forecasts $7.1 trillion in annual sales by the end of the decade. Both might be true, yet the reality is that this surveillance-rich environment will continue to be built up around us. Enterprise and government contracts have floated the industry to this point: To encourage us to buy in, sensor-laden devices will be subsidized, just as smartphones have been for years, since companies can make up the cost difference in data collection.

With the Internet of Things, promises of savings and technological empowerment are being implemented as forces of social control. In Chicago, this year’s host city for Cisco’s Internet of Things World Forum, Mayor Rahm Emanuel has used Department of Homeland Security grants to expand Chicago’s surveillance-camera system into the largest in the country, while the city’s police department, drawing on an extensive database of personal information about residents, has created a “heat list” of 400 people to be tracked for potential involvement in violent crime. In Las Vegas, new streetlights can alert surrounding people to disasters; they also have the ability to record video and audio of the surrounding area and track movements. Sometime this year, Raytheon plans to launch two aerostats — tethered surveillance blimps — over Washington, D.C. In typical fashion, this technology, pioneered in the battlefields of Afghanistan and Iraq, is being introduced to address a non-problem: the threat of enemy missiles launched at our capital. When they are not on the lookout for incoming munitions, the aerostats and their military handlers will be able to enjoy video coverage of the entire metropolitan area.

The ideological premise of the Internet of Things is that surveillance and data production equal a kind of preparedness. Any problem might be solved or pre-empted with the proper calculations, so it is prudent to digitize and monitor everything.

This goes especially for ourselves. The IoT promises users an unending capability to parse personal information, making each of us a statistician of the self, taking pleasure and finding reassurance in constant data triage. As with the quantified self movement, the technical ability for devices to collect and transmit data — what makes them “smart” — is treated as an achievement in itself; the accumulation of data is represented as its own reward. “In a decade, every piece of apparel you buy will have some sort of biofeedback sensors built in it,” the co-founder of OMsignal told Nick Bilton, a New York Times technology columnist. Bilton notes that “many challenges must be overcome first, not the least of which is price.” But convincing people they need a shirt that can record their heart rate is apparently not one of these challenges.

Vessyl, a $199 drinking cup Valleywag’s Sam Biddle mockingly (and accurately) calls “a 13-ounce, Bluetooth-enabled, smartphone-syncing, battery-powered supercup,” analyzes the contents of whatever you put in it and tracks your hydration, calories, and the like in an app. There is not much reason to use Vessyl, beyond a fetish of the act of measurement. Few people see such a knowledge deficit about what they are drinking that they feel they should carry an expensive cup with them at all times. But that has not stopped Vessyl from being written up repeatedly in the press. Wired called Vessyl “a fascinating milestone … a peek into some sort of future.”

But what kind of future? And do we want it? The Internet of Things may require more than the usual dose of high-tech consumerist salesmanship, because so many of these devices are patently unnecessary. The improvements they offer to consumers — where they exist — are incremental, not revolutionary, and they always come at some cost to autonomy, privacy, or security. Between stories of baby monitors being hacked, unchecked backdoors, and search engines like Shodan, which allows one to crawl through unsecured, Internet-connected devices, from traffic lights to crematoria, it’s bizarre, if not disingenuous, to treat the ascension of the Internet of Things as foreordained progress.

As if anticipating this gap between what we need and what we might be taught to need, industry executives have taken to the IoT with the kind of grandiosity usually reserved for the Singularity. Their rhetoric is similarly eschatological. “Only one percent of things that could have an IP address do have an IP address today,” said Padmasree Warrior, Cisco’s chief technology and strategy officer, “so we like to say that 99 percent of the world is still asleep.” Maintaining the revivalist tone, she proposed, “It’s up to our imaginations to figure out what will happen when the 99 percent wakes up.”

Warrior’s remarks highlight how consequential marketing, advertising, and the swaggering keynotes of executives will be in creating the IoT’s consumer economy. The world will not just be exposed to new technologies; it will be woken up, given the gift of sight, with every conceivable object connected to the network. In the same way, Nest CEO Tony Fadell, commenting on his company’s acquisition by Google, wrote that his goal has always been to create a “conscious home” — “a home that is more thoughtful, intuitive.”

On a more prosaic level, “smart” has been cast as the logical, prudent alternative to dumb. Sure, we don’t need toothbrushes to monitor our precise brushstrokes and offer real-time reports, as the Bluetooth-enabled, Kickstarter-funded toothbrush described in a recent article in The Guardian can. There is no epidemic of tooth decay that could not be helped by wider access to dental care, better diet and hygiene, and regular flossing. But these solutions are so obvious, so low-tech and quotidian, as to be practically banal. They don’t allow for the advent of an entirely new product class or industry. They don’t shimmer with the dubious promise of better living through data. They don’t allow one to “transform otherwise boring dental hygiene activities into a competitive family game.” The presumption that 90 seconds of hygiene needs competition to become interesting and worth doing is among the more pure distillations of contemporary capitalism. Internet of Things devices, and the software associated with them, are frequently gamified, which is to say that they draw us into performances of productivity that enrich someone else.

In advertising from AT&T and others, the new image of the responsible homeowner is an informationally aware one. His house is always accessible and transparent to him (and to the corporations, backed by law enforcement, providing these services). The smart home, in turn, has its own particular hierarchy, in which the manager of the home’s smart surveillance system exercises dominance over children, spouses, domestic workers, and others who don’t have control of these tools and don’t know when they are being watched. This is being pushed despite the fact that violent crime has been declining in the United States for years, and those who do suffer most from crime — the poor — aren’t offered many options in the Internet of Things marketplace, except to submit to networked CCTV and police data-mining to determine their risk level.

But for gun-averse liberals, ensconced in low-crime neighborhoods, smart-home and digitized home-security platforms allow them to act out their own kind of security theater. Each home becomes a techno-castle, secured by the surveillance net.

The surveillance-laden house may rob children of essential opportunities for privacy and personal development. One AT&T video, for instance, shows a middle-aged father woken up in bed by an alert from his security system. He grabs his tablet computer and, sotto voce, tells his wife that someone’s outside. But it’s not an intruder, he says wryly. The camera cuts to show a teenage girl, on the tail end of a date, talking to a boy outside the home. Will they or won’t they kiss? Suddenly, a garish bloom of light: the father has activated the home’s outdoor lights. The teens realize they are being monitored. Back in the master bedroom, the parents cackle. To be unmonitored is to be free — free to be oneself and to make mistakes. A home ringed with motion-activated lights, sensors, and cameras, all overseen by imperious parents, would allow for little of that.

In the conventional libertarian style, the Internet of Things offloads responsibilities to individuals, claiming to empower them with data, while neglecting to address collective, social issues. And meanwhile, corporations benefit from the increased knowledge of consumers’ habits, proclivities, and needs, even learning information that device owners don’t know themselves.

Tech industry doyen Tim O’Reilly has predicted that “insurance is going to be the native business model for the Internet of Things.” To enact this business model, companies will use networked devices to pull more data on customers and employees and reward behavior accordingly, as some large corporations, like BP, have already done in partnership with health-care companies. As data sources proliferate, opportunities increase for behavioral management as well as on-the-fly price discrimination.

Through the dispersed system of mass monitoring and feedback, behaviors and cultures become standardized, directed at the algorithmic level. A British insurer called Drive Like a Girl uses in-car telemetry to track drivers’ habits. The company says that its data shows that women drive better and are cheaper to insure, so they deserve to pay lower rates. So far, perhaps, so good. Except that the European Union has instituted regulations stating that insurers can’t offer different rates based on gender, so Drive Like a Girl is using tracking systems to get around that rule, reflecting the fear of many IoT critics that vast data collection may help banks, realtors, stores, and other entities dodge the protections put in place by the Fair Credit Reporting Act, HIPAA, and other regulatory measures.

This insurer also exemplifies how algorithmic biases can become regressive social forces. From its name to its site design to how its telematics technology is implemented, Drive Like a Girl is essentializing what “driving like a girl” means — it’s safe, it’s pink, it’s happy, it’s gendered. It is also, according to this actuarial morality, a form of good citizenship. But what if a bank promised to offer loan terms to help someone “borrow like a white person,” premised on the notion that white people were associated with better loan repayments? We would call it discriminatory, question the underlying data and methodologies, and cite histories of oppression and lack of access to banking services. In automated, IoT-driven marketplaces, there is no room to take these complex sensitivities into account.

As the Internet of Things expands, we may witness an uncomfortable feature creep. When the iPhone was introduced, few thought its gyroscopes would be used to track a user’s steps, sleep patterns, or heartbeat. Software upgrades or novel apps can be used to exploit hardware’s hidden capacities, not unlike the way hackers have used vending machines and HVAC systems to gain access to corporate computer networks. To that end, many smart thermostats use “geofencing” or motion sensors to detect when people are at home, which allows the device to adjust the temperature accordingly. A company, particularly a conglomerate like Google with its fingers in many networked pies, could use that information to serve up ads on other screens or nudge users towards desired behaviors. As Jathan Sadowski has pointed out here, the relatively trivial benefit of a fridge alerting you when you’ve run out of a product could be used to encourage you to buy specially advertised items. Will you buy the ice cream for which your freezer is offering a coupon? Or will you consult your health-insurance app and decide that it’s not worth the temporary spike in your premiums?

This combination of interconnectivity and feature creep makes Apple’s decision to introduce platforms for home automation and health-monitoring seem rather cunning. Cupertino is delegating much of the work to third-party device makers and programmers — just as it did with its music and app stores — while retaining control of the infrastructure and the data passing through it. (Transit fees will be assessed accordingly.) The writer and editor Matt Buchanan, lately of The Awl, has pointed out that, in shopping for devices, we are increasingly choosing among competing digital ecosystems in which we want to live. Apple seems to have apprehended this trend, but so have two other large industry groups — the Open Interconnect Consortium and the AllSeen alliance — with each offering its own open standard for connecting many disparate devices. Market competition, then, may be one of the main barriers to fulfilling the prophetic promise of the Internet of Things: to make this ecosystem seamless, intelligent, self-directed, and mostly invisible to those within it. For this vision to come true, you would have to give one company full dominion over the infrastructure of your life.

Whoever prevails in this competition to connect, well, everything, it’s worth remembering that while the smartphone or computer screen serves as an access point, the real work — the constant processing, assessment, and feedback mechanisms allowing insurance rates to be adjusted in real-time — is done in the corporate cloud. That is also where the control lies. To wrest it back, we will need to learn to appreciate the virtues of products that are dumb and disconnected once again.

Link: Free to Choose A or B

There has already been a lot written about the Facebook mood-manipulation study (here are three I found particularly useful; Tarleton Gillespie has a more extensive link collection here), and hopefully the outrage sparked by it will mark a turning point in users’ attitudes toward social-media platforms. People are angry about lots of different aspects of this study, but the main thing seems to be that Facebook distorts what users see for its own ends, as if users can’t be trusted to have their own emotional responses to what their putative friends post. That Facebook seemed to have been caught by surprise by the anger some have expressed — that people were not pleased to discover that their social lives are being treated as a petri dish by Facebook so that it can make its product more profitable — shows how thoroughly companies like Facebook see their users’ emotional reactions as their work product. How you feel using Facebook is, in the view of the company’s engineers, something they made, something that has little to do with your unique emotional sensitivities or perspective. From Facebook’s point of view, you are susceptible to coding, just like its interface. Getting you to be a more profitable user for the company is only a matter of affective optimization, a matter of tweaking your programming to get you to pay more attention, spend more time on site, share more, etc.

But it turns out Facebook’s users don’t see themselves as compliant, passive consumers of Facebook’s emotional servicing; instead they had bought into the rhetoric that Facebook was a tool for communicating with their friends and family and structuring their social lives. When Facebook manipulates what users see — as it has done increasingly since the advent of its Newsfeed — the tool becomes more and more useless for communication and becomes more of a curated entertainment product, engineered to sap your attention and suck out formatted reactions that Facebook can use to better sell audiences to advertisers. It may be that people like this product, the same way people like the local news or Transformers movies. Consumers expect those products to manipulate them emotionally. But that wasn’t part of the tacit contract in agreeing to use Facebook. If Facebook basically aspires to be as emotionally manipulative as The Fault in Our Stars, its product is much harder to sell as a means of personal expression and social connection. Facebook connects you to a zeitgeist it manufactures, not to the particular, uneven, unpredictable emotional landscape made up by your unique combination of friends. Gillespie explains this well:

social media, and Facebook most of all, truly violates a century-old distinction we know very well, between what were two, distinct kinds of information services. On the one hand, we had “trusted interpersonal information conduits” — the telephone companies, the post office. Users gave them information aimed for others and the service was entrusted to deliver that information. We expected them not to curate or even monitor that content, in fact we made it illegal to do otherwise…

On the other hand, we had “media content producers” — radio, film, magazines, newspapers, television, video games — where the entertainment they made for us felt like the commodity we paid for (sometimes with money, sometimes with our attention to ads), and it was designed to be as gripping as possible. We knew that producers made careful selections based on appealing to us as audiences, and deliberately played on our emotions as part of their design. We were not surprised that a sitcom was designed to be funny, even that the network might conduct focus group research to decide which ending was funnier (A/B testing?). But we would be surprised, outraged, to find out that the post office delivered only some of the letters addressed to us, in order to give us the most emotionally engaging mail experience.

Facebook takes our friends’ efforts to communicate with us and turns them into an entertainment product meant to make Facebook money.

Facebook’s excuse for filtering our feed is that users can’t handle the unfiltered flow of all their friends’ updates. Essentially, we took social media and massified it, and then we needed Facebook to rescue us, to restore the order we have always counted on editors, film and TV producers, A&R professionals and the like to provide for us. Our aggregate behavior, from the point of view of a massive network like Facebook’s, suggests we want to consume a distilled average out of our friends’ promiscuous sharing; that’s because from a data-analysis perspective, we have no particularities or specificity — we are just a set of relations, of likely matches and correspondences to some set of the billion other users. The medium massifies our tastes.

Facebook has incentive to make us feel like consumers of its service because that may distract us from the way in which our contributions to the network constitute unwaged labor. Choice is work, though we live in an ideological miasma that represents it as ever and always a form of freedom. In The New Way of the World, Dardot and Laval identify this as the quintessence of neoliberalist subjectivity: “life is exclusively depicted as the result of individual choices,” and the more choices we make, the more control we supposedly have over our lives. But those choices are structured not only by social contexts that exceed individual management but by entities like Facebook that come to be seen as part of the unchangeable infrastructure of contemporary life. “Neoliberal strategy consisted, and still consists, in constantly and systematically guiding the conduct of individuals as if they were always and everywhere engaged in relations of transaction and competition in a market,” Dardot and Laval write. Facebook has fashioned itself into a compelling implementation of that strategy. Its black-box algorithms induce and naturalize competition among users for each other’s attention, and its atomizing interface nullifies the notion of shared experience, collective subjectivity. The mood-manipulation study is a clear demonstration, as Cameron Tonkinwise noted on Twitter, that “There’s my Internet and then yours. There’s no ‘The Internet.’” Everyone using Facebook sees a window on reality customized for them, meant for maximal manipulation.

Not only does Facebook impose interpersonal competition under the rubric of sharing, it also imposes choice as continual A/B testing — which could be seen as the opposite of rational choice but which, from the point of view of capital, is its perfection. Without even intending it, you express a preference that has already been translated into useful market data to benefit a company, which is, of course, the true meaning of “rational”: profitable. You assume the risks involved in the choice without realizing it. Did Facebook’s peppering your feed with too much happiness make you incredibly depressed? Who cares? Facebook got the information it sought from your response within the site.

A/B testing, the method used in the mood-manipulation study, is a matter of slotting consumers into control groups without telling them and varying some key variables to see if it instigates sales or prompts some other profitable behavior. It is a way of harvesting users’ preferences as uncompensated market research. A/B testing enacts an obligation to choose by essentially choosing for you and tracking how you respond to your forced choice. It lays bare the phoniness of the rhetoric of consumer empowerment through customization — in the end companies like Facebook treat choice not as an expression of autonomy but as a product input that can be voluntary or forced, and the meaning of choice is not your pleasure but the company’s profit. If your preferences about Facebook’s interface compromise its profitability, you will be forced to make different choices and reap what “autonomy” you can from those.
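Mechanically, the procedure is not exotic. Here is a minimal sketch, in Python, of how such an experiment might be wired up; the function names (show_feed, measure_engagement) and the experiment label are hypothetical stand-ins of my own, not anything Facebook has published.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically slot a user into a test group without telling them;
    hashing keeps the assignment stable across visits."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def run_experiment(users, show_feed, measure_engagement):
    """Vary one key variable per group, then harvest the responses as market research."""
    results = {"A": [], "B": []}
    for user in users:
        variant = assign_variant(user, "feed_tone_2014")
        # Hypothetical knob: the emotional tone of the curated feed.
        show_feed(user, tone="positive" if variant == "A" else "negative")
        results[variant].append(measure_engagement(user))
    # The "choice" was made for each user; only the aggregate response is kept.
    return {v: sum(xs) / len(xs) for v, xs in results.items() if xs}
```

Nothing in this loop asks whether the forced choice serves the user; the only output is which variant proved more profitable.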

That would seem to run against the neoliberal strategy of using subjects’ consciousness of “free” choice to control them. But as Laval and Dardot point out, “the expansion of evaluative technology as a disciplinary mode rests on the fact that the more individual calculators are supposed to be free to choose, the more they must be monitored and evaluated to obviate their fundamental opportunism and compel them to identify their interests with the organizations employing them.” Hopefully the revelation of the mood-manipulation study will remind everyone that Facebook employs its users in the guise of catering to them.

Link: Google and the Trolley Problem

I expect that in a few years autonomous cars will not only be widely used but mandatory. The vast majority of road accidents are caused by driver error, and when we see how much death and injury can be reduced by driverless cars, we will rapidly decide that humans should not be left in charge.

This gives rise to an interesting philosophical challenge. Somewhere in Mountain View, programmers are grappling with writing the algorithms that will determine the behaviour of these cars. These algorithms will decide what the car will do when the lives of the passengers in the car, pedestrians and other road users are at risk.

In 1942, the science fiction author Isaac Asimov proposed Three Laws of Robotics. These are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If the cars obey the Three Laws, then the algorithm cannot by action or inaction put the interests of the car above the interests of a human. But what if there are choices to be made between the interests of different people?

In 1967, the philosopher Philippa Foot posed what became known as “The Trolley Problem”.  Suppose you are the driver of a runaway tram (or “trolley car”) and you can only steer from one narrow track on to another; five men are working on the track you are on, and there is one man on the other; anyone on the track that the tram enters is bound to be killed. Should you allow the tram to continue on its current track and plough into the five people, or do you deliberately steer the tram onto the other track, so leading to the certain death of the other man?

Being a utilitarian, I find the trolley problem straightforward. It seems obvious to me that the driver should switch tracks, saving five lives at the cost of one. But many people do not share that intuition: for them, the fact that switching tracks requires an action by the driver makes it more reprehensible than allowing five deaths to happen through inaction.

If it were a robot in the driver’s cab, then Asimov’s Three Laws wouldn’t tell the robot what to do. Either way, humans will be harmed, whether by action (one man) or inaction (five men).  So the First Law will inevitably be broken. What should the robot be programmed to do when it can’t obey the First Law?

This is no longer hypothetical: an equivalent situation could easily arise with a driverless car. Suppose a group of five children runs out into the road, and the car calculates that they can be avoided only by mounting the pavement, and killing a single pedestrian walking there.  How should the car be programmed to respond?

There are many variants on the Trolley Problem (analysed by Judith Jarvis Thomson), most of which will have to be reflected in the cars’ algorithms one way or another. For example, suppose a car finds on rounding a corner that it must either drive into an obstacle, leading to the certain death of its single passenger (the car owner), or it must swerve, leading to the death of an unknown pedestrian.  Many human drivers would instinctively plough into the pedestrian to save themselves. Should the car mimic the driver and put the interests of its owner first? Or should it always protect the interests of the stranger? Or should it decide who dies at random?  (Would you buy a car programmed to put the interests of strangers ahead of the passenger, other things being equal?)

One option is to let the market decide: I can buy a utilitarian car, while you might prefer the deontological model.  Is it a matter of religious freedom to let people drive a car whose algorithm reflects their ethical choices?

Perhaps the normal version of the car will be programmed with an algorithm that protects everyone equally and displays advertisements to the passengers; while wealthy people will be able to buy the ‘premium’ version that protects its owner at the expense of other road users.  (This is not very different to choosing to drive an SUV, which protects the people inside the car at the expense of the people outside it.)

A related set of problems arises with the possible advent of autonomous drones to be used in war, in which weapons are not only pilotless but deploy their munitions using algorithms rather than human intervention. I think it possible that autonomous drones will eventually make better decisions than soldiers – they are less likely to act in anger, for example – but the algorithms which they use will also require careful scrutiny.

Asimov later added Law Zero to his Three Laws: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” This deals with one variant on the Trolley Problem (“Is it right to kill someone to save the rest of humanity?”).  But it doesn’t answer the basic Trolley Problem, in which humanity is not at stake.  I suggest a more general Law Zero, which is consistent with Asimov’s version but which provides answers to a wider range of problems: “A robot must by action or inaction do the greatest good to the greatest number of humans, treating all humans, present and future, equally”.  Other versions of Law Zero would produce different results.
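To make the suggested Law Zero concrete, here is a minimal sketch in Python; it is my own illustration, with made-up numbers and names, of what such a rule reduces to in practice: choose whichever candidate manoeuvre minimises expected harm, counting every human equally. A real vehicle would be weighing noisy probability estimates rather than clean integers.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_deaths: float    # estimated fatalities if this manoeuvre is chosen
    expected_injuries: float  # estimated serious injuries, used as a tie-breaker

def law_zero_choice(candidates: list[Action]) -> Action:
    """'Greatest good to the greatest number': pick the manoeuvre minimising
    expected deaths, then injuries, weighting passenger and stranger alike."""
    return min(candidates, key=lambda a: (a.expected_deaths, a.expected_injuries))

# The basic trolley case: stay on course (five workers) or switch tracks (one).
stay = Action("continue on current track", expected_deaths=5, expected_injuries=0)
switch = Action("steer onto the other track", expected_deaths=1, expected_injuries=0)
print(law_zero_choice([stay, switch]).name)  # -> steer onto the other track
```

The deontological or owner-first models discussed above would amount to swapping in a different objective function; the mechanics are trivial, the ethics are not.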

Whatever we decide, we will need to decide soon. Driverless cars are already on our streets.  The Trolley Problem is no longer purely hypothetical, and we can’t leave it to Google to decide. And perhaps getting our head around these questions about the algorithms for driverless cars will help establish some principles that will have wider application in public policy.

Link: The Melancholy of Subculture Society

If you crack open some of the mustier books about the Internet—you know the ones I’m talking about, the ones which invoke Roland Barthes and discuss the sexual transgressing of MUDs—one of the few still relevant criticisms is the concern that the Internet by uniting small groups will divide larger ones.

Surfing alone

You may remember this as the Bowling Alone thesis applied to the Internet; it got some traction in the late 1990s. The basic idea is: electronic entertainment devices grow in sophistication and inexpensiveness as the years pass, until by the 1980s and 1990s, they have spread across the globe and have devoured multiple generations of children; these devices are more pernicious than traditional geeky fare inasmuch as they are often best pursued solo. Spending months mastering Super Mario Bros—all alone—is a bad way to grow up normal.

And then there were none

The 4 or 5 person Dungeons & Dragons party (with a dungeon master) gives way to the classic arcade with its heated duels and one-upmanship; the arcade gives way to the flickering console in the bedroom with one playing Final Fantasy VII alone. The increased graphical realism, the more ergonomic controllers, the introduction of genuinely challenging AI techniques… Trend after trend was rendering a human opponent unnecessary. And gamer after gamer was now playing alone.

Perhaps, the critic says, the rise of the Internet has ameliorated that distressing trend—the trends favored no connectivity at first, but then there was finally enough surplus computing power and bandwidth for massive connectivity to become the order of the day.

It is much more satisfactory and social to play MMORPGs on your PC than single-player RPGs, much more satisfactory to kill human players in Halo matches than alien AIs. The machines finally connect humans to humans, not human to machine. We’re forced to learn some basic social skills, to maintain some connections. We’re no longer retreating into our little cocoons, interacting with no humans.

Welcome to the N.H.K.!

But, the critic continues, things still are not well. We are still alienated from one another. The rise of the connected machines still facilitates withdrawal and isolation. It presents the specter of the hikikomori, the person who ceases to exist in the physical realm as much as possible. It is a Japanese term, of course. They are 5 years further in our future than we are (or perhaps one should say, were). Gibson writes, back in 2001:

The Japanese seem to the rest of us to live several measurable clicks down the time line. The Japanese are the ultimate Early Adopters, and the sort of fiction I write behooves me to pay serious heed to that. If you believe, as I do, that all cultural change is essentially technologically driven, you pay attention to the Japanese. They’ve been doing it for more than a century now, and they really do have a head start on the rest of us, if only in terms of what we used to call future shock (but which is now simply the one constant in all our lives).

Gibson also discusses the Mobile Girl and text messaging; that culture began really showing up in America around 2005—Sidekicks, Twitter, etc. You can do anything with a cellphone: order food, do your job, read & write novels, maintain a lively social life, engage in social status envy (She has a smaller phone, and a larger collection of collectibles on her cellphone strap! OMG!)… Which is just another way of saying You can do anything without seeing people, just by writing digital messages. (And this in a country with one of the most undigitizable writing systems in existence!)

The hikikomori withdraws from all personal contact. The hikikomori does not hang out at the local pub, swilling down the brewskis as everyone cheers on the home team. The hikikomori is not gossiping at the rotary club nor with the Lions or mummers or Veterans or Knights. Hikikomoris do none of that. They aren’t working, they aren’t hanging out with friends.

The Paradoxical solitude and omnipotence of the otaku, the new century’s ultimate enthusiast: the glory and terror inherent of the absolute narrowing of personal bandwidth. —William Gibson, Shiny balls of Mud (TATE 2002)

So what are they doing with their 16 waking hours a day?

Opting out

But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing…the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode. We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way up close and personal profiles of professional athletes strain so hard to find evidence of a rounded human life—outside interests and activities, values beyond the sport. We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small…[Tennis player Michael] Joyce is, in other words, a complete man, though in a grotesquely limited way…Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well. — David Foster Wallace, The String Theory (Esquire, July 1996)

They’re not preoccupied with our culture - they’re participating in their own subculture. It’s the natural progression of the otaku. They are fighting on Azeroth, or fiercely pursuing their dojinshi career, or… There are many subcultures linked and united by the Internet, for good and ill. For every charitable or benevolent subculture (e.g. free software) there is one of mixed benefits (World of Warcraft), and one outright harmful (e.g. fans of eating disorders, child pornography).

The point the critic wants to make is that life is short and a zero-sum game. You lose a third of the day to sleep, another third to making a living, and now you’ve little left. To be really productive, you can’t divide your energies across multiple cultures—you can’t be truly successful in mainstream culture, and at the same time be able to devote enough effort in the field of, say, mechanical models, to be called an Otaking. A straddler takes onto his head the overhead of learning and participating in both, and receives no benefits (he will suffer socially in the esteem of the normals, and will be able to achieve little in his hobby due to lack of time and a desire to not go overboard).

The otaku & hikikomori recognizes this dilemma and he chooses—to reject normal life! He rejects life in the larger culture for his subculture. It’s a simple matter of comparative advantage; it’s easier to be a big fish in a small pond than in a large one.

The bigger screen

Have you ever woken up from a dream that was so much more pleasant than real life that you wish you could fall back to sleep and return to the dream?…For some, World of Warcraft is like a dream they don’t have to wake up from - a world better than the real world because their efforts are actually rewarded –[Half Sigma, Status, masturbation, wasted time, and WoW]

EVE Online is unique in gaming in that we have always played on the same massive server in the same online universe since May 2003 when it first went live. We not only understand the harsh penalties for failure, but also how longevity and persistence is rewarded with success. When you have over 60,000 people on weekends dealing, scheming, and shooting each other it attracts a certain type of gamer. It’s not a quick fix kind of game. We enjoy building things that last, be they virtual spaceships or real life friendships that together translate into massive Empires and enduring legacies. Those of us who play understand that one man really can truly make a difference in our world. –Mark Seleene Heard, Vile Rat eulogy, 2012

As ever more opt out, the larger culture is damaged. The culture begins to fragment back into pieces. The disconnect can be profound; an American anime geek has more in common with a Japanese anime geek (who is of a different ethnicity, a different culture, a different religion, a different language…) than he does with an American involved in the evangelical Christian subculture. There is essentially no common ground - our 2 countrymen probably can’t even agree on objective matters like governance or evolution!

With enough of these gaps, where is American or French culture? Such cultural identities take centuries to coalesce—France did not speak French until the 1900s (as The Discovery of France recounts), and Han China is still digesting and assimilating its many minorities and outlying regions. America, of course, had it relatively easy in starting with a small founder population which could just exterminate the natives.

The national identity fragments under the assault of burgeoning subcultures. At last, the critic beholds the natural endpoint of this process: the nation is some lines on a map, some laws you follow. No one particularly cares about it. The geek thinks, Meh: here, Canada, London, Japan, Singapore—as long as FedEx can reach me and there’s a good Internet connection, what’s the difference? (Nor are the technically-inclined alone in this.)

You can test this yourself. Tell yourself—the country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled. If your mental reply goes something like, Why, what’s so special about the USA? It’s not particularly economically or politically free, it’s not the only civilized English-speaking country, it’s not the wealthiest…, then you are headed down the path of opting out.

This is how the paradox works: the Internet breaks the larger culture by letting members flee to smaller subcultures. And the critics think this is bad. They like the broader culture, and deep down, furries and latex fetishists really bother them. They just plain don’t like those weirdo deviants.

But I can get a higher score!

In the future, everyone will be world-famous for 15 minutes.

Let’s look at another angle.

Monoculture

Irony has only emergency use. Carried over time, it is the voice of the trapped who have come to enjoy their cage.

One can’t opt out of culture. There is no view from nowhere. To a great extent, we are our cultural artifacts—our possessions, our complexes of memes, our habits and objects of disgust are all cultural. You are always part of a culture.

Suppose there were only 1 worldwide culture, with no subcultures. The overriding obsession of this culture will be… let’s make it money. People are absolutely obsessed with money—how it is made, acquired, degraded, etc. More importantly, status is defined just by how much you have earned in your life; in practice, tie-breakers include how fast you made it, what circumstances you made it in (everyone admires a person who became a billionaire in a depression more than a good-times billionaire, in the same way we admire the novelist in the freezing garret more than the comfortable academic), and so on.

This isn’t too absurd a scenario: subjects feed on themselves and develop details and complexity as effort is invested in them. Money could well absorb the collective efforts of 7 billion people—already many people act just this way.

But what effect does this have on people? I can tell you: the average person is going to be miserable. If everyone genuinely buys into this culture, then they have to be. Their talents at piano playing, or cooking, or programming, or any form of artistry or scholarly pursuit are denigrated and count for naught. The world has become too big - it did not use to be so big, nor were people so powerless over what is going on:

"Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole.

Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its sense does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes….”

You can’t make a mark on it unless there are almost as many ways to make marks as there are persons.

To put it another way: women suffer enough from comparing themselves to media images. If you want a vision of this future, imagine everyone being an anorexic teenager who hates her body—forever.

We all value social esteem. We need to know somebody thinks well of us. We’re tribal monkeys; ostracism means death.

Jaron Lanier: I’d like to hypothesize one civilizing force, which is the perception of multiple overlapping hierarchies of status. I’ve observed this to be helpful in work dealing with rehabilitating gang members in Oakland. When there are multiple overlapping hierarchies of status there is more of a chance of people not fighting their superior within the status chain. And the more severe the imposition of the single hierarchy in people’s lives, the more likely they are to engage in conflict with one another. Part of America’s success is the confusion factor of understanding how to assess somebody’s status.

Steven Pinker: That’s a profound observation. There are studies showing that violence is more common when people are confined to one pecking order, and all of their social worth depends on where they are in that hierarchy, whereas if they belong to multiple overlapping groups, they can always seek affirmations of worth elsewhere. For example, if I do something stupid when I’m driving, and someone gives me the finger and calls me an asshole, it’s not the end of the world: I think to myself, I’m a tenured professor at Harvard. On the other hand, if status among men in the street was my only source of worth in life, I might have road rage and pull out a gun. Modernity comprises a lot of things, and it’s hard to tease them apart. But I suspect that when you’re not confined to a village or a clan, and you can seek your fortunes in a wide world, that is a pacifying force for exactly that reason.

Think of the people you know. How many of them can compete on purely financial grounds? How many can compare to the chimps at the top of the financial heap without feeling like an utter failure, a miserable loser? Not many. I can’t think of anyone I know who wouldn’t be at least a little unhappy. Some of them are pretty well off, but it’s awfully hard to compare with billionaires in their department. There’s no way to prove that this version of subcultures is the right one (perhaps fragmenting the culture fragments the possible status), but when I look at simple models, this version seems plausible to me and to explain some deep trends like monogamy.

Subcultures set you free

If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself. Enjoy your achievements as well as your plans. Keep interested in your own career, however humble; it is a real possession in the changing fortunes of time.

Having a society in which an artist can mingle as a social equal with the billionaire, the Nobel scientist and the philanthropist is fundamental to our mental health! If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand.

It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and a cat may look at a king.

Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.

(Source: sunrec)

Technological idolatry is the most ingenuous and primitive of the three [higher forms of idolatry]; for its devotees (…) believe that their redemption and liberation depend upon material objects - in this case gadgets. Technological idolatry is the religion whose doctrines are promulgated, explicitly or by implication, in the advertisement pages of our newspapers and magazines - the source, we may add parenthetically, from which millions of men, women and children in the capitalistic countries derive their working philosophy of life. (…) So whole-hearted is the modern faith in technological idols that (despite all the lessons of mechanized warfare) it is impossible to discover in the popular thinking of our time any trace of the ancient and profoundly realistic doctrine of hubris and inevitable nemesis. There is a very general belief that, where gadgets are concerned, we can get something for nothing - can enjoy all the advantages of an elaborate, top-heavy and constantly advancing technology without having to pay for them by any compensating disadvantages.
— Aldous Huxley, The Perennial Philosophy

Link: Technology and Consumership

Today’s media, combined with the latest portable devices, have pushed serious public discourse into the background and hauled triviality to the fore, according to media theorist Arthur W. Hunt III. And the Jeffersonian notion of citizenship has given way to modern consumership.

Almantas Samalavicius: In your recently published book Surviving Technopolis, you discuss a number of important and overlapping issues that threaten the future of societies. One of the central themes you explore is the rise, dominance and consequences of visual imagery in public discourse, which you say undermines a more literate culture of the past. This tendency has been outlined and questioned by a large and growing number of social thinkers (Marshall McLuhan, Walter Ong, Jacques Ellul, Ivan Illich, Neil Postman and others). What do you see as most culturally threatening in this shift to visual imagery?

Arthur W. Hunt III: The shift is technological and moral. The two are related, as Ellul has pointed out. Computer-based digital images stem from an evolution of other technologies beginning with telegraphy and photography, both appearing in the middle of the nineteenth century. Telegraphy trivialized information by allowing it to come to us from anywhere and in greater volumes. Photography de-contextualized information by giving us an abundance of pictures disassociated from the objects from which they came. Cinema magnified Aristotle’s notion of spectacle, which he claimed to be the least artistic element in Poetics. Spectacle in modern film tends to diminish all other elements of drama (plot, character, dialogue and so on) in favour of the exploding Capitol building. Radio put the voice of both the President and the Lone Ranger into our living rooms. Television was the natural and powerful usurper of radio and quickly became the nucleus of the home, a station occupied by the hearth for thousands of years. Then the television split in two, three or four ways so that every house member had a set in his or her bedroom. What followed was the personal computer at both home and at work. Today we have portable computers in which we watch shows, play games, email each other and gaze at ourselves like we used to look at Hollywood stars. To a large extent, these technologies are simply extensions of our technological society. They act as Sirens of distraction. They push serious public discourse into the background and pull triviality to the foreground. They move us away from the Jeffersonian notion of citizenship, replacing it with modern capitalism’s ethic of materialistic desire or “consumership”. The great danger of all this, of course, is that we neglect the polis and, instead, waste our time with bread and circuses. Accompanying this neglect is the creation of people who spend years in school yet remain illiterate, at least by the standards we used to hold out for a literate person. The trivialization spreads out into other institutions, as Postman has argued, to schools, churches and politics. This may be an American phenomenon, but many countries look to America’s institutions for guidance.

AS: Philosopher and historian Ivan Illich – one of the most radical critics of modernity and its mythology – has emphasized the conceptual difference between tools, on one hand, and technology on the other, implying that the dominance and overuse of technology is socially and culturally debilitating. Economist E.F. Schumacher urged us to rediscover the beauty of smallness and the use of more humane, “intermediate technologies”. However, a chorus of voices seems to sink in the ocean of popular technological optimism and a stubborn self-generating belief in the power of progress. Your critique contains no call to go back to the Middle Ages. Nor do you suggest that we give anything away to technological advances. Rather, you offer a sound and balanced argument about the misuses of technology and the mindscape that sacrifices tradition and human relationships on the altar of progress. Do you see any possibility of developing a more balanced approach to the role of technology in our culture? Obviously, many are aware, even if cynically, that technological progress has its downsides, but what of its upsides?

AWH: Short of a nuclear holocaust, we will not be going back to the Middle Ages any time soon. Electricity and automobiles are here to stay. The idea is not to be anti-technology. Neil Postman once said to be anti-technology is like being anti-food. Technologies are extensions of our bodies, and therefore scale, ecological impact and human flourishing become the yardstick for technological wisdom. The conventional wisdom of modern progress favours bigger, faster, newer and more. Large corporations see their purpose on earth as maximizing profits. Their goal is to get us addicted to their addictions. We can no longer afford this kind of wisdom, which is not wisdom at all, but foolishness. We need to bolster a conversation about the human benefits of smaller, slower, older and less. Europeans often understand this better than Americans; that is, they are more conscious of preserving living spaces that are functional, aesthetically pleasing and that foster human interaction. E.F. Schumacher gave us some useful phraseology to promote an economy of human scale: “small is beautiful,” “technologies with a human face” and “homecomers.” He pointed out that “labour-saving machinery” is a paradoxical term, not only because it makes us unemployed, but also because it diminishes the value of work. Our goal should be to move toward a “third-way” economic model, one of self-sufficient regions, local economies of scale, thriving community life, cooperatives, family owned farms and shops, economic integration between the countryside and the nearby city, and a general revival of craftsmanship. Green technologies – solar and wind power, for example – can help us achieve this third way, which is actually a kind of micro-capitalism.

AS: Technologies developed by humans (e.g. television) continue to shape and sustain a culture of consumerism, which has now become a global phenomenon. As you insightfully observe in one of your essays, McLuhan, who was often misinterpreted and misunderstood as a social theorist hailed by the television media he explored in great depth, was fully aware of its ill effects on the human personality and he therefore limited his children’s TV viewing. Jerry Mander has argued for the elimination of television altogether; nevertheless, this medium is alive and kicking and continues to promote an ideology of consumption and, what is perhaps most alarming, successfully conditions children to become voracious consumers in a society where the roles of parents become more and more institutionally limited. Do you have any hopes for this situation? Can one expect that people will develop a more critical attitude toward these instruments, which shape them as consumers? Does social criticism of these trends play any role in an environment where the media and the virtual worlds of the entertainment industry have become so powerful?

AWH: Modern habits of consumption have created what Benjamin Barber calls an “ethos of infantilization”, where children are psychologically manipulated into early adulthood and adults are conditioned to remain in a perpetual state of adolescence. Postman suggested essentially the same thing when he wrote The Disappearance of Childhood. There have been many books written that address the problems of electronic media in stunting a child’s mental, physical and spiritual development. One of the better recent ones is Richard Louv’s Last Child in the Woods. Another one is Anthony Esolen’s Ten Ways to Destroy the Imagination of Your Child. We have plenty of books, but we don’t have enough people reading them or putting them into practice. Raising a child today is a daunting business, and maybe this is why more people are refusing to do it. No wonder Joel Bakan, a law professor at the University of British Columbia, wrote a New York Times op-ed complaining, “There is reason to believe that childhood itself is now in crisis.” The other day I was listening to the American television program 60 Minutes. The reporter was interviewing the Australian actress Cate Blanchett. I almost fell out of my chair when she starkly told the reporter, “We don’t outsource our children.” What she meant was, she does not let someone else raise her children. I think she was on to something. In most families today, both parents work outside the home. This is a fairly recent development if you consider the entire span of human history. Industrialism brought an end to the family as an economic unit. First, the father went off to work in the factory. Then, the mother entered the workforce during the last century. Well, the children could not stay home alone, so they were outsourced to various surrogate institutions. What was once provided by the home economy (oikos) – education, health care, child rearing and care of the elderly – came to be provided by the state. The rest of our needs – food, clothing, shelter and entertainment – came to be provided by the corporations. A third-way economic ordering would seek to revive the old notion of oikos so that the home can once again be a legitimate economic, educational and care-providing unit – not just a place to watch TV and sleep. In other words, the home would once again become a centre for production, not just consumption. If this ever happened, one or both parents would be at home and little Johnny and sister Jane would work and play alongside their parents.

AS: I was intrigued by your insight into forms of totalitarianism depicted by George Orwell and Aldous Huxley. Though most authors who discussed totalitarianism during the last half of the century were overtaken by the Orwellian vision and praised it as most enlightening, the alternative Huxleyan vision of a self-inflicted, joyful and entertaining totalitarian society was far less scrutinized. Do you think we are entering into a culture where “totalitarianism with a happy face”, as you call it, prevails? If so, what consequences do you foresee?

AWH: It is interesting to note that Orwell thought Huxley’s Brave New World was implausible because he maintained that hedonistic societies do not last long, and that they are too boring. However, both authors were addressing what many other intellectuals were debating during the 1930s: what would be the social implications of Darwin and Freud? What ideology would eclipse Christianity? Would the new social sciences be embraced with as much exuberance as the hard sciences? What would happen if managerial science were infused into all aspects of life? What should we make of wartime propaganda? What would be the long-term effects of modern advertising? What would happen to the traditional family? How could class divisions be resolved? How would new technologies shape the future?

I happen to believe there are actually more similarities between Orwell’s 1984 and Huxley’s Brave New World than there are differences. Both novels have as their backstory the dilemma of living with weapons of mass destruction. The novel 1984 imagines what would happen if Hitler succeeded. In Brave New World, the world is at a crossroads. What is it to be, the annihilation of the human race or world peace through sociological control? In the end, the world chooses a highly efficient authoritarian state, which keeps the masses pacified by maintaining a culture of consumption and pleasure. In both novels, the past is wiped away from public memory. In Orwell’s novel, whoever “controls the past controls the future.” In Huxley’s novel, the past has been declared barbaric. All books published before A.F. 150 (that is, 150 years after 1908 CE, the year the first Model T rolled off the assembly line) are suppressed. Mustapha Mond, the Resident Controller in Brave New World, declares the wisdom of Ford: “History is bunk.” In both novels, the traditional family has been radically altered. Orwell draws from the Hitler Youth and the Soviet Young Pioneers to give us a society where the child’s loyalty to the state far outweighs any loyalty to parents. Huxley gives us a novel where the biological family does not even exist. Any familial affection is looked down upon. Everybody belongs to everybody, sexually and otherwise. Both novels give us worlds where rational thought is suppressed so that “war is peace”, “freedom is slavery” and “ignorance is strength” (1984). In Brave New World, when Lenina is challenged by Marx to think for herself, all she can say is “I don’t understand.” The heroes in both novels are malcontents who want to escape this irrationality but end up excluded from society as misfits. Both novels perceive humans as religious beings; the state recognizes this truth but channels these inclinations toward patriotic devotion. In 1984, Big Brother is worshipped. In Brave New World, the Christian cross has been cut off at the top to form the letter “T” for Technology. When engaged in the Orgy-Porgy, everyone in the room chants, “Ford, Ford, Ford.” In both novels an elite ruling class controls the populace by means of sophisticated technologies. Both novels show us surveillance states where the people are constantly monitored. Sound familiar? Certainly, as Postman tells us in his foreword to Amusing Ourselves to Death, Huxley’s vision eerily captures our culture of consumption. But how long would it take for a society to move from a happy-faced totalitarianism to one that wears a mask of tragedy?

AS: Your comments on the necessity of the third way in our societies, subjected to and affected by economic globalization, seem to resonate with the ideas of many social thinkers I interviewed for this series. Many outstanding social critics and thinkers seem to agree that the notions of communism and capitalism have become stale and meaningless; further development of these paradigms leads us nowhere. One of your essays focuses on the old concept of “shire” and household economics. Do you believe in what Mumford called “the useful past”? And do you expect the growing movement that might be referred to as “new economics” to enter the mainstream of our economic thinking, eventually leading to changes in our social habits?

AWH: If the third way economic model ever took hold, I suppose it could happen in several ways. We will start with the most desirable way, and then move to less desirable. The most peaceful way for this to happen is for people to come to some kind of realization that the global economy is not benefiting them and start desiring something else. People will see that their personal wages have been stagnant for too long, that they are working too hard with nothing to show for it, that something has to be done about the black hole of debt, and that they feel like pawns in an incomprehensible game of chess. Politicians will hear their cries and institute policies that would allow for local economies, communities and families to flourish. This scenario is less likely to happen, because the multinationals that help fund the campaigns of politicians will not allow it. I am primarily thinking of the American reality in my claim here. Unless corporations have a change of mind, something akin to a religious conversion, we will not see them open their hearts and give away their power.

A more likely scenario is that a grassroots movement led by creative innovators begins to experiment with new forms of community that serve to repair the moral and aesthetic imagination distorted by modern society. Philosopher Alasdair MacIntyre calls this the “Benedict Option” in his book After Virtue. Morris Berman’s The Twilight of American Culture essentially calls for the same solution. Inspired by the monasteries that preserved western culture in Europe during the Dark Ages, these communities would serve as models for others who are dissatisfied with the broken dreams associated with modern life. These would not be utopian communities, but humble efforts of trial and error, and hopefully diverse according to the outlook of those who live in them. The last scenario would be to have some great crisis occur – political, economic, or natural in origin – that would thrust upon us the necessity of reordering our institutions. My father, who is in his nineties, often reminisces to me about the Great Depression. Although it was a miserable time, he speaks of it as the happiest time in his life. His best stories are about neighbours who loved and cared for each other, garden plots and favourite fishing holes. For any third way to work, a memory of the past will become very useful even if it sounds like literature. From a practical point of view, however, the kinds of knowledge that we will have to remember will include how to build a solid house, how to plant a vegetable garden, how to butcher a hog and how to craft a piece of furniture. In rural Tennessee where I live, there are people still around who know how to do these things, but they are a dying breed.

AS: The long (almost half-century) period of the Cold War has resulted in many social effects. The horrors of Communist regimes and the futility of state-planned economics, as well as the treason of western intellectuals who remained blind to the practice of Communist powers and eschewed ideas of idealized Communism, have aided the ideology of capitalism and consumerism. Capitalism came to be associated with ideas of freedom, free enterprise, freedom to choose and so on. How is this legacy burdening us in the current climate of economic globalization? Do you think recent crises and new social movements have the potential to shape a more critical view (and revision) of capitalism and especially its most ugly neo-liberal shape?

AWH: Here in America liberals want to hold on to their utopian visions of progress amidst the growing evidence that global capitalism is not delivering on its promises. Conservatives are very reluctant to criticize the downsides of capitalism, yet they are not really that different from liberals in their own visions of progress. It was amusing to hear the American politician Sarah Palin describe Pope Francis’ recent declarations against the “globalization of indifference” as being “a little liberal.” The Pope is liberal? While Democrats look to big government to save them, Republicans look to big business. Don’t they realize that with modern capitalism, big government and big business are joined at the hip? The British historian Hilaire Belloc recognized this over a century ago, when he wrote about the “servile state,” a condition where an unfree majority of non-owners work for the pleasure of a free minority of owners. But getting to your question, I do think more people are beginning to wake up to the problems associated with modern consumerist capitalism. A good example of this is a recent critique of capitalism written by Daniel M. Bell, Jr. entitled The Economy of Desire: Christianity and Capitalism in a Postmodern World. Here is a religious conservative who is saying the great tempter of our age is none other than Walmart. The absurdist philosopher and Nobel Prize winner Albert Camus once said the real passion of the twentieth century was not freedom, but servitude. Jacques Ellul, Camus’s contemporary, would have agreed with that assessment. Both believed that the United States and the Soviet Union, despite their Cold War differences, had one thing in common – the two powers had surrendered to the sovereignty of technology. Camus’s absurdism took a hard turn toward nihilism, while Ellul turned out to be a kind of cultural Jeremiah. It is interesting to me that when I talk to some people about third way ideas, which actually is an old way of thinking about economy, they tell me it can’t be done, that we are now beyond all that, and that our economic trajectory is unstoppable or inevitable. This retort, I think, reveals how little freedom our system possesses. So, I can’t have a family farm? My small business can’t compete with the big guys? My wife has to work outside the home and I have to outsource the raising of my children? Who would have thought capitalism would lack this much freedom?

AS: And finally, are you an optimist? Jacques Ellul seems to have been very pessimistic about our escaping from the iron cage of technological society. Do you think we can still break free?

AWH: I am both optimistic and pessimistic. In America, our rural areas are becoming increasingly depopulated. I see this as an opportunity for resettling the land – those large swaths of fields and forests that encompass about three quarters of our landmass. That is a very nice drawing board if we can figure out how to get back to it. I am also optimistic about the fact that more people are waking up to our troubling times. Other American writers that I would classify as third way proponents include Wendell Berry, Kirkpatrick Sale, Rod Dreher, Mark T. Mitchell, Bill Kauffman, Joseph Pearce and Allan Carlson. There is also a current within the American and British literary tradition, which has served as a critique of modernity. G.K. Chesterton, J.R.R. Tolkien, Dorothy Day and Allen Tate represent this sensibility, which is really a Catholic sensibility, although one does not have to be Catholic to have it. I am amazed at the popularity of novels about Amish people among American evangelical women. Even my wife reads them, and we are Presbyterians! In this country, the local food movement, the homeschool movement and the simplicity movement all seem to be pointing toward a kind of breaking away. You do not have to be Amish to break away from the cage of technological society; you only have to be deliberate and courageous. If we ever break out of the cage in the West, there will be two types of people who will lead such a movement. The first are religious people, both Catholic and Protestant, who will want to create a counter-environment for themselves and their children. The second are the old-school humanists, people who have a sense of history, an appreciation of the cultural achievements of the past, and the ability to see what is coming down the road. If Christians and humanists do nothing, and let modernity roll over them, I am afraid we face what C.S. Lewis called “the abolition of man”. Lewis believed our greatest danger was to have a technological elite – what he called The Conditioners – exert power over the vast majority so that our humanity is squeezed out of us. Of course all of this would be done in the name of progress, and most of us would willingly comply. The Conditioners are not acting on behalf of the public good or any other such ideal, rather what they want are guns, gold, and girls – power, profits and pleasure. The tragedy of all this, as Lewis pointed out, is that if they destroy us, they will destroy themselves, and in the end Nature will have the last laugh.

Link: Death Stares

By Facebook’s 10th anniversary in February 2014, the site claimed well over a billion active users. Embedded among those active accounts, however, are the profiles of the dead: nearly anyone with a Facebook account catches glimpses of digital ghosts, as dead friends’ accounts flicker past in the News Feed. As users of social media age, it is inevitable that interacting with the dead will become part of our everyday mediated encounters. Some estimates claim that 30 million Facebook profiles belong to dead users, at times making it hard to distinguish between the living and the dead online. While some profiles have been “memorialized,” meaning that they are essentially frozen in time and only searchable to Facebook friends, other accounts continue on as before.

In an infamous Canadian case, a young woman’s obituary photograph later appeared in a dating website’s advertising on Facebook. Her parents were rightly horrified by this breach of privacy, particularly because her suicide was prompted by cyberbullying following a gang rape. But digital images, once we put them out into the world on social networking platforms (or just on iPhones, as recent findings about the NSA make clear), are open to circulation, reproduction, and alteration. Digital images’ meanings can change just as easily as Snapchat photographs appear and fade. This seems less objectionable when the images being shared are of yesterday’s craft cocktail, but having images of funerals and corpses escape our control seems unpalatable.

While images of death and destruction routinely bombard us on 24-hour cable news networks, images of death may make us uncomfortable when they emerge from the private sphere, or are generated for semi-public viewing on social networking websites. As I check my Twitter feed while writing this essay, a gruesome image of a 3-year-old Palestinian girl murdered by Israeli troops has well over a thousand retweets, indicating that squeamishness about death does not extend to international news events. By contrast, when a mother of four posted photographs of her body post cancer treatments, mastectomy scars fully visible, she purportedly lost over one hundred of her Facebook friends who were put off by this display. To place carefully chosen images and text on a Facebook memorial page is one thing, but to post photographs of a deceased friend in her coffin or on her deathbed is quite another. For social media users accustomed to seeing stylized profiles, images of decay cut through the illusion of curation.

In a 2009 letter to the British Medical Journal, a doctor commented on a man using a mobile phone to photograph a newly dead family member, pointing out with apparent distaste that Victorian postmortem portraits “were not candid shots of an unprepared still warm body.” He wonders, “Is the comparatively covert and instant nature of the mobile phone camera allowing people to respond to stress in a way that comforts them, but society may deem unacceptable and morbid?” While the horrified doctor saw a major discrepancy between Victorian postmortem photographs and the one his patient’s family member took, Victorian images were not always pristine. Signs of decay, illness, or struggle are visible in many of the photographs. Sickness or the act of dying, too, was depicted in these photos, not unlike the practices of deathbed tweeting and illness blogging. Even famous writers and artists were photographed on their deathbeds.

Photography has always been connected to death, both in theory and practice. For Roland Barthes, the photograph is That-has-been. To take a photo of oneself, to pose and press a button, is to declare one’s thereness while simultaneously hinting at one’s eventual death. The photograph is always “literally an emanation of the referent” and a process of mortification, of turning a subject into an object — a dead thing. Susan Sontag claimed that all photographs are memento mori, while Eduardo Cadava said that all photographs are farewells.

The perceived creepiness of postmortem photography has to do with the uncanniness of ambiguity: Is the photographed subject alive or dead? Painted eyes and artificially rosy cheeks, lifelike positions, and other additions made postmortem subjects seem more asleep than dead. Because of its ability to materialize and capture, photography both mortifies and reanimates its subjects. Not just photography, but other container technologies like phonographs and inscription tools can induce the same effects. Digital technology is another incarnation of these processes, as social networking profiles, email accounts, and blogs become new means of concretizing and preserving affective bonds. Online profiles and digital photographs share with postmortem photographs this uncanny quality of blurring the boundaries between life and death, animate and inanimate, or permanence and ephemerality.

Sharing postmortem photos or mourning selfies on social media platforms may seem creepy, but death photos were not always politely confined to such depersonalized sources as mass media. Postmortem and mourning photography were once accepted or even expected forms of bereavement, not callously dismissed as TMI. Victorians circulated images of dead loved ones on cabinet cards or cartes de visite, even if they could not reach as wide a public audience as those who now post on Instagram and Twitter. Photography historian Geoffrey Batchen notes that, “displayed in parlors or living rooms or as part of everyday attire, these objects occupied a liminal space between public and private. They were, in other words, meant to do their work over and over again, and to be seen by both intimates and strangers.”

Victorian postmortem photography captured dead bodies in a variety of positions, including sleeping, sitting in a chair, lying in a coffin, or even standing with loved ones. Thousands of postmortem and mourning images from the 19th and early 20th centuries persist in archives and private collections, some of them bearing a striking resemblance to present-day images. The Thanatos Archive in Woodinville, Washington, contains thousands of mourning and postmortem images from the 19th century. In one Civil War-era mourning photograph, a beautiful young woman in white looks at the camera, not dissimilar to the images of the coiffed young women on Selfies at Funerals. In another image, a young woman in black holds a handkerchief to her face, an almost exaggerated gesture of mourning that the comically excessive pouting found in many funeral selfies recalls. In an earlier daguerreotype, a young woman in black holds two portraits of presumably deceased men.

Batchen describes Victorian mourners as people who “wanted to be remembered as remembering.” Many posed while holding photographs of dead loved ones or standing next to their coffins. Photographs from the 19th century feature women dressed in ornate mourning clothes, staring solemnly at photographs of dead loved ones. The photograph and braided ornaments made from hair of the deceased acted as metonymic devices, connecting the mourner in a physical way to the absent loved one, while ornate mourning wear, ritual, and the addition of paint or collage elements to mourning photographs left material traces of loss and remembrance.

Because photographs were time-consuming and expensive to produce in the Victorian era, middle-class families reserved portraits for special events. With the high rate of childhood mortality, families often had only one chance to photograph their children: as memento mori. Childhood mortality rates in the United States, while still higher than many other industrialized nations, are now significantly lower, meaning that images of dead children are startling. For those who do lose children today, however, the service Now I Lay Me Down to Sleep produces postmortem and deathbed photographs of terminally ill children.

Memorial photography is no mere morbid remnant of a Victorian past. Through his ethnographic fieldwork in rural Pennsylvania, anthropologist Jay Ruby uncovered a surprising amount of postmortem photography practices in the contemporary U.S. Because of the stigma associated with postmortem photography, however, most of his informants expressed their desire to keep such photographs private or even secret. Even if these practices continue, they have moved underground. Unlike the arduous photographic process of the 19th century, which could require living subjects to sit disciplined by metal rods to keep them from blurring in the finished image, smartphones and digital photography allow images to be taken quickly or even surreptitiously. Rather than calling on a professional photographer’s cumbersome equipment, grieving family members can use their own devices to secure the shadows of dead loved ones. While wearing jewelry made of human hair is less acceptable now (though people do make their loved ones into cremation diamonds), we may instead use digital avenues to leave material traces of mourning.

Why did these practices disappear from public view? In the 19th century, mourning and death were part of everyday life but by the middle of the 20th century, outward signs of grief were considered pathological and most middle-class Americans shied away from earlier practices, as numerous funeral industry experts and theorists have argued. Once families washed and prepared their loved ones’ bodies for burial; now care of the dead has been outsourced to corporatized funeral homes.

This is partly a result of attempts to deal with the catastrophic losses of the First and Second World Wars, when proper bereavement included separating oneself from the dead. Influenced by Freudian psychoanalysis’s categorization of grief as pathological, psychologists from the 1920s through the 1950s associated prolonged grief with mental instability, advising mourners to “get over” loss. Also, with the advent of antibiotics and vaccines for once common childhood killers like polio, the visibility of death in everyday life lessened. The changing economy and beginnings of post-Fordism contributed to these changes as well, as care work and other forms of affective labor moved from the domicile to commercial enterprises. Jessica Mitford’s influential 1963 book, The American Way of Death, traces the movement of death care from homes to local funeral parlors to national franchises, showing how funeral directors take advantage of grieving families by selling exorbitant coffins and other death accoutrements. Secularization is also a contributing factor, as elaborate death rituals faded from public life. While death and grief reentered the public discourse in the 1960s and 1970s, the medicalization of death and growth of nursing homes and hospice centers meant that many individuals only saw dead people as prepared and embalmed corpses at wakes and open casket funerals.

Despite this, reports of a general “death taboo” have been greatly exaggerated. Memorial traces are actually everywhere, prompting American Studies scholar Erika Doss to dub this the age of “memorial mania.” Various national traumas have led to numerous memorials, both physical and online, including tactile examples like the AIDS memorial quilt, large physical structures like the 9/11 memorial, long-standing online entities like sites remembering Columbine, and more recent localized memorials dedicated to the dead on social networking websites.

But these types of memorials did not immediately normalize washing, burying, or photographing the body of a loved one. There’s a disconnect between the shiny and seemingly disembodied memorials on social media platforms and the presence of the corpse, particularly one that has not been embalmed or prepared.

Some recent movements in the mortuary world call for acknowledgement of the body’s decay rather than relying on disembodied forms of memorialization and remembrance. Rather than outsourcing embalmment to a funeral home, proponents of green funerals from such organizations as the Order of the Good Death and the Death Salon call for direct engagement with the dead body, learning to care for and  even bury dead loved ones at home. The Order of the Good Death advises individuals to embrace death: “The Order is about making death a part of your life. That means committing to staring down your death fears — whether it be your own death, the death of those you love, the pain of dying, the afterlife (or lack thereof), grief, corpses, bodily decomposition, or all of the above. Accepting that death itself is natural, but the death anxiety and terror of modern culture are not.”

The practices having to do with “digital media” and death that some find unsettling — including placing QR codes on headstones, using social media websites as mourning platforms, snapping photos of dead relatives on smartphones, funeral selfies, and illness blogging or deathbed tweeting — may be seen as attempts to do just that, materializing death and mourning much like Victorian postmortem photography or mourning hair jewelry. Much has been made of the loss of indexicality with digital images, which replace this physical process of emanation with flattened information, but this development doesn’t obviate the relationship between photography and death. For those experiencing loss, the ability to materialize their mourning — even in digital forms — is comforting rather than macabre.

Link: Hell on Earth

At the University of Oxford, a team of scholars led by the philosopher Rebecca Roache has begun thinking about the ways futuristic technologies might transform punishment. In January, I spoke with Roache and her colleagues Anders Sandberg and Hannah Maslen about emotional enhancement, ‘supercrimes’, and the ethics of eternal damnation. What follows is a condensed and edited transcript of our conversation.

Suppose we develop the ability to radically expand the human lifespan, so that people are regularly living for more than 500 years. Would that allow judges to fit punishments to crimes more precisely?

Roache: When I began researching this topic, I was thinking a lot about Daniel Pelka, a four-year-old boy who was starved and beaten to death [in 2012] by his mother and stepfather here in the UK. I had wondered whether the best way to achieve justice in cases like that was to prolong death as long as possible. Some crimes are so bad they require a really long period of punishment, and a lot of people seem to get out of that punishment by dying. And so I thought, why not make prison sentences for particularly odious criminals worse by extending their lives?

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

The life-extension scenario may sound futuristic, but if you look closely you can already see it in action, as people begin to live longer lives than before. If you look at the enormous prison population in the US, you find an astronomical number of elderly prisoners, including quite a few with pacemakers. When I went digging around in medical journals, I found all these interesting papers about the treatment of pacemaker patients in prison.

Suppose prisons become more humane in the future, so that they resemble Norwegian prisons instead of those you see in America or North Korea. Is it possible that correctional facilities could become truly correctional in the age of long lifespans, by taking a more sustained approach to rehabilitation?

Roache: If people could live for centuries or millennia, you would obviously have more time to reform them, but you would also run into a tricky philosophical issue having to do with personal identity. A lot of philosophers who have written about personal identity wonder whether identity can be sustained over an extremely long lifespan. Even if your body makes it to 1,000 years, the thinking goes, that body is actually inhabited by a succession of persons over time rather than a single continuous person. And so, if you put someone in prison for a crime they committed at 40, they might, strictly speaking, be an entirely different person at 940. And that means you are effectively punishing one person for a crime committed by someone else. Most of us would think that unjust.

Let’s say that life expansion therapies become a normal part of the human condition, so that it’s not just elites who have access to them, it’s everyone. At what point would it become unethical to withhold these therapies from prisoners?

Roache: In that situation it would probably be inappropriate to view them as an enhancement, or something extra. If these therapies were truly universal, it’s more likely that people would come to think of them as life-saving technologies. And if you withheld them from prisoners in that scenario, you would effectively be denying them medical treatment, and today we consider that inhumane. My personal suspicion is that once life extension becomes more or less universal, people will begin to see it as a positive right, like health care in most industrialised nations today. Indeed, it’s interesting to note that in the US, prisoners sometimes receive better health care than uninsured people. You have to wonder about the incentives a system like that creates.

Where is that threshold of universality, where access to something becomes a positive right? Do we have an empirical example of it?

Roache: One interesting case might be internet access. In Finland, for instance, access to communication technology is considered a human right and handwritten letters are not sufficient to satisfy it. Finnish prisons are required to give inmates access to computers, although their internet activity is closely monitored. This is an interesting development because, for years, limiting access to computers was a common condition of probation in hacking cases – and that meant all kinds of computers, including ATMs [cash points]. In the 1980s, that lifestyle might have been possible, and you might still have pulled it off in the ’90s, though it would have been very difficult. But today computers are ubiquitous, and a normal life seems impossible without them; you can’t even access the subway without interacting with a computer of some sort.

In the late 1990s, an American hacker named Kevin Mitnick was denied all access to communication technology after law enforcement officials [in California] claimed he could ‘start a nuclear war by whistling into a pay phone’. But in the end, he got the ruling overturned by arguing that it prevented him from living a normal life.

What about life expansion that meddles with a person’s perception of time? Take someone convicted of a heinous crime, like the torture and murder of a child. Would it be unethical to tinker with the brain so that this person experiences a 1,000-year jail sentence in his or her mind?

Roache: There are a number of psychoactive drugs that distort people’s sense of time, so you could imagine developing a pill or a liquid that made someone feel like they were serving a 1,000-year sentence. Of course, there is a widely held view that any amount of tinkering with a person’s brain is unacceptably invasive. But you might not need to interfere with the brain directly. There is a long history of using the prison environment itself to affect prisoners’ subjective experience. During the Spanish Civil War [in the 1930s] there was actually a prison where modern art was used to make the environment aesthetically unpleasant. Also, prison cells themselves have been designed to make them more claustrophobic, and some prison beds are specifically made to be uncomfortable.

I haven’t found any specific cases of time dilation being used in prisons, but time distortion is a technique that is sometimes used in interrogation, where people are exposed to constant light, or unusual light fluctuations, so that they can’t tell what time of day it is. But in that case it’s not being used as a punishment, per se, it’s being used to break people’s sense of reality so that they become more dependent on the interrogator, and more pliable as a result. In that sense, a time-slowing pill would be a pretty radical innovation in the history of penal technology.

I want to ask you a question that has some crossover with theological debates about hell. Suppose we eventually learn to put off death indefinitely, and that we extend this treatment to prisoners. Is there any crime that would justify eternal imprisonment? Take Hitler as a test case. Say the Soviets had gotten to the bunker before he killed himself, and say capital punishment was out of the question – would we have put him behind bars forever?

Roache: It’s tough to say. If you start out with the premise that a punishment should be proportional to the crime, it’s difficult to think of a crime that could justify eternal imprisonment. You could imagine giving Hitler one term of life imprisonment for every person killed in the Second World War. That would make for quite a long sentence, but it would still be finite. The endangerment of mankind as a whole might qualify as a sufficiently serious crime to warrant it. As you know, a great deal of the research we do here at the Oxford Martin School concerns existential risk. Suppose there was some physics experiment that stood a decent chance of generating a black hole that could destroy the planet and all future generations. If someone deliberately set up an experiment like that, I could see that being the kind of supercrime that would justify an eternal sentence.

In your forthcoming paper on this subject, you mention the possibility that convicts with a neurologically stunted capacity for empathy might one day be ‘emotionally enhanced’, and that the remorse felt by these newly empathetic criminals could be the toughest form of punishment around. Do you think a full moral reckoning with an awful crime is the most potent form of suffering an individual can endure?

Roache: I’m not sure. Obviously, it’s an empirical question as to which feels worse, genuine remorse or time in prison. There is certainly reason to take the claim seriously. For instance, in literature and folk wisdom, you often hear people saying things like, ‘The worst thing is I’ll have to live with myself.’ My own intuition is that for very serious crimes, genuine remorse could be subjectively worse than a prison sentence. But I doubt that’s the case for less serious crimes, where remorse isn’t even necessarily appropriate – like if you are wailing and beating yourself up for stealing a candy bar or something like that.

I remember watching a movie in school, about a teen that killed another teen in a drunk-driving accident. As one of the conditions of his probation, the judge in the case required him to mail a daily cheque for 25 cents to the parents of the teen he’d killed for a period of 10 years. Two years in, the teen was begging the judge to throw him in jail, just to avoid the daily reminder.

Roache: That’s an interesting case where prison is actually an escape from remorse, which is strange because one of the justifications for prison is that it’s supposed to focus your mind on what you have done wrong. Presumably, every day you wake up in prison, you ask yourself why you are there, right?

What if these emotional enhancements proved too effective? Suppose they are so powerful, they turn psychopaths into Zen masters who live in a constant state of deep, reflective contentment. Should that trouble us? Is mental suffering a necessary component of imprisonment?

Roache: There is a long-standing philosophical question as to how bad the prison experience should be. Retributivists, those who think the point of prisons is to punish, tend to think that it should be quite unpleasant, whereas consequentialists tend to be more concerned with a prison’s reformative effects, and its larger social costs. There are a number of prisons that offer prisoners constructive activities to participate in, including sports leagues, art classes, and even yoga. That practice seems to reflect the view that confinement, or the deprivation of liberty, is itself enough of a punishment. Of course, even for consequentialists, there has to be some level of suffering involved in punishment, because consequentialists are very concerned about deterrence.

I wanted to close by moving beyond imprisonment, to ask you about the future of punishment more broadly. Are there any alternative punishments that technology might enable, and that you can see on the horizon now? What surprising things might we see down the line?

Roache: We have been thinking a lot about surveillance and punishment lately. Already, we see governments using ankle bracelets to track people in various ways, and many of them are fairly elaborate. For instance, some of these devices allow you to commute to work, but they also give you a curfew and keep a close eye on your location. You can imagine this being refined further, so that your ankle bracelet bans you from entering establishments that sell alcohol. This could be used to punish people who happen to like going to pubs, or it could be used to reform severe alcoholics. Either way, technologies of this sort seem to be edging up to a level of behaviour control that makes some people uneasy, due to questions about personal autonomy.

It’s one thing to lose your personal liberty as a result of being confined in a prison, but you are still allowed to believe whatever you want while you are in there. In the UK, for instance, you cannot withhold religious manuscripts from a prisoner unless you have a very good reason. These concerns about autonomy become particularly potent when you start talking about brain implants that could potentially control behaviour directly. The classic example is Robert G Heath [a psychiatrist at Tulane University in New Orleans], who did this famously creepy experiment [in the 1950s] using electrodes in the brain in an attempt to modify behaviour in people who were prone to violent psychosis. The electrodes were ostensibly being used to treat the patients, but he was also, rather gleefully, trying to move them in a socially approved direction. You can really see that in his infamous [1972] paper on ‘curing’ homosexuals. I think most Western societies would say ‘no thanks’ to that kind of punishment.

To me, these questions about technology are interesting because they force us to rethink the truisms we currently hold about punishment. When we ask ourselves whether it’s inhumane to inflict a certain technology on someone, we have to make sure it’s not just the unfamiliarity that spooks us. And more importantly, we have to ask ourselves whether punishments like imprisonment are only considered humane because they are familiar, because we’ve all grown up in a world where imprisonment is what happens to people who commit crimes. Is it really OK to lock someone up for the best part of the only life they will ever have, or might it be more humane to tinker with their brains and set them free? When we ask that question, the goal isn’t simply to imagine a bunch of futuristic punishments – the goal is to look at today’s punishments through the lens of the future.

Link: Neil Postman: Informing Ourselves to Death

The following speech was given at a meeting of the German Informatics Society (Gesellschaft fuer Informatik) on October 11, 1990 in Stuttgart, Germany.

The great English playwright and social philosopher George Bernard Shaw once remarked that all professions are conspiracies against the common folk. He meant that those who belong to elite trades—physicians, lawyers, teachers, and scientists—protect their special status by creating vocabularies that are incomprehensible to the general public.  This process prevents outsiders from understanding what the profession is doing and why—and protects the insiders from close examination and criticism. Professions, in other words, build forbidding walls of technical gobbledegook over which the prying and alien eye cannot see.

Unlike George Bernard Shaw, I raise no complaint against this, for I consider myself a professional teacher and appreciate technical gobbledegook as much as anyone. But I do not object if occasionally someone who does not know the secrets of my trade is allowed entry to the inner halls to express an untutored point of view. Such a person may sometimes give a refreshing opinion or, even better, see something in a way that the professionals have overlooked.

I believe I have been invited to speak at this conference for just such a purpose. I do not know very much more about computer technology than the average person—which isn’t very much. I have little understanding of what excites a computer programmer or scientist, and in examining the descriptions of the presentations at this conference, I found each one more mysterious than the next. So, I clearly qualify as an outsider.

But I think that what you want here is not merely an outsider but an outsider who has a point of view that might be useful to the insiders. And that is why I accepted the invitation to speak. I believe I know something about what technologies do to culture, and I know even more about what technologies undo in a culture. In fact, I might say, at the start, that what a technology undoes is a subject that computer experts apparently know very little about. I have heard many experts in computer technology speak about the advantages that computers will bring. With one exception, namely Joseph Weizenbaum, I have never heard anyone speak seriously and comprehensively about the disadvantages of computer technology, which strikes me as odd, and makes me wonder if the profession is hiding something important. That is to say, what seems to be lacking among computer experts is a sense of technological modesty.

After all, anyone who has studied the history of technology knows that technological change is always a Faustian bargain: Technology giveth and technology taketh away, and not always in equal measure. A new technology sometimes creates more than it destroys. Sometimes, it destroys more than it creates.  But it is never one-sided.

The invention of the printing press is an excellent example.  Printing fostered the modern idea of individuality but it destroyed the medieval sense of community and social integration. Printing created prose but made poetry into an exotic and elitist form of expression. Printing made modern science possible but transformed religious sensibility into an exercise in superstition. Printing assisted in the growth of the nation-state but, in so doing, made patriotism into a sordid if not a murderous emotion.

In the case of computer technology, there can be no disputing that the computer has increased the power of large-scale organizations like military establishments or airline companies or banks or tax collecting agencies. And it is equally clear that the computer is now indispensable to high-level researchers in physics and other natural sciences. But to what extent has computer technology been an advantage to the masses of people? To steel workers, vegetable store owners, teachers, automobile mechanics, musicians, bakers, brick layers, dentists and most of the rest into whose lives the computer now intrudes? These people have had their private matters made more accessible to powerful institutions. They are more easily tracked and controlled; they are subjected to more examinations, and are increasingly mystified by the decisions made about them. They are more often reduced to mere numerical objects. They are being buried by junk mail. They are easy targets for advertising agencies and political organizations. The schools teach their children to operate computerized systems instead of teaching things that are more valuable to children. In a word, almost nothing happens to the losers that they need, which is why they are losers.

It is to be expected that the winners—for example, most of the speakers at this conference—will encourage the losers to be enthusiastic about computer technology. That is the way of winners, and so they sometimes tell the losers that with personal computers the average person can balance a checkbook more neatly, keep better track of recipes, and make more logical shopping lists. They also tell them that they can vote at home, shop at home, get all the information they wish at home, and thus make community life unnecessary. They tell them that their lives will be conducted more efficiently, discreetly neglecting to say from whose point of view or what might be the costs of such efficiency.

Should the losers grow skeptical, the winners dazzle them with the wondrous feats of computers, many of which have only marginal relevance to the quality of the losers’ lives but which are nonetheless impressive. Eventually, the losers succumb, in part because they believe that the specialized knowledge of the masters of a computer technology is a form of wisdom. The masters, of course, come to believe this as well.  The result is that certain questions do not arise, such as, to whom will the computer give greater power and freedom, and whose power and freedom will be reduced?

Now, I have perhaps made all of this sound like a well-planned conspiracy, as if the winners know all too well what is being won and what lost. But this is not quite how it happens, for the winners do not always know what they are doing, and where it will all lead. The Benedictine monks who invented the mechanical clock in the 12th and 13th centuries believed that such a clock would provide a precise regularity to the seven periods of devotion they were required to observe during the course of the day.  As a matter of fact, it did. But what the monks did not realize is that the clock is not merely a means of keeping track of the hours but also of synchronizing and controlling the actions of men. And so, by the middle of the 14th century, the clock had moved outside the walls of the monastery, and brought a new and precise regularity to the life of the workman and the merchant. The mechanical clock made possible the idea of regular production, regular working hours, and a standardized product. Without the clock, capitalism would have been quite impossible. And so, here is a great paradox: the clock was invented by men who wanted to devote themselves more rigorously to God; and it ended as the technology of greatest use to men who wished to devote themselves to the accumulation of money. Technology always has unforeseen consequences, and it is not always clear, at the beginning, who or what will win, and who or what will lose.

I might add, by way of another historical example, that Johann Gutenberg was by all accounts a devoted Christian who would have been horrified to hear Martin Luther, the accursed heretic, declare that printing is “God’s highest act of grace, whereby the business of the Gospel is driven forward.” Gutenberg thought his invention would advance the cause of the Holy Roman See, whereas in fact, it turned out to bring a revolution which destroyed the monopoly of the Church.

We may well ask ourselves, then, is there something that the masters of computer technology think they are doing for us which they and we may have reason to regret? I believe there is, and it is suggested by the title of my talk, “Informing Ourselves to Death”. In the time remaining, I will try to explain what is dangerous about the computer, and why. And I trust you will be open enough to consider what I have to say. Now, I think I can begin to get at this by telling you of a small experiment I have been conducting, on and off, for the past several years. There are some people who describe the experiment as an exercise in deceit and exploitation but I will rely on your sense of humor to pull me through.

Here’s how it works: It is best done in the morning when I see a colleague who appears not to be in possession of a copy of The New York Times. “Did you read The Times this morning?” I ask. If the colleague says yes, there is no experiment that day. But if the answer is no, the experiment can proceed. “You ought to look at Page 23,” I say. “There’s a fascinating article about a study done at Harvard University.” “Really? What’s it about?” is the usual reply. My choices at this point are limited only by my imagination. But I might say something like this: “Well, they did this study to find out what foods are best to eat for losing weight, and it turns out that a normal diet supplemented by chocolate eclairs, eaten six times a day, is the best approach. It seems that there’s some special nutrient in the eclairs—encomial dioxin—that actually uses up calories at an incredible rate.”

Another possibility, which I like to use with colleagues who are known to be health conscious is this one: “I think you’ll want to know about this,” I say. “The neuro-physiologists at the University of Stuttgart have uncovered a connection between jogging and reduced intelligence. They tested more than 1200 people over a period of five years, and found that as the number of hours people jogged increased, there was a corresponding decrease in their intelligence. They don’t know exactly why but there it is.”

I’m sure, by now, you understand what my role is in the experiment: to report something that is quite ridiculous—one might say, beyond belief. Let me tell you, then, some of my results: Unless this is the second or third time I’ve tried this on the same person, most people will believe or at least not disbelieve what I have told them. Sometimes they say: “Really? Is that possible?” Sometimes they do a double-take, and reply, “Where’d you say that study was done?” And sometimes they say, “You know, I’ve heard something like that.”

Now, there are several conclusions that might be drawn from these results, one of which was expressed by H. L. Mencken fifty years ago when he said, there is no idea so stupid that you can’t find a professor who will believe it. This is more of an accusation than an explanation but in any case I have tried this experiment on non-professors and get roughly the same results. Another possible conclusion is one expressed by George Orwell—also about 50 years ago—when he remarked that the average person today is about as naive as was the average person in the Middle Ages. In the Middle Ages people believed in the authority of their religion, no matter what. Today, we believe in the authority of our science, no matter what.

But I think there is still another and more important conclusion to be drawn, related to Orwell’s point but rather off at a right angle to it. I am referring to the fact that the world in which we live is very nearly incomprehensible to most of us. There is almost no fact—whether actual or imagined—that will surprise us for very long, since we have no comprehensive and consistent picture of the world which would make the fact appear as an unacceptable contradiction. We believe because there is no reason not to believe. No social, political, historical, metaphysical, logical or spiritual reason. We live in a world that, for the most part, makes no sense to us. Not even technical sense. I don’t mean to try my experiment on this audience, especially after having told you about it, but if I informed you that the seats you are presently occupying were actually made by a special process which uses the skin of a Bismarck herring, on what grounds would you dispute me? For all you know—indeed, for all I know—the skin of a Bismarck herring could have made the seats on which you sit. And if I could get an industrial chemist to confirm this fact by describing some incomprehensible process by which it was done, you would probably tell someone tomorrow that you spent the evening sitting on a Bismarck herring.

Perhaps I can get a bit closer to the point I wish to make with an analogy: If you opened a brand-new deck of cards, and started turning the cards over, one by one, you would have a pretty good idea of what their order is. After you had gone from the ace of spades through the nine of spades, you would expect a ten of spades to come up next. And if a three of diamonds showed up instead, you would be surprised and wonder what kind of deck of cards this is. But if I gave you a deck that had been shuffled twenty times, and then asked you to turn the cards over, you would not expect any card in particular; a three of diamonds would be just as likely as a ten of spades. Having no basis for assuming a given order, you would have no reason to react with disbelief or even surprise to whatever card turns up.

The point is that, in a world without spiritual or intellectual order, nothing is unbelievable; nothing is predictable, and therefore, nothing comes as a particular surprise.

In fact, George Orwell was more than a little unfair to the average person in the Middle Ages. The belief system of the Middle Ages was rather like my brand-new deck of cards. There existed an ordered, comprehensible world-view, beginning with the idea that all knowledge and goodness come from God. What the priests had to say about the world was derived from the logic of their theology. There was nothing arbitrary about the things people were asked to believe, including the fact that the world itself was created at 9 AM on October 23 in the year 4004 B. C. That could be explained, and was, quite lucidly, to the satisfaction of anyone. So could the fact that 10,000 angels could dance on the head of a pin. It made quite good sense, if you believed that the Bible is the revealed word of God and that the universe is populated with angels. The medieval world was, to be sure, mysterious and filled with wonder, but it was not without a sense of order. Ordinary men and women might not clearly grasp how the harsh realities of their lives fit into the grand and benevolent design, but they had no doubt that there was such a design, and their priests were well able, by deduction from a handful of principles, to make it, if not rational, at least coherent.

The situation we are presently in is much different. And I should say, sadder and more confusing and certainly more mysterious. It is rather like the shuffled deck of cards I referred to. There is no consistent, integrated conception of the world which serves as the foundation on which our edifice of belief rests. And therefore, in a sense, we are more naive than those of the Middle Ages, and more frightened, for we can be made to believe almost anything. The skin of a Bismarck herring makes about as much sense as a vinyl alloy or encomial dioxin.

Now, in a way, none of this is our fault. If I may turn the wisdom of Cassius on its head: the fault is not in ourselves but almost literally in the stars. When Galileo turned his telescope toward the heavens, and allowed Kepler to look as well, they found no enchantment or authorization in the stars, only geometric patterns and equations. God, it seemed, was less of a moral philosopher than a master mathematician. This discovery helped to give impetus to the development of physics but did nothing but harm to theology. Before Galileo and Kepler, it was possible to believe that the Earth was the stable center of the universe, and that God took a special interest in our affairs. Afterward, the Earth became a lonely wanderer in an obscure galaxy in a hidden corner of the universe, and we were left to wonder if God had any interest in us at all. The ordered, comprehensible world of the Middle Ages began to unravel because people no longer saw in the stars the face of a friend.

And something else, which once was our friend, turned against us, as well. I refer to information. There was a time when information was a resource that helped human beings to solve specific and urgent problems of their environment. It is true enough that in the Middle Ages, there was a scarcity of information but its very scarcity made it both important and usable. This began to change, as everyone knows, in the late 15th century when a goldsmith named Gutenberg, from Mainz, converted an old wine press into a printing machine, and in so doing, created what we now call an information explosion. Forty years after the invention of the press, there were printing machines in 110 cities in six different countries; 50 years after, more than eight million books had been printed, almost all of them filled with information that had previously not been available to the average person. Nothing could be more misleading than the idea that computer technology introduced the age of information. The printing press began that age, and we have not been free of it since.

But what started out as a liberating stream has turned into a deluge of chaos. If I may take my own country as an example, here is what we are faced with: In America, there are 260,000 billboards; 11,520 newspapers; 11,556 periodicals; 27,000 video outlets for renting tapes; 362 million tv sets; and over 400 million radios. There are 40,000 new book titles published every year (300,000 world-wide) and every day in America 41 million photographs are taken, and just for the record, over 60 billion pieces of advertising junk mail come into our mail boxes every year. Everything from telegraphy and photography in the 19th century to the silicon chip in the twentieth has amplified the din of information, until matters have reached such proportions today that for the average person, information no longer has any relation to the solution of problems.

The tie between information and action has been severed. Information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one’s status. It comes indiscriminately, directed at no one in particular, disconnected from usefulness; we are glutted with information, drowning in information, have no control over it, don’t know what to do with it.

And there are two reasons we do not know what to do with it. First, as I have said, we no longer have a coherent conception of ourselves, and our universe, and our relation to one another and our world. We no longer know, as the Middle Ages did, where we come from, and where we are going, or why. That is, we don’t know what information is relevant, and what information is irrelevant to our lives. Second, we have directed all of our energies and intelligence to inventing machinery that does nothing but increase the supply of information. As a consequence, our defenses against information glut have broken down; our information immune system is inoperable. We don’t know how to filter it out; we don’t know how to reduce it; we don’t know how to use it. We suffer from a kind of cultural AIDS.

Link: Rural > City > Cyberspace

A series of psychological studies over the past 20 years has revealed that after spending time in a quiet rural setting, close to nature, people exhibit greater attentiveness, stronger memory, and generally improved cognition. Their brains become both calmer and sharper. The reason, according to attention restoration theory, or ART, is that when people aren’t being bombarded by external stimuli, their brains can, in effect, relax. They no longer have to tax their working memories by processing a stream of bottom-up distractions. The resulting state of contemplativeness strengthens their ability to control their mind.

The results of the most recent such study were published in Psychological Science at the end of 2008. A team of University of Michigan researchers, led by psychologist Marc Berman, recruited some three dozen people and subjected them to a rigorous and mentally fatiguing series of tests designed to measure the capacity of their working memory and their ability to exert top-down control over their attention. The subjects were divided into two groups. Half of them spent about an hour walking through a secluded woodland park, and the other half spent an equal amount of time walking along busy downtown streets. Both groups then took the tests a second time. Spending time in the park, the researchers found, “significantly improved” people’s performance on the cognitive tests, indicating a substantial increase in attentiveness. Walking in the city, by contrast, led to no improvement in test results.

The researchers then conducted a similar experiment with another set of people. Rather than taking walks between the rounds of testing, these subjects simply looked at photographs of either calm rural scenes or busy urban ones. The results were the same. The people who looked at pictures of nature scenes were able to exert substantially stronger control over their attention, while those who looked at city scenes showed no improvement in their attentiveness. “In sum,” concluded the researchers, “simple and brief interactions with nature can produce marked increases in cognitive control.” Spending time in the natural world seems to be of “vital importance” to “effective cognitive functioning.”

There is no Sleepy Hollow on the internet, no peaceful spot where contemplativeness can work its restorative magic. There is only the endless, mesmerizing buzz of the urban street. The stimulations of the web, like those of the city, can be invigorating and inspiring. We wouldn’t want to give them up. But they are, as well, exhausting and distracting. They can easily, as Hawthorne understood, overwhelm all quieter modes of thought. One of the greatest dangers we face as we automate the work of our minds, as we cede control over the flow of our thoughts and memories to a powerful electronic system, is the one that informs the fears of both the scientist Joseph Weizenbaum and the artist Richard Foreman: a slow erosion of our humanness and our humanity.

It’s not only deep thinking that requires a calm, attentive mind. It’s also empathy and compassion. Psychologists have long studied how people experience fear and react to physical threats, but it’s only recently that they’ve begun researching the sources of our nobler instincts. What they’re finding is that, as Antonio Damasio, the director of USC’s Brain and Creativity Institute, explains, the higher emotions emerge from neural processes that “are inherently slow.” In one recent experiment, Damasio and his colleagues had subjects listen to stories describing people experiencing physical or psychological pain. The subjects were then put into a magnetic resonance imaging machine and their brains were scanned as they were asked to remember the stories. The experiment revealed that while the human brain reacts very quickly to demonstrations of physical pain – when you see someone injured, the primitive pain centers in your own brain activate almost instantaneously – the more sophisticated mental process of empathizing with psychological suffering unfolds much more slowly. It takes time, the researchers discovered, for the brain “to transcend immediate involvement of the body” and begin to understand and to feel “the psychological and moral dimensions of a situation.”

The experiment, say the scholars, indicates that the more distracted we become, the less able we are to experience the subtlest, most distinctively human forms of empathy, compassion, and other emotions. “For some kinds of thoughts, especially moral decision-making about other people’s social and psychological situations, we need to allow for adequate time and reflection,” cautions Mary Helen Immordino-Yang, a member of the research team. “If things are happening too fast, you may not ever fully experience emotions about other people’s psychological states.” It would be rash to jump to the conclusion that the internet is undermining our moral sense. It would not be rash to suggest that as the net reroutes our vital paths and diminishes our capacity for contemplation, it is altering the depth of our emotions as well as our thoughts.

There are those who are heartened by the ease with which our minds are adapting to the web’s intellectual ethic. “Technological progress does not reverse,” writes a Wall Street Journal columnist, “so the trend toward multitasking and consuming many different types of information will only continue.” We need not worry, though, because our “human software” will in time “catch up to the machine technology that made the information abundance possible.” We’ll “evolve” to become more agile consumers of data. The writer of a cover story in New York magazine says that as we become used to “the 21st-century task” of “flitting” among bits of online information, “the wiring of the brain will inevitably change to deal more efficiently with more information.” We may lose our capacity “to concentrate on a complex task from beginning to end,” but in recompense we’ll gain new skills, such as the ability to “conduct 34 conversations simultaneously across six different media.” A prominent economist writes, cheerily, that “the web allows us to borrow cognitive strengths from autism and to be better infovores.” An Atlantic author suggests that our “technology-induced ADD” may be “a short-term problem,” stemming from our reliance on “cognitive habits evolved and perfected in an era of limited information flow.” Developing new cognitive habits is “the only viable approach to navigating the age of constant connectivity.”

These writers are certainly correct in arguing that we’re being molded by our new information environment. Our mental adaptability, built into the deepest workings of our brains, is a keynote of intellectual history. But if there’s comfort in their reassurances, it’s of a very cold sort. Adaptation leaves us better suited to our circumstances, but qualitatively it’s a neutral process. What matters in the end is not our becoming but what we become. In the 1950s, Martin Heidegger observed that the looming “tide of technological revolution” could “so captivate, bewitch, dazzle, and beguile man that calculative thinking may someday come to be accepted and practiced as the only way of thinking.” Our ability to engage in “meditative thinking,” which he saw as the very essence of our humanity, might become a victim of headlong progress. The tumultuous advance of technology could, like the arrival of the locomotive at the Concord station, drown out the refined perceptions, thoughts, and emotions that arise only through contemplation and reflection. The “frenziedness of technology,” Heidegger wrote, threatens to “entrench itself everywhere.”

It may be that we are now entering the final stage of that entrenchment. We are welcoming the frenziedness into our souls.

Link: Forever Alone: Why Loneliness Matters in the Social Age

I got up and went over and looked out the window. I felt so lonesome, all of a sudden. I almost wished I was dead. Boy, did I feel rotten. I felt so damn lonesome. I just didn’t want to hang around any more. It made me too sad and lonesome.

— J.D. Salinger, The Catcher in the Rye

Loneliness was a problem I experienced most poignantly in college. In the three years I spent at Carnegie Mellon, the crippling effects of loneliness slowly pecked away at my enthusiasm for learning and for life, until I was drowning in an endless depressive haze that never completely cleared until I left Pittsburgh.

It wasn’t for lack of trying either. At the warm behest of the orientation counselors, I joined just the right number of clubs, participated in most of the dorm activities, and tried to expand my social portfolio as much as possible.

None of it worked.

To the extent that I sought out CAPS (our student psych and counseling service) for help, the platitudes they offered as advice (“Just put yourself out there!”) only served to confirm my suspicion that loneliness isn’t a very visible problem. (After all, the cure for loneliness isn’t exactly something that could be prescribed. “Have you considered transferring?” they finally suggested, after exhausting their list of thought-terminating clichés. I graduated early instead.)

As prolonged loneliness took its toll, I became very unhappy—to put it lightly—and even in retrospect I have difficulty pinpointing a specific cause. It wasn’t that I didn’t know anyone or failed to make any friends, and it wasn’t that I was alone more than I liked.

Sure, I could point my finger at the abysmally fickle weather patterns of Pittsburgh, or the pseudo-suburban bubble that envelops the campus. There might even be a correlation between my academic dissonance with computer science and my feelings of loneliness. I might also just be an extremely unlikable person.

Whatever the reason (or confluence thereof), the reality remained that I struggled with loneliness throughout my time in college.

+++

I recall a conversation with my friend Dev one particular evening on the patio of our dormitory. It was the beginning of my junior and last year at CMU, and I had just finished throwing an ice cream party for the residents I oversaw as an RA.

“Glad to be back?” he asked as he plopped down on a lawn chair beside me.

“No, not really.”

The sun was setting, and any good feelings about the upcoming semester with it. We made small talk about the school in general, as he had recently transferred, but eventually Dev asked me if I was happy there.

“No, not really.”

“Why do you think you’re so miserable here?”

“I don’t know. A lot of things, I guess. But mostly because I feel lonely. Like I don’t belong, like I can’t relate to or connect with anyone on an emotional level. I haven’t made any quality relationships here that I would look back on with any fond memories. Fuck… I don’t know what to do.”

College, at least for me, was a harrowing exercise in how helplessly debilitating, hopelessly soul-crushing, and at times life-threatening loneliness could be. It’s a problem nobody talks about, and it’s been a subject of much personal relevance and interest.

Loneliness as a Health Problem

A recent article published on Slate outlines the hidden dangers of social isolation. Chronic loneliness, as Jessica Olien discovered, poses serious health risks that not only impact mental health but physiological well-being as well.

A lack of quality social relationships has been linked to an increase in mortality risk comparable to that from smoking and alcohol consumption, one that exceeds the influence of other risk factors like physical inactivity and obesity. It’s hard to brush off loneliness as a character flaw or an ephemeral feeling when you realize it kills more people than obesity.

Research also shows that loneliness diminishes sleep quality and impairs physiological function, in some cases reducing immune function and boosting inflammation, which increases risk for diabetes and heart disease.

Why hasn’t loneliness gotten much attention as a medical problem? Olien shares the following observation:

As a culture we obsess over strategies to prevent obesity. We provide resources to help people quit smoking. But I have never had a doctor ask me how much meaningful social interaction I am getting. Even if a doctor did ask, it is not as though there is a prescription for meaningful social interaction.

As a society we look down on those who admit to being lonely; we cast them out and ostracize them with labels like “loners,” so that many prefer to hide behind shame and doubt rather than speak up. This dynamic only makes it harder to devise solutions to what is clearly a larger societal issue, and it certainly calls into question the effects of culture on how we perceive loneliness as a problem.

Loneliness as a Culture Problem

Stephen Fry, in a blog post titled Only the Lonely, which explains his suicide attempt last year, describes in detail his struggle with depression. His account offers a rare and candid glimpse into the reality of loneliness, which those afflicted often hide from the public:

Lonely? I get invitation cards through the post almost every day. I shall be in the Royal Box at Wimbledon and I have serious and generous offers from friends asking me to join them in the South of France, Italy, Sicily, South Africa, British Columbia and America this summer. I have two months to start a book before I go off to Broadway for a run of Twelfth Night there.

I can read back that last sentence and see that, bipolar or not, if I’m under treatment and not actually depressed, what the fuck right do I have to be lonely, unhappy or forlorn? I don’t have the right. But there again I don’t have the right not to have those feelings. Feelings are not something to which one does or does not have rights.

In the end loneliness is the most terrible and contradictory of my problems.

In the United States, approximately 60 million people, or 20% of the population, feel lonely. According to the General Social Survey, between 1985 and 2004, the number of people with whom the average American discusses important matters decreased from three to two, and the number with no one to discuss important matters with tripled.

Modernization has been cited as a reason for the intensification of loneliness in every society around the world, an effect attributed to greater migration, smaller household sizes, and a larger degree of media consumption.

In Japan, loneliness is an even more pervasive, layered problem mired in cultural parochialisms. Gideon Lewis-Kraus pens a beautiful narrative in Harper’s in which he describes his foray into the world of Japanese co-sleeping cafés:

“Why do you think he came here, to the sleeping café?”

“He wanted five-second hug maybe because he had no one to hug. Japan is haji culture. Shame. Is shame culture. Or maybe also is shyness. I don’t know why. Tokyo people … very alone. And he does not have … ” She thought for a second, shrugged, reached for her phone. “Please hold moment.”

She held it close to her face, multitouched the screen not with thumb and forefinger but with tiny forefinger and middle finger. I could hear another customer whispering in Japanese in the silk-walled cubicle at our feet. His co-sleeper laughed loudly, then laughed softly. Yukiko tapped a button and shone the phone at my face. The screen said COURAGE.

It took an enormous effort for me to come to terms with my losing battle with loneliness and the ensuing depression at CMU, and an even greater leap of faith to reach out for help. (That it was to no avail is another story altogether.) But what is even more disconcerting to me is that the general stigma against loneliness and mental health issues, hinging on an unhealthy stress culture, makes it hard for afflicted students to seek assistance at all.

As Olien puts it, “In a society that judges you based on how expansive your social networks appear, loneliness is difficult to fess up to. It feels shameful.”

To truly combat loneliness from a cultural angle, we need to start by examining our own fears about being alone and recognizing that, for humans, loneliness is often symptomatic of unfulfilled social needs. Most importantly, we need to accept that it’s okay to feel lonely. Fry, signing off on his heartfelt post, offers this insight:

Loneliness is not much written about (my spell-check wanted me to say that loveliness is not much written about—how wrong that is) but humankind is a social species and maybe it’s something we should think about more than we do.

Loneliness as a Technology Problem

Technology, and by extension media consumption in the Internet age, adds the most perplexing (and perhaps the most interesting) dimension to the loneliness problem. As it turns out, technology isn’t necessarily helping us feel more connected; in some cases, it makes loneliness worse.

The amount of time you spend on Facebook, as a recent study found, is inversely related to how happy you feel throughout the day.

Take a moment to watch this video.

It’s a powerful, sobering reminder that our growing dependence on technology to communicate has serious social repercussions, to which Cohen presents his central thesis:

We are lonely, but we’re afraid of intimacy, while the social networks offer us three gratifying fantasies: 1) That we can put our attention wherever we want it to be. 2) That we will always be heard. 3) That we will never have to be alone.

And that third idea, that we will never have to be alone, is central to changing our psyches. It’s shaping a new way of being. The best way to describe it is:

I share, therefore I am.

Public discourse on the cultural ramifications of technology is certainly not a recent development, and the general sentiment that our perverse obsession with sharing will be humanity’s downfall continues to echo in various forms around the web: articles proclaiming that Instagram is ruining people’s lives, the existence of a section on Reddit called cringepics where people congregate to ridicule things others post on the Internet, the increasing number of self-proclaimed “social media gurus” on Twitter, to name a few.

The signs seem to suggest we have reached a tipping point for “social” media that’s not very social on a personal level, but whether it means a catastrophic implosion or a gradual return to more authentic forms of interpersonal communications remains to be seen.

While technology has been a source of social isolation for many, it has the capacity to alleviate loneliness as well. A study funded by the online dating site eHarmony shows that couples who met online are less likely to divorce and report greater marital satisfaction than those who met in real life.

The same model could potentially be applied to friendships, and it’s frustrating to see that there aren’t more startups leveraging this opportunity when the problem is so immediate and in need of solutions. It’s a matter of exposure and education on the truths of loneliness, and unfortunately we’re just not there yet.

+++

The perils of loneliness shouldn’t be overlooked in an increasingly hyperconnected world that often tells another story through rose-tinted lenses. Rather, the gravity of loneliness should be addressed and brought to light as a multifaceted problem, one often muted and stigmatized in our society. I learned firsthand how painfully real a problem loneliness can be, and more should be done to spread awareness of it and to help those affected.

“What do you think I should do?” I looked at Dev as the last traces of sunlight teetered over the top of Morewood Gardens. It was a rhetorical question—things weren’t about to get better.

“Find better people,” he replied.

I offered him a weak smile in return, but little did I know then how prescient those words were.

In the year that followed, I started a fraternity with some of the best kids I’d come to know (Dev included), graduated college and moved to San Francisco, made some of the best friends I’ve ever had, and never looked back, if only to remember, and remember well, that it’s never easy being lonely.

Link: "We Need to Talk About TED"

This is my rant against TED, placebo politics, “innovation,” middlebrow megachurch infotainment, etc., given at TEDx San Diego at their invitation (thank you to Jack Abbott and Felena Hanson). It’s very difficult to do anything interesting within the format, and even this seems like far too much of a ‘TED talk’, especially to me. In California R&D World, TED (and TED-ism) is unfortunately a key forum for how people communicate with one another. It’s weird, inadequate and symptomatic, to be sure, but it is one of ‘our’ key public squares, however degraded and captured. Obviously any sane intellectual wouldn’t go near it. Perhaps that’s why I was (am) curious about what (if any) reverberation my very minor heresy might have: probably nothing, and at worst an alibi and vaccine for TED to ward off the malaise that stalks them? We’ll have to see. The text of the talk is below, and was also published as an op-ed by The Guardian.

In our culture, talking about the future is sometimes a polite way of saying things about the present that would otherwise be rude or risky.

But have you ever wondered why so little of the future promised in TED talks actually happens? So much potential and enthusiasm, and so little actual change. Are the ideas wrong? Or is the idea about what ideas can do all by themselves wrong?

I write about entanglements of technology and culture, how technologies enable the making of certain worlds, and at the same time how culture structures how those technologies will evolve, this way or that. It’s where philosophy and design intersect.

So the conceptualization of possibilities is something that I take very seriously. That’s why I, and many people, think it’s way past time to take a step back and ask some serious questions about the intellectual viability of things like TED.

So my TED talk is not about my work or my new book—the usual spiel—but about TED itself, what it is and why it doesn’t work.

The first reason is over-simplification.

To be clear, I think that having smart people who do very smart things explain what they are doing in a way that everyone can understand is a good thing. But TED goes way beyond that.

Let me tell you a story. I was at a presentation that a friend, an Astrophysicist, gave to a potential donor. I thought the presentation was lucid and compelling (and I’m a Professor of Visual Arts here at UC San Diego so at the end of the day, I know really nothing about Astrophysics). After the talk the sponsor said to him, “you know what, I’m gonna pass because I just don’t feel inspired… you should be more like Malcolm Gladwell.”

At this point I kind of lost it. Can you imagine?

Think about it: an actual scientist who produces actual knowledge should be more like a journalist who recycles fake insights! This is beyond popularization. This is taking something with value and substance  and coring it out so that it can be swallowed without chewing. This is not the solution to our most frightening problems—rather this is one of our most frightening problems.

So I ask the question: does TED epitomize a situation in which a scientist (or an artist or philosopher or activist or whoever) is told that their work is not worthy of support, because the public doesn’t feel good listening to them?

I submit that Astrophysics run on the model of American Idol is a recipe for civilizational disaster.

What is TED?

So what is TED exactly?

Perhaps it’s the proposition that if we talk about world-changing ideas enough, then the world will change.  But this is not true, and that’s the second problem.

TED of course stands for Technology, Entertainment, Design, and I’ll talk a bit about all three. I think TED actually stands for: middlebrow megachurch infotainment.

The key rhetorical device for TED talks is a combination of epiphany and personal testimony (an “epiphimony,” if you like) through which the speaker shares a personal journey of insight and realization, its triumphs and tribulations.

What is it that the TED audience hopes to get from this? A vicarious insight, a fleeting moment of wonder, an inkling that maybe it’s all going to work out after all? A spiritual buzz?

I’m sorry but this fails to meet the challenges that we are supposedly here to confront. These are complicated and difficult problems, and they are not given to tidy just-so solutions. They don’t care about anyone’s experience of optimism. Given the stakes, making our best and brightest waste their time – and the audience’s time – dancing like infomercial hosts is too high a price. It is cynical.

Also, it just doesn’t work.

Recently there was a bit of a dust-up when TED Global sent out a note to TEDx organizers asking them not to book speakers whose work spans the paranormal, the conspiratorial, New Age “quantum neuroenergy,” etc.: what is called Woo. Instead of these placebos, TEDx should curate talks that are imaginative but grounded in reality. In fairness, they took some heat, so their gesture should be acknowledged. A lot of people take TED very seriously, and might lend credence to specious ideas if stamped with TED credentials. “No” to placebo science and medicine.

But… the corollaries of placebo science and placebo medicine are placebo politics and placebo innovation. On this point, TED has a long way to go.

Perhaps the pinnacle of placebo politics and innovation was featured at TEDx San Diego in 2011. You’re familiar I assume with Kony2012, the social media campaign to stop war crimes in central Africa? So what happened here? Evangelical surfer Bro goes to help kids in Africa. He makes a campy video explaining genocide to the cast of Glee. The world finds his public epiphany to be shallow to the point of self-delusion. The complex geopolitics of Central Africa are left undisturbed. Kony’s still there. The end.

You see, when inspiration becomes manipulation, inspiration becomes obfuscation. If you are not cynical you should be skeptical. You should be as skeptical of placebo politics as you are of placebo medicine.

T and Technology

T - E - D. I’ll go through them each quickly.

So first Technology…

We hear that not only is change accelerating but that the pace of change is accelerating as well.

While this is true of computational carrying-capacity at a planetary level, at the same time—and in fact the two are connected—we are also in a moment of cultural de-acceleration.

We invest our energy in futuristic information technologies, including our cars, but drive them home to kitsch architecture copied from the 18th century. The future on offer is one in which everything changes, so long as everything stays the same. We’ll have Google Glass, but still also business casual.

This timidity is our path to the future? No, this is incredibly conservative, and there is no reason to think that more Gigaflops will inoculate us.

Because, if a problem is in fact endemic to a system, then the exponential effects of Moore’s Law also serve to amplify what’s broken. It is more computation along the wrong curve, and I don’t think this is necessarily a triumph of reason.

Part of my work explores deep technocultural shifts, from post-humanism to the post-anthropocene, but TED’s version has too much faith in technology, and not nearly enough commitment to technology. It is placebo technoradicalism, toying with risk so as to re-affirm the comfortable.

So our machines get smarter and we get stupider. But it doesn’t have to be like that. Both can be much more intelligent. Another futurism is possible.

E and Economics

A better ‘E’ in TED would stand for Economics, and the need for, yes imagining and designing, different systems of valuation, exchange, accounting of transaction externalities, financing of coordinated planning, etc. Because States plus Markets, States versus Markets, these are insufficient models, and our conversation is stuck in Cold War gear.

Worse is when economics is debated like metaphysics, as if the reality of a system is merely a bad example of the ideal.

Communism in theory is an egalitarian utopia.

Actually existing Communism meant ecological devastation, government spying, crappy cars and gulags.

Capitalism in theory is rocket ships, nanomedicine, and Bono saving Africa.

Actually existing Capitalism means Walmart jobs, McMansions, people living in the sewers under Las Vegas, Ryan Seacrest…plus —ecological devastation, government spying, crappy public transportation and for-profit prisons.

Our options for change range from basically what we have plus a little more Hayek, to what we have plus a little more Keynes. Why?

The most  recent centuries have seen extraordinary accomplishments in improving quality of life. The paradox is that the system we have now —whatever you want to call it— is in the short term what makes the amazing new technologies possible, but in the long run it is also what suppresses their full flowering.  Another economic architecture is prerequisite.

D and Design

Instead of our designers prototyping the same “change agent for good” projects over and over again, and then wondering why they don’t get implemented at scale, perhaps we should resolve that design is not some magic answer. Design matters a lot, but for very different reasons. It’s easy to get enthusiastic about design because, like talking about the future, it is more polite than referring to white elephants in the room.

Such as…

Phones, drones and genomes, that’s what we do here in San Diego and La Jolla. In addition to the other  insanely great things these technologies do, they are the basis of NSA spying, flying robots killing people, and the wholesale privatization of  biological life itself. That’s also what we do.

The potential of these technologies is both wonderful and horrifying at the same time, and to make them serve good futures, design as “innovation” just isn’t a strong enough idea by itself. We need to talk more about design as “immunization,” actively preventing certain potential “innovations” that we do not want from happening.

And so…

As for one simple take away… I don’t have one simple take away, one magic idea. That’s kind of the point. I will say that if and when the key problems facing our species were to be solved, then perhaps many of us in this room would be out of work (and perhaps in jail).

But it’s not as though there is a shortage of topics for serious discussion. We need a deeper conversation about the difference between digital cosmopolitanism and Cloud Feudalism (and toward that, a queer history of computer science and Alan Turing’s birthday as holiday!)

I would like new maps of the world, ones not based on settler colonialism, legacy genomes and bronze age myths, but instead on something more… scalable.

TED today is not that.

Problems are not “puzzles” to be solved. That metaphor assumes that all the necessary pieces are already on the table, they just need to be re-arranged and re-programmed. It’s not true.

“Innovation” defined as moving the pieces around and adding more processing power is not some Big Idea that will disrupt a broken status quo: that precisely is the broken status quo.

One TED speaker said recently, “If you remove this boundary, …the only boundary left is our imagination.” Wrong.

If we really want transformation, we have to slog through the hard stuff (history, economics, philosophy, art, ambiguities, contradictions).  Bracketing it off to the side to focus just on technology, or just on innovation, actually prevents transformation.

Instead of dumbing-down the future, we need to raise the level of general understanding to the level of complexity of the systems in which we are embedded and which are embedded in us. This is not about “personal stories of inspiration,” it’s about the difficult and uncertain work of de-mystification and re-conceptualization: the hard stuff that really changes how we think. More Copernicus, less Tony Robbins.

At a societal level, the bottom line is that if we invest in things that make us feel good but which don’t work, and don’t invest in things that don’t make us feel good but which may solve problems, then our fate is that it will just get harder to feel good about not solving problems.

In this case the placebo is worse than ineffective, it’s harmful. It diverts your interest, enthusiasm and outrage until it’s absorbed into this black hole of affectation.

Keep calm and carry on “innovating”… is that the real message of TED? To me that’s not inspirational, it’s cynical.

In the U.S. the right-wing has certain media channels that allow it to bracket reality… other constituencies have TED.