Sunshine Recorder

Link: We Can Be Heroes

By associating itself with the selfish pleasures of visibility, the wearable camera GoPro has staved off criticism for how it enhances surveillance.

In On Photography, Susan Sontag laments the disconnected voyeurism photography produces. “Photographs are a way of imprisoning reality,” she declares. Watching people stare at their phones rather than the world around them suggests that she was spot on. Capitalizing on that widely held impression, the thriving camera company GoPro sells a different view: Cameras don’t have to imprison reality; they can encourage you to engage with the world as fully as possible — all while documenting it, of course.

Since the advent of photography, we have craved cameras that let us capture our adventures and experiences without interfering in them, letting us seek the best images to become our memories. Even when cameras weighed 10 pounds, they were still marketed for their mobility and durability. GoPro, as an ultralight panoramic HD camera designed to be worn rather than pointed and operated, is in some ways the logical culmination of this desire. Its definitive feature is the ease with which it can be strapped on or mounted to surfboards, dashboards or foreheads to permit constant and thought-free filming.

But as the means for sharing images has expanded along with the means for capturing them, the expectations we have of photography have shifted. Photos no longer merely document future memories; they define present lifestyles. They circulate and establish personal identity. Engagement with an experience and documenting it are no longer competing impulses, if they ever were — instead they are simultaneous and mutually constitutive.

We navigate the world not only in first person, observing ourselves observe, but online as well, as part of networks. The platforms through which we view media, and whom we are connected with on them, matter as much to the meaning we make as what we see. Cameras like GoPro are accommodating that shift, balancing the first-person view with the voyeuristic. The way GoPro situates us in the world as always at the center but always amid an implied crowd suits the massively multiplayer online game that life has become.

To understand GoPro’s success — Forbes noted that GoPro made up 21.5% of the U.S. digital-camcorder market in 2013 and that it has at least doubled its sales every year since launching in 2004 — we have to look at the photographic ecosystem from which it sprang. In 2012 Facebook users were uploading more than 300 million photos every day, and YouTube claims that every minute, 100 hours of video footage is uploaded to its site. Pew found that the number of people posting videos online doubled from 2009 to 2013, with 35% of them posting in hopes of going viral.

The more this volume of online images increases, the more they must compete against one another for attention. To stand out among the flood of images pouring from our screens, images increasingly must have clear social stakes. When every locale is photographed and shared countless times, who is in the frame, who else has seen it, and what it suggests about their unique experience begin to matter as much as the image itself.

GoPro is designed with this shift in mind. The company’s founder, Nick Woodman, recognized that consumers were buying fewer point-and-shoot cameras not only because smartphones had made them redundant but because ubiquitous connectivity also made sharing images as important as taking them. Social readiness has become paramount, and GoPro’s emphasis on a panoramic first-person image addresses that need, filling a niche neglected by ordinary smartphones, which still put a discrete frame around experience before mediating it. GoPro’s streaming self-documentation, on the other hand, suits what Nathan Jurgenson has called the Facebook Eye, the perspective on experience that sees things in terms of their social-media shareability first, and not as an afterthought. It proposes that we can best enjoy our media saturation and self-documentation when we live our “real lives” to the fullest, in a networked world where those real lives include social media. When the camera falls into the background, we can both document ourselves constantly and be fully present adventurers.

GoPro generates images that can circulate as avatars of the person who took them, foregrounding the photographer over the photograph or the frame and emphasizing the photographers’ actions — though not necessarily their aesthetics. Images taken with GoPro always reference GoPro, much as Instagram images reference themselves through their distinctive filters. The self-referencing in GoPro videos establishes them as a particular genre, one whose formal properties cue audiences to pay attention to who took them without being distracted by any idiosyncratic artistic sensibility. GoPro documents performances more than environments, physical skill rather than scene. No matter what you are filming, GoPro interpellates you as the subject.

Since the camera’s wearable design promotes passive documentation, it presumably pushes users toward pursuing more interesting activities to document, even if it’s likely a largely aspirational purchase for most later adopters. As Nick Paumgarten argued in a recent New Yorker article, GoPro is essentially a lifestyle company more than a camera company. It relies on early adopters to live up to its marketing promises, at least enough to convince the larger market of nonextreme consumers that it’s possible that we too could “be a hero” and “go Pro.” Their exploits make GoPro seem an opportune investment for the once-a-year vacation surfer who wants to ensure that the evidence of their own occasional daring will stand out. It’s a consumer-aggrandizing ad approach perfected by the likes of Mountain Dew and Monster Energy. Only in GoPro’s case, the product actually creates the marketing materials.

But for GoPro to sustain its meteoric rise, the company cannot remain relegated to extreme sports. To continue to grow, it will have to expand the meaning of heroism. The cameras won’t stay on surfboards and mountain bikes for long. The company is already featuring family footage, concerts, and more on YouTube, pushing its lenses into the everyday. Woodman himself has filmed the birth of his baby with a GoPro strapped to his head.

This expansion of GoPro into ordinary rather than extreme life may help the company’s growth potential, but it also threatens the exemption it has so far secured in the debates raging over visibility and surveillance. As GoPro tries to increase our desire to document, highlighting the self-defining pleasures of being seen, revelations about rampant online surveillance quell it. We are increasingly unsure of when we are being documented, by whom, and for what purpose, and adding more cameras to the public sphere threatens to make this worse. The NSA leaks have revealed that the most pervasive intrusions on our privacy came not from new spy tools but rather through simple access to the data we passively generated online and on digital devices. We spied on ourselves. And as the furor over Google “Glassholes” suggests, it is still easier to become outraged over other people filming us than at our own inadvertent collusion with the surveillance regime.

While Glass, which overtly trains an intrusive camera on others, remains the subject of scorn, GoPro largely escapes this by keeping the focus on capturing the perspective and extreme adventures of its owner. Browsing through GoPro videos, one rarely sees images of bystanders: the most popular videos are shot from a surfboard, inside a racecar, or from a plane, with the mount focusing the lens squarely on the subject. With its “Be a Hero” tagline, GoPro relies on rugged adventurers and extreme sports to allay our privacy concerns, as if their daring would carry over and disinhibit other potential users. The new heroes of the social-surveillance age are those who would dare to be watched at all times.

Link: Preemptive Personalization

Nicholas Carr’s forthcoming The Glass Cage, about the ethical dangers of automation, inspired me to read George Orwell’s The Road to Wigan Pier (1937), which contains a lengthy tirade against the notion of progress as efficiency and convenience. Orwell declares that “the tendency of mechanical progress is to make life safe and soft.” It assumes that a human being is “a kind of walking stomach” that is interested only in passive pleasure rather than work: “whichever way you turn there will be some machine cutting you off from the chance of working — that is, of living.” Convenience is social control, and work, for Orwell at least, is the struggle to experience a singular life. But the human addiction to machine-driven innovation and automation, he predicts, fueled apparently by a fiendish inertia that demands progress for progress’s sake, will inevitably lead to total disempowerment and dematerialization:

There is really no reason why a human being should do more than eat, drink, sleep, breathe, and procreate; everything else could be done for him by machinery. Therefore the logical end of mechanical progress is to reduce the human being to something resembling a brain in a bottle.

Basically, he sees the Singularity coming and he despises it as a “frightful subhuman depth of softness and helplessness.” And there is no opting-out:

In a healthy world there would be no demand for tinned foods, aspirins, gramophones, gaspipe chairs, machine guns, daily newspapers, telephones, motor-cars, etc., etc.; and on the other hand there would be a constant demand for the things the machine cannot produce. But meanwhile the machine is here, and its corrupting effects are almost irresistible. One inveighs against it, but one goes on using it.

This “brain in a bottle” vision of our automated future, Orwell surmises, is why people of the 1930s were wary of socialism, which he regards as being intimately connected ideologically with the theme of inevitable progress. That connection has of course been severed; socialism tends to be linked with nostalgia, and tech’s “thought leaders” tend to champion libertarianism and cut-throat competitive practices abetted by technologically induced asymmetries, all in the name of “innovation” and “disruption.”

Oddly, Orwell argues that the profit motive is an impediment to technological development:

Given a mechanical civilization the process of invention and improvement will always continue, but the tendency of capitalism is to slow it down, because under capitalism any invention which does not promise fairly immediate profits is neglected; some, indeed, which threaten to reduce profits are suppressed almost as ruthlessly as the flexible glass mentioned by Petronius … Establish Socialism—remove the profit principle—and the inventor will have a free hand. The mechanization of the world, already rapid enough, would be or at any rate could be enormously accelerated.

Orwell seems to imagine a world with a fixed amount of needs, which technology will allow to be fulfilled at the expense of less labor; he imagines technology will make useful things more durable rather than making the utility we seek more ephemeral. But technology, as directed by the profit motive, makes obsolescence into a form of innovation; it generates new wants and structures disposability as convenience rather than waste. Why maintain and repair something when you can throw it away and shop for a replacement — especially when shopping is accepted to be a fun leisure activity?

While Orwell is somewhat extreme in his romanticizing of hard work — he sounds downright reactionary in his contempt for “laziness,” and can’t conceive of something as banal as shopping as a rewarding, self-defining effort for anyone — people today seem anything but wary about technological convenience, even though it is always paired with intensified surveillance. (The bathetic coverage of Apple’s marketing events seems to reflect an almost desperate enthusiasm for whatever “magical” new efficiencies the company will offer.) Socialism would be far more popular if people really thought it was about making life easier.

Orwell associated automation with socialism’s utopian dreams, and thought the flabbiness of those dreams would drive people to fascism. Looking back, it seems more plausible to argue that automation has become a kind of gilded fascism that justifies itself and its barbarities with the efficiencies machines enable. Though we sometimes still complain about machines deskilling us, we have nonetheless embraced once unimaginable forms of automation, permitting it to be extended into how we form a conception of ourselves, how we come to want anything at all.

One might make this case for automation’s insidious infiltration into our lives: First, technology deskilled work, making us machine monitors rather than craft workers; then it deskilled consumption, prompting us to prefer “tinned food” to some presumably more organic alternative. Now, with the tools of data collection and algorithmic processing, it deskills self-reflection and the formation of desire. We get preemptive personalization, as when sites like Facebook and Google customize your results without your input. “Personalization” gets stretched to the point where it leaves out the will of the actual person involved. How convenient! So glad that designers and engineers are making it easier for me to want things without having to make the effort of actually thinking to want them. Desire is hard.

Preemptive personalization is seductive only because of the pressure we experience to make our identities unique — to win the game of having a self by being “more original” than other people. That goal stems in part from the social media battlefield, which itself reflects a neoliberal emphasis on entrepreneurializing the self, regarding oneself as leading an enterprise, not living a life. If “becoming yourself” was ever a countercultural goal, it isn’t anymore. (That’s why Gap can build an ad campaign around the proposition “Dress Normal.” Trying to be distinctive has lost its distinction.) It’s mandatory that we have a robust self to express, that we create value by innovating on that front. Otherwise we run the risk of becoming economic leftovers.

Yet becoming “more unique” is an impossible, nonsensical goal for self-actualization: self-knowledge probably involves coming to terms with how generic our wants and needs and thoughts are, and how dependent they are on the social groups within which we come to know ourselves, as opposed to some procedure of uncovering their pure idiosyncrasy. The idea that self-becoming or self-knowledge is something we’d want to make more “convenient” seems counterproductive. The effort to be a self is its own end. That is what Orwell seemed to think: “The tendency of mechanical progress, then, is to frustrate the human need for effort and creation.”

But since Orwell’s time, the mechanization process has increasingly become a mediatization/digitization process that can be rationalized as an expansion of humans’ ability to create and express themselves. Technological development has emphasized customization and personalization, allowing us to use consumer goods as language above and beyond their mere functionality. (I’ll take my iWatch in matte gray please!) Social media are the farthest iteration of this, a personalized infosphere in which our interaction shapes the reality we see and our voice can directly reach potentially vast audiences.

But this seeming expansion of our capacity to express ourselves is in the service of data capture and surveillance; we embed ourselves in communication platforms that allow our expression to be used to curtail our horizons. Preemptive personalization operates under the presumption that we are eager to express ourselves only so that we may be done with the trouble of it once and for all, once what we would or should say can be automated and we can simply reap the social benefits of our automatic speech.

Social media trap us in a tautological loop, in which we express ourselves to be ourselves to express ourselves, trying to claim better attention shares from the people we are ostensibly “connecting” with. Once we are trying to “win” the game of selfhood on the scoreboard of attention, any pretense of expressing an “inner truth” (which probably doesn’t exist anyway) about ourselves becomes lost in the rush to churn out and absorb content. It doesn’t matter what we say, or if we came up with it, when all that matters is the level of response. In this system, we don’t express our true self in search of attention and confirmation; instead attention posits the true self as a node in a dynamic network, and the more connections that run through it, the more complete and “expressed” that self is.

When we start to measure the self, concretely, in quantified attention and the density of network connectivity rather than in terms of the nebulous concept of “effort,” it begins to make sense to accept algorithmic personalization, which reports the self to us as something we can consume. The algorithm takes the data and spits out a statistically unique self for us, one that lets us consume our uniqueness as a kind of one-of-a-kind delicacy. It masks from us the way our direct relations with other people shape who we are, preserving the fantasy that we are sui generis. It protects us not only from the work of being somebody — all that tiring self-generated desire — but more insidiously from the emotion work of acknowledging and respecting the ways our actions have consequences for other people at very fundamental levels of their being. Automated selfhood frees us from recognizing and coping with our interdependency, outsourcing it to an algorithm.

The point of “being unique” has broadened; it is a consumer pleasure as well as a pseudo-accomplishment of self-actualization. So all at once, “uniqueness” (1) motivates content production for social-media platforms, (2) excuses intensified surveillance, and (3) allows filter bubbles to be imposed as a kind of flattery (which ultimately isolates us and prevents self-knowledge, or knowledge of our social relations). Uniqueness is as much a mechanism of control as an apparent expression of our distinctiveness. No wonder it’s been automated.

Link: The New Luddites

Very few of us can be sure that our jobs will not, in the near future, be done by machines. We know about cars built by robots, cashpoints replacing bank tellers, ticket dispensers replacing train staff, self-service checkouts replacing supermarket staff, telephone operators replaced by “call trees”, and so on. But this is small stuff compared with what might happen next.

Nursing may be done by robots, delivery men replaced by drones, GPs replaced by artificially “intelligent” diagnosers and health-sensing skin patches, back-room grunt work in law offices done by clerical automatons and remote teaching conducted by computers. In fact, it is quite hard to think of a job that cannot be partly or fully automated. And technology is a classless wrecking ball – the old blue-collar jobs have been disappearing for years; now they are being followed by white-collar ones.

Ah, you may say, but human beings will always be better. This misses the point. It does not matter whether the new machines never achieve full human-like consciousness, or even real intelligence, they can almost certainly achieve just enough to do your job – not as well as you, perhaps, but much, much more cheaply. To modernise John Ruskin, “There is hardly anything in the world that some robot cannot make a little worse and sell a little cheaper, and the people who consider price only are this robot’s lawful prey.”

Inevitably, there will be social and political friction. The onset has been signalled by skirmishes such as the London Underground strikes over ticket-office staff redundancies caused by machine-readable Oyster cards, and by the rage of licensed taxi drivers at the arrival of online unlicensed car booking services such as Uber, Lyft and Sidecar.

This resentment is intensified by rising social inequality. Everybody now knows that neoliberalism did not deliver the promised “trickle-down” effect; rather, it delivered trickle-up, because, even since the recession began, almost all the fruits of growth have gone to the rich. Working- and middle-class incomes have flatlined or fallen. Now, it seems, the wealthy cyber-elites are creating machines to put the rest of us out of work entirely.

The effect of this is to undermine the central argument of those who hype the benefits of job replacement by machines. They say that new and better jobs will be created. They say this was always true in the past, so it will be true now. (This is the precise correlative of the neoliberals’ “rising tide floats all boats” argument.) But people now doubt the “new and better jobs” line trotted out – or barked – by the prophets of robotisation. The new jobs, if there are any, will more probably be serf-like attenders to the needs of the machine, burger-flippers to the robot classes.

Nevertheless, this future, too, is being sold in neoliberal terms. “I am sure,” wrote Mitch Free (sic) in a commentary for Forbes on 11 June, “it is really hard [to] see when your pay check is being directly impacted but the reality to any market disruption is that the market wants the new technology or business model more than they want what you offer, otherwise it would not get off the ground. The market always wins, you cannot stop it.”

Free was writing in response to what probably seemed to him a completely absurd development, a nightmarish impossibility – the return of Luddism. “Luddite” has, in the past few decades, been such a routine term of abuse for anybody questioning the march of the machines (I get it all the time) that most people assume that, like “fool”, “idiot” or “prat”, it can only ever be abusive. But, in truth, Luddism has always been proudly embraced by the few and, thanks to the present climate of machine mania and stagnating incomes, it is beginning to make a new kind of sense. From the angry Parisian taxi drivers who vandalised a car belonging to an Uber driver to a Luddite-sympathetic column by the Nobel laureate Paul Krugman in the New York Times, Luddism in practice and in theory is back on the streets.

Luddism derives its name from Ned Ludd, who is said to have smashed two “stocking frames” – knitting machines – in a fit of rage in 1779, but who may have been a fictional character. It became a movement, with Ludd as its Robin Hood, between 1811 and 1817 when English textile workers were threatened with unemployment by new technology, which the Luddites defined as “machinery hurtful to Commonality”. Mills were burned, machinery was smashed and the army was mobilised. At one time, according to Eric Hobsbawm, there were more soldiers fighting the Luddites than were fighting Napoleon in Spain. Parliament passed a bill making machine-smashing a capital offence, a move opposed by Byron, who wrote a song so seditious that it was not published until after his death: “… we/Will die fighting, or live free,/And down with all kings but King Ludd!”

Once the Luddites had been suppressed, the Industrial Revolution resumed its course and, over the ensuing two centuries, proved the most effective wealth-creating force ever devised by man. So it is easy to say the authorities were on the right side of history and the Luddites on the wrong one. But note that this is based on the assumption that individual sacrifice in the present – in the form of lost jobs and crafts – is necessary for the mechanised future. Even if this were true, there is a dangerous whiff of totalitarianism in the assumption.

Neo-Luddism began to emerge in the postwar period. First, the power of nuclear weapons made it clear to everybody that our machines could now put everybody out of work for ever by the simple expedient of killing them and, second, in the 1980s and 1990s it became apparent that new computer technologies had the power to change our lives completely.

Thomas Pynchon, in a brilliant essay for the New York Times in 1984 – he noted the resonance of the year – responded to the first new threat and, through literature, revitalised the idea of the machine as enemy. “So, in the science fiction of the Atomic Age and the cold war, we see the Luddite impulse to deny the machine taking a different direction. The hardware angle got de-emphasised in favour of more humanistic concerns – exotic cultural evolutions and social scenarios, paradoxes and games with space/time, wild philosophical questions – most of it sharing, as the critical literature has amply discussed, a definition of ‘human’ as particularly distinguished from ‘machine’.”

In 1992, Neil Postman, in his book Technopoly, rehabilitated the Luddites in response to the threat from computers: “The term ‘Luddite’ has come to mean an almost childish and certainly naive opposition to technology. But the historical Luddites were neither childish nor naive. They were people trying desperately to preserve whatever rights, privileges, laws and customs had given them justice in the older world-view.”

Underpinning such thoughts was the fear that there was a malign convergence – perhaps even a conspiracy – at work. In 1961, even President Eisenhower warned of the anti-democratic power of the “military-industrial complex”. In 1967 Lewis Mumford spoke presciently of the possibility of a “mega-machine” that would result from “the convergence of science, technics and political power”. Pynchon picked up the theme: “If our world survives, the next great challenge to watch out for will come – you heard it here first – when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy.”

The possibility is with us still in Silicon Valley’s earnest faith in the Singularity – the moment, possibly to come in 2045, when we build our last machine, a super-intelligent computer that will solve all our problems and enslave or kill or save us. Such things are true only to the extent to which they are believed – and, in the Valley, this is believed, widely.

Environmentalists were obvious allies of neo-Luddism – adding global warming as a third threat to the list – and globalism, with its tendency to destroy distinctively local and cherished ways of life, was an obvious enemy. In recent decades, writers such as Chellis Glendinning, Langdon Winner and Jerry Mander have elevated the entire package into a comprehensive rhetoric of dissent from the direction in which the world is going. Winner wrote of Luddism as an “epistemological technology”. He added: “The method of carefully and deliberately dismantling technologies, epistemological Luddism, if you will, is one way of recovering the buried substance upon which our civilisation rests. Once unearthed, that substance could again be scrutinised, criticised, and judged.”

It was all very exciting, but then another academic rained on all their parades. His name was Ted Kaczynski, although he is more widely known as the Unabomber. In the name of his own brand of neo-Luddism, Kaczynski’s bombs killed three people and injured many more in a campaign that ran from 1978-95. His 1995 manifesto, “Industrial Society and Its Future”, said: “The Industrial Revolution and its consequences have been a disaster for the human race,” and called for a global revolution against the conformity imposed by technology.

The lesson of the Unabomber was that radical dissent can become a form of psychosis and, in doing so, undermine the dissenters’ legitimate arguments. It is an old lesson and it is seldom learned. The British Dark Mountain Project (dark-mountain.net), for instance, is “a network of writers, artists and thinkers who have stopped believing the stories our civilisation tells itself”. They advocate “uncivilisation” in writing and art – an attempt “to stand outside the human bubble and see us as we are: highly evolved apes with an array of talents and abilities which we are unleashing without sufficient thought, control, compassion or intelligence”. This may be true, but uncivilising ourselves to express this truth threatens to create many more corpses than ever dreamed of by even the Unabomber.

Obviously, if neo-Luddism is conceived of in psychotic or apocalyptic terms, it is of no use to anybody and could prove very dangerous. But if it is conceived of as a critical engagement with technology, it could be useful and essential. So far, this critical engagement has been limited for two reasons. First, there is the belief – it is actually a superstition – in progress as an inevitable and benign outcome of free-market economics. Second, there is the extraordinary power of the technology companies to hypnotise us with their gadgets. Since 1997 the first belief has found justification in a management theory that bizarrely, upon closer examination, turns out to be the mirror image of Luddism. That was the year in which Clayton Christensen published The Innovator’s Dilemma, judged by the Economist to be one of the most important business books ever written. Christensen launched the craze for “disruption”. Many other books followed and many management courses were infected. Jill Lepore reported in the New Yorker in June that “this fall, the University of Southern California is opening a new program: ‘The degree is in disruption,’ the university announced.” And back at Forbes it is announced with glee that we have gone beyond disruptive innovation into a new phase of “devastating innovation”.

It is all, as Lepore shows in her article, nonsense. Christensen’s idea was simply that innovation by established companies to satisfy customers would be undermined by the disruptive innovation of market newcomers. It was a new version of Henry Ford and Steve Jobs’s view that it was pointless asking customers what they want; the point was to show them what they wanted. It was nonsense because, Lepore says, it was only true for a few, carefully chosen case histories over very short time frames. The point was made even better by Christensen himself when, in 2007, he made the confident prediction that Apple’s new iPhone would fail.

Nevertheless, disruption still grips the business imagination, perhaps because it sounds so exciting. In Luddism you smash the employer’s machines; in disruption theory you smash the competitor’s. The extremity of disruptive theory provides an accidental justification for extreme Luddism. Yet still, technocratic propaganda routinely uses the vocabulary of disruption theory.


Link: How Social Media Silences Debate

The Internet might be a useful tool for activists and organizers, in episodes from the Arab Spring to the Ice Bucket Challenge. But over all, it has diminished rather than enhanced political participation, according to new data.

Social media, like Twitter and Facebook, has the effect of tamping down diversity of opinion and stifling debate about public affairs. It makes people less likely to voice opinions, particularly when they think their views differ from those of their friends, according to a report published Tuesday by researchers at Pew Research Center and Rutgers University.

The researchers also found that those who use social media regularly are more reluctant to express dissenting views in the offline world.

The Internet, it seems, is contributing to the polarization of America, as people surround themselves with people who think like them and hesitate to say anything different. Internet companies magnify the effect, by tweaking their algorithms to show us more content from people who are similar to us.


“People who use social media are finding new ways to engage politically, but there’s a big difference between political participation and deliberation,” said Keith N. Hampton, an associate professor of communication at Rutgers and an author of the study. “People are less likely to express opinions and to be exposed to the other side, and that’s exposure we’d like to see in a democracy.”

The researchers set out to investigate the effect of the Internet on the so-called spiral of silence, a theory that people are less likely to express their views if they believe they differ from those of their friends, family and colleagues. The Internet, many people thought, would do away with that notion because it connects more heterogeneous people and gives even minority voices a bullhorn.

Instead, the researchers found, the Internet reflects the offline world, where people have always gravitated toward like-minded friends and shied away from expressing divergent opinions. (There is a reason for the old rule to avoid religion or politics at the dinner table.)

And in some ways, the Internet has deepened that divide. It makes it easy for people to read only news and opinions from people they agree with. In many cases, people don’t even make that choice for themselves. Last week, Twitter said it would begin showing people tweets even from people they don’t follow if enough other people they follow favorite them. On Monday, Facebook said it would hide stories with certain types of headlines in the news feed. Meanwhile, harassment from online bullies who attack people who express opinions has become a vexing problem for social media sites and their users.
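To make the Twitter rule mentioned above concrete, here is a minimal sketch of a threshold filter along those lines. It is purely illustrative: the real ranking system is not public, and the threshold, field names, and data shapes below are invented for the example.

```python
# Illustrative sketch only: Twitter's actual ranking logic is not public.
# This toy filter surfaces a tweet from a non-followed account once enough
# of the accounts you follow have favorited it. The threshold is invented.

def tweets_to_inject(candidate_tweets, following, threshold=3):
    """Return tweets from non-followed authors that enough followed accounts liked."""
    injected = []
    for tweet in candidate_tweets:
        if tweet["author"] in following:
            continue  # already appears in the timeline the old way
        likes_from_followed = len(set(tweet["favorited_by"]) & following)
        if likes_from_followed >= threshold:
            injected.append(tweet)
    return injected

following = {"alice", "bob", "carol", "dave"}
candidates = [
    {"author": "stranger1", "favorited_by": ["alice", "bob", "carol"]},
    {"author": "stranger2", "favorited_by": ["alice"]},
]
print(tweets_to_inject(candidates, following))  # only stranger1's tweet is injected
```

The point of the sketch is simply that a handful of social signals, not your own choices, decides what crosses your path.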

Humans are acutely attuned to the approval of others, constantly reading cues to judge whether people agree with them, the researchers said. Active social media users get many more of these cues — like status updates, news stories people choose to share and photos of how they spend their days — and so they become less likely to speak up.

For the study, researchers asked people about the revelations of National Security Agency surveillance by the whistle-blower Edward Snowden, a topic on which Americans were almost evenly divided.

Most people surveyed said they would be willing to discuss government surveillance at dinner with family or friends, at a community meeting or at work. The only two settings where most people said they would not discuss it were Facebook and Twitter. And people who use Facebook a few times a day were half as likely as others to say they would voice an opinion about it in a real-world conversation with friends.

Yet if Facebook users thought their Facebook friends agreed with their position on the issue, they were 1.9 times more likely to join a discussion there. And people with fervent views, either in favor of or against government spying, were 2.4 times more likely to say they would join a conversation about it on Facebook. Interestingly, those with less education were more likely to speak up on Facebook, while those with more education were more likely to be silent on Facebook yet express their opinion in a group of family or friends.

The study also found that for all the discussion of social media becoming the place where people find and discuss news, most people said they got information about the N.S.A. revelations from TV and radio, while Facebook and Twitter were the least likely to be news sources.

These findings are limited because the researchers studied a single news event. But consider another recent controversial public affairs story that people discussed online — the protests in Ferguson, Mo. Of the posts you read on Twitter and Facebook from people you know, how many were in line with your point of view and how many were divergent, and how likely were you to speak up?

Link: The Conservatism of Emoji

Emoji offer you new possibilities for digital expression, but only if you’re speaking their language.

If you smile through your fear and sorrow
Smile and maybe tomorrow
You’ll see the sun come shining through for you
—Nat King Cole, “Smile”

The world will soon have its first emoji-only social network: Emoj.li. This news, announced in late June, was met with a combination of scorn and amusement from the tech press. It was seen as another entry in the gimmick-social-network category, to be filed alongside Yo. Yet emoji have a rich and complex history behind the campy shtick: From the rise of the smiley in the second half of the 20th century, emoji emerged out of corporate strategies, copyright claims, and standards disputes to become a ubiquitous digital shorthand. And in their own, highly compressed lexicon, emoji are trying to tell us something about the nature of feelings, of labor, and the new horizons of capitalism. They are the signs of our times.

Innocuous and omnipresent, emoji are the social lubricant smoothing the rough edges of our digital lives: They underscore tone, introduce humor, and give us a quick way to bring personality into otherwise monochrome spaces. All this computerized work is, according to Michael Hardt, one face of what he terms immaterial labor, or “labor that produces an immaterial good, such as a service, knowledge, or communication.” “We increasingly think like computers,” he writes, but “the other face of immaterial labor is the affective labor of human conduct and interaction” — all those fast-food greetings, the casual banter with the Uber driver, the flight attendant’s smile, the nurse patting your arm as the needle goes in. Affective labor is another term for what sociologist Arlie Russell Hochschild calls “emotional labor,” the commercialization of feelings that smooth our social interactions on a daily basis. What if we could integrate our understanding of these two faces of immaterial labor through the image of yet another face?

Emoji as Historical Artifacts

The smiley face is now so endemic to American culture that it’s easy to forget it is an invented artifact. The 1963 merger of the State Mutual Life Assurance Company of Worcester, Mass., and Ohio’s Guarantee Mutual Company would be unremembered were it not for one thing: :), or something very much like it. An advertising man named Harvey Ball doodled a smiling yellow face at the behest of State Mutual’s management, who were in need of an internal PR campaign to improve morale after the turmoil and job losses prompted by the merger. The higher-ups loved it. “The power of a smile is unlimited,” proclaimed The Mutualite, the company’s internal magazine, “a smile is contagious…vital to business associations and to society.” Employees were encouraged to smile while talking to clients on the phone and filling out insurance forms. Ball was paid $240 for the campaign, including $45 for the rights to his smiley-face image.

Gradually, the smiley became a pop-culture icon, distributed on buttons and T-shirts, beloved of acid-house record producers. Its first recognized digital instantiation came via Carnegie Mellon’s Scott E. Fahlman, who typed :-) on a university bulletin board in 1982 in the midst of talking about something else entirely.

Nabokov, Fahlman remembered, had called for such a symbol in an interview with the New York Times back in 1969:

Q: How do you rank yourself among writers (living) and of the immediate past?

Nabokov: I often think there should exist a special typographical sign for a smile — some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question.

But it took 15 years after Fahlman’s innovation for emoji to appear — and they went big in Japan. Shigetaka Kurita, a designer for Japanese telecom carrier NTT Docomo, was instructed to create contextual icons for the company as a way to define its brand and secure customer loyalty. He devised a character set intended to bring new emotional clarity to text messages. Without emoji, Kurita observed, “you don’t know what’s in the writer’s head.” When Apple introduced the iPhone to Japan in 2008, users demanded a way to use emoji on the new platform. So emoji were incorporated into Unicode, the computer industry’s standard for characters administered by the Unicode Consortium. At that moment, emoji became interoperable on devices around the world, and Ball’s smiley face had been reified at the level of code.
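That reification is literal: each emoji is an ordinary Unicode code point, and every compliant platform supplies its own artwork for the same character. A quick Python illustration:

```python
import unicodedata

# Each emoji is an ordinary Unicode code point; the platform supplies the artwork.
for char in ["☺", "😀", "💩"]:
    print(char, f"U+{ord(char):04X}", unicodedata.name(char))

# Output (rendering aside):
# ☺ U+263A WHITE SMILING FACE
# 😀 U+1F600 GRINNING FACE
# 💩 U+1F4A9 PILE OF POO
```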

Emoji as Technics

By some accounts, there are now more than 880 extant emoji that have been accepted by the Consortium and consolidated in Unicode. Control over emoji has become highly centralized, yet they make up a language with considerable creative potential.

With only 144 pixels each, emoji must compress a face or object into the most schematic configuration possible. Emoji, like other skeuomorphs — linoleum that looks like wood grain, the trash bin on your desktop, the shutter click sound on a digital camera — are what anthropologist Nicholas Gessler calls “material metaphors” that “help us map the new onto an existing cognitive structure.” That skeuomorphism allows for particular types of inventiveness and irony. So the emoji 💩 might act as a pictogram (“I stepped in a pile of 💩”), an ideogram (“that movie was 💩”), an emoticon (“I feel 💩”), or a phatic expression (“I’m tired.” “💩”). That’s some powerful contextual 💩.

Yet this flexibility has a broader business purpose, one that goes hand-in-hand with the symbols’ commercial roots: companies have made emoji proprietary whenever it was possible to do so. NTT Docomo was unable to secure copyright on its original character set, and competitors J-Phone and DDI Cellular Group soon produced rival emoji character sets, which were made available exclusively on their competing software platforms. Emoji were a practical and experiential icon of brand difference; their daily use drove the uptake of a particular platform, and by extension helped establish particular technical standards across the industry. But the popularity of emoji meant they were hard to contain: user complaints about the illegibility of a competitor’s emoji on their phones meant the telcos had to give up on making money off emoji directly. It was the necessity born of linguistic practice over time that prompted these grudging steps towards a technical and business consensus.

Hardt argues that affect is perennially more powerful than the forces attempting to harness it, and it would be tempting to think of emoji in this context. But emoji remain a restricted, top-down language, controlled by the Unicode Consortium and the technical platforms that display them. Media theorist Laura Marks uses the term lame infinity to describe the phenomenon where digital technology seems infinite but is used to produce a dispiriting kind of sameness. Emoji, as “a perfectly normcore system of emotion: a taxonomy of feeling in a grid menu of ideograms” fit that description. While emoji offer creative expression within their own terms, they also may confine us to a type of communicative monoculture. What’s more, emoji also hold out the promise of emotional standardization in the service of data analysis: If a feeling can be summed up in a symbol, then theoretically that feeling can be more easily tracked, categorized, and counted.

Emoji as Data Culture

We love emoji, and emoji depict our love, while also transforming our states of feeling into new forms of big data. Many platforms and media companies are extracting and analyzing emoji as a new source of insight into their customers’ emotions and desires. In the spring of 2013, Facebook introduced the ability to choose from a variety of emoji-like moods as part of a status update. Users can register that they feel happy, sad, frustrated, or a variety of other emotions. And with the recent uproar over the Facebook emotional-contagion study, it’s increasingly clear that quantifying, tracking and manipulating emotion is an important part of the company’s business model. “By selecting your current activity instead of merely writing it out, you structure data for Facebook,” TechCrunch observed when the feature was rolled out. And sentiment-analysis firms like Lexalytics are working to incorporate emoji into their business models.
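As a toy illustration of why a symbol is easier to count than a sentence, here is a sketch of the kind of crude emoji tally a sentiment pipeline might begin with. The polarity map and scoring are invented for the example; they are not Facebook’s or Lexalytics’ actual methods.

```python
from collections import Counter

# Invented polarity map for a handful of emoji; real sentiment systems are far richer.
POLARITY = {"😀": 1, "❤": 1, "😢": -1, "😡": -1}

def emoji_sentiment(posts):
    """Tally emoji across posts and return the counts plus a crude net score."""
    counts = Counter(ch for post in posts for ch in post if ch in POLARITY)
    score = sum(POLARITY[ch] * n for ch, n in counts.items())
    return counts, score

posts = ["had the best day 😀😀", "ugh, traffic 😡", "miss you 😢 ❤"]
print(emoji_sentiment(posts))
# (Counter({'😀': 2, '😡': 1, '😢': 1, '❤': 1}), 1)
```

A feeling typed as prose has to be parsed; a feeling clicked as a symbol arrives pre-categorized.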

In many ways, emoji offer us a deeply restricted world. This character set is valorized for its creative uses — such as Emoji Dick, Fred Benenson’s crowdsourced, book-length rewriting of Melville’s Moby Dick as emoji, which was accepted into the Library of Congress. But it is also constrained at the level of social and political possibility. Emoji are terrible at depicting diversity: on Apple’s iOS platform, for example, there are many white faces, but only two seem Asian and none are black. Responding to public outcry, Apple now says it is “working closely with the Unicode Consortium in an effort to update the standard.”

Emoji raise the question: What habits of daily life do they promote, from the painted nails to the martini glasses? What behavior do they normalize? By giving us a visual vocabulary of the digital everyday, emoji offer an example of what Foucault termed “anatomo-politics”: the process by which “the production of collective subjectivities, sociality, and society itself” is worked through at the level of individual practices and habits. And in a broad sense, what emoji are trying to sell us, if not happiness, is a kind of quiescence. Katy Perry’s “Roar” video from 2013, for example, gives us emoji transliterations of the song’s lyrics. But it is also an eerily stark commentary on the basic anatomo-political maintenance of daily life – sleeping, eating, bathing, grooming, charging our devices. The habitual maintenance depicted in the video goes hand in hand with the “basic” emoji character set.

In a similar vein, the unofficial music video for Beyoncé’s “Drunk in Love” has brilliant, quick-fire emoji translation using characters from Apple’s proprietary font in front of a plain white background. The genius of the emoji “Drunk in Love” lies in how it perfectly conjures Beyoncé’s celebrity persona, and the song’s sexualized glamour, out of the emoji character set. Emoji can represent cocktails, paparazzo attacks, and other trappings of Western consumer and celebrity culture with ease. More complicated matters? There’s no emoji for that.

Emoji as Soft Control

“This face is a symbol of capitalism,” declared Murray Spain to the BBC. Spain was one of the entrepreneurs who, in the early 1970s, placed a copyright on the smiley face with the phrase “Have a nice day.” “Our intent was a capitalistic intent…our only desire was to make a buck.” The historical line connecting the smiley face to emoji is crooked but revealing, featuring as it does this same sentiment repeated again and again: the road to the bottom line runs through the instrumentalization and commodification of emotion.

Now with many Silicon Valley technology corporations adding Chief Happiness Officers, the impulse to obey the smiley has become supercharged. Emoji, like the original smiley, can be a form of “cruel optimism,” which affect theorist Lauren Berlant defines as “when the object/scene that ignites a sense of possibility actually makes it impossible to attain.” Emoji help us cope emotionally with the technological platforms and economic systems operating far outside of our control, but their creative potential is ultimately closed off. They are controlled from the top down, from the standards bodies to the hard-coded limits on what your phone will read.

Emoji offer us a means of communicating that we didn’t have before: they humanize the platforms we inhabit. As such, they are a rear-guard action to enable sociality in digital networks, yet are also agents in turning emotions into economic value. As a blip in the continuing evolution of platform languages, emoji may be remembered as ultimately conservative: digital companions whose bright colors and white faces had nothing much to say about our political impasses.

Link: Adam Curtis / Now Then: The Hidden Systems That Have Frozen Time And Stop Us Changing The World

If you are an American politician today, as well as an entourage you also have a new, modern addition. You have what’s called a “digital tracker”. They follow you everywhere with a high-definition video camera, and they are employed by the people who want to destroy your political career.

It’s called “opposition research” and the aim is to constantly record everything you say and do. The files are sent back every night to large anonymous offices in Washington where dozens of researchers systematically compare everything you said today with what you said in the past.

They are looking for contradictions. And if they find one - they feed it, and the video evidence, to the media.

On one hand it’s old politics - digging up the dirt on your opponent. But it is also part of something new - and much bigger than just politics. Throughout the western world new systems have risen up whose job is to constantly record and monitor the present - and then compare that to the recorded past. The aim is to discover patterns, coincidences and correlations, and from that find ways of stopping change. Keeping things the same.

We can’t properly see what is happening because these systems are operating in very different areas - from consumerism, to the management of your own body, to predicting future crimes, and even trying to stabilise the global financial system - as well as in politics.

But taken together the cumulative effect is that of a giant refrigerator that freezes us, and those who govern us, into a state of immobility, perpetually repeating the past and terrified of change and the future.

To bring this system into focus I want to tell the history of its rise, and its strange roots - the bastard love-child of snooping and high-level mathematical theory.

It begins with the grubby figure of the early 1960s in Britain - the Private Detective. Up till then private detectives mostly did divorce work. They would burst into hotel rooms to find a married person engaged in adulterous activity. Often these were prearranged situations, set up to supply the necessary evidence to get round Britain’s tough divorce laws.

Then two things happened. The divorce laws were reformed - which meant the bottom fell out of the market. But at the same time home movie cameras became cheap and available. Private detectives began to spend their time hiding round corners and behind bushes - recording what their suspects got up to.

Here are two clips I’ve put together. The first is one of the old-school private detectives going to a hotel room in Brighton to “surprise” the occupants. Followed by a wonderful item from 1973 where one of the new breed shows how he can film people without them noticing. Or so he says. From the evidence you’d doubt it.

He mostly works for the insurance companies - following people and filming them to see if they are faking an injury they are claiming for. I love the 8mm cameras he uses.

The item also includes an interview with a man who is opposed to this snooping. The interviewer says surely they are just trying to find the truth - that a film can’t lie. The man’s response is great:

A film can lie very easily - the insurance company or the investigator can edit the film. Supposing someone has a bad limp that only occurs on wet days, or it’s a nervous spasm that comes on some days rather than others. The film is shown in court - and shows only the good days when there’s no limp

It’s wonderfully silly - but he has a point. Bit like documentary films.

Then - in the early 1970s - the private detectives found they could buy another kind of technology really cheaply.

Bugging equipment.

A new business grew up - often based in tiny rooms above electronic hardware shops in central London. An odd collection of electrical engineers and refugees from the music industry spent their days soldering together miniature transmitters and microphones - and selling them to the private investigators.

Here’s a really good film made about this new world in 1973. It not only reports on what is happening - but also catches the essence of what was coming. Most of the film is just set in one room where there are three hidden bugs as well as the normal camera and microphone recording the presenter who is called Linda Blandford. But she doesn’t know where they are.

The film evokes the strange repetitive nature of an enclosed world where everything is recorded and played back. Way ahead of its time. It’s a smart bit of reporting.

But then - at the end of the 1970s - people began to get worried. It began with revelations that the security agencies were eavesdropping not just on enemy spies but on their own people. Trades unions, radical journalists, politicians had all had their phones bugged.

It quickly spread to a wider concern about all the snooping and bugging that was going on, not just by the state but by private investigators, and by journalists. It was the start of the concern that Britain was becoming a “surveillance society”.

Here is a report from that time about the growing fears. By now the private detective had become a man in a phone box blowing a harmonic whistle into the mouthpiece.

Journalists also started to get keen on all this new technology. It allowed them to snoop and listen to people in new ways. Here is a great section from a fly-on-the-wall documentary made about the News of the World in 1981.

There’s a wonderful assistant editor who is convinced that Special Branch is bugging his phone. While reporter David Potts is testing his bugging equipment that’s going to be used by Tina the junior reporter to expose a child sex ring in North London.

What then happens to Mr Potts’ scoop is very funny. And it shows how difficult it was back then to bug someone. It’s obvious that what they needed to find was an easier way of snooping on people’s lives.

In 1987 the growing paranoia finally burst out. The trigger was a BBC television series called The Secret Society made by an investigative journalist called Duncan Campbell.

In 6 half-hour films Campbell pulled together everything that had been happening - and drew a frightening picture that still haunts the imagination of the liberal left.

Not only were the security services and the police secretly watching and listening to you - but dark elements of the “security state” had a corrupt relationship with the private security world. The films showed how investigators could easily buy confidential information on anyone.

And at the same time other secret bureaucracies were building giant listening networks - and keeping them hidden from politicians. One of the episodes was about the plans to launch a spy satellite called Zircon. Campbell revealed that the project had been kept hidden from the very politicians who were supposed to oversee it.

The government and the head of GCHQ panicked and put enormous pressure on the BBC, who caved in and said they wouldn’t transmit the episode. It was an enormous scandal - and it seemed to prove dramatically everything that Campbell was saying about the secret state that watched you - but didn’t want you to know things.

Here are some extracts from the series. In one bit Campbell reveals how, as well as the state, the private sector have developed huge computer databases full of information about millions of ordinary people. In a great sequence he goes to a market in Knaresborough in Yorkshire and asks people if they’d like to see what these databases know about them.

Their reactions of horror to what they are shown are so innocent. It’s like a lost world.

I’ve also included a brief bit from the Zircon film - so you can see what all the fuss was about. It didn’t remain banned for long - and has been shown since.

Looking back you can see how programmes like the Secret Society were part of the growing distrust of those who governed us. They seemed to prove that there were hidden, unaccountable and corrupt forces at the heart of the British state.

And the paranoia about surveillance carried on growing.

But at the very time this was happening - a new system of watching and monitoring people rose up. It would do pretty much what the spies and the private detectives had been trying to do - but much much more. It would record not just all our actions - but also be able to understand what was going on inside our heads - our wishes, our desires and our dislikes.

It was called the internet.

The problem was that the only way for the systems on the internet to work would be with our willing collusion. But rather than reject it - we all embraced it. And it flourished.

The key to why this happened lies in an odd experiment carried out in a computer laboratory in California in 1966.

A computer scientist called Joseph Weizenbaum was researching Artificial Intelligence. The idea was that computers could be taught to think - and become like human beings. Here is a picture of Mr Weizenbaum.

There were lots of enthusiasts in the Artificial Intelligence world at that time. They dreamt about creating a new kind of techno-human hybrid world - where computers could interact with human beings and respond to their needs and desires.

Weizenbaum though was sceptical about this. And in 1966 he built an intelligent computer system that he called ELIZA. It was, he said, a computer psychotherapist who could listen to your feelings and respond - just as a therapist did.

But what he did was model ELIZA on a real psychotherapist called Carl Rogers, who was famous for simply repeating back to the patient what they had just said. And that is what ELIZA did. You sat in front of a screen and typed in what you were feeling or thinking - and the programme simply repeated what you had written back to you - often in the form of a question.

Weizenbaum’s aim was to parody the whole idea of AI - by showing the simplification of interaction that was necessary for a machine to “think”. But when he started to let people use ELIZA he discovered something very strange that he had not predicted at all.

Here is a bit from a documentary where Weizenbaum describes what happened.

Weizenbaum found his secretary was not unusual. He was stunned - he wrote - to discover that his students and others all became completely engrossed in the programme. They knew exactly how it worked - that really they were just talking to themselves. But they would sit there for hours telling the machine all about their lives and their inner feelings - sometimes revealing incredibly personal details.

His response was to get very gloomy about the whole idea of machines and people. Weizenbaum wrote a book in the 1970s that said that the only way you were going to get a world of thinking machines was not by making computers become like humans. Instead you would have to do the opposite - somehow persuade humans to simplify themselves, and become more like machines.

But others argued that, in the age of the self, what Weizenbaum had invented was a new kind of mirror for people to explore their inner world. A space where individuals could liberate themselves and explore their feelings without the patronising elitism and fallibility of traditional authority figures.

When a journalist asked a computer engineer what he thought about having therapy from a machine, he said that in a way it was better because -

after all, the computer doesn’t burn out, look down on you, or try to have sex with you

ELIZA became very popular and lots of researchers at MIT had it on their computers. One night a lecturer called Mr Bobrow left ELIZA running. The next morning the vice president of a sales firm who was working with MIT sat down at the computer. He thought he could use it to contact the lecturer at home - and he started to type into it.

In reality he was talking to ELIZA - but he didn’t realise it.

This is the conversation that followed.

But, of course, ELIZA didn’t ring him. The Vice President sat there fuming - and then decided to ring the lecturer himself. And this is the response he got:

Vice President - “Why are you being so snotty to me?”

Mr Bobrow - “What do you mean I am being snotty to you?”

Out of ELIZA and lots of other programmes like it came an idea. That computers could monitor what human beings did and said - and then analyse that data intelligently. If they did this they could respond by predicting what that human being should then do, or what they might want.

The key to making it work was a system called Boolean Logic.

It had been invented back in 1847 by a mathematician called George Boole. One day he’d been walking across a field near Doncaster when he had what he described as a “mystical experience”. Boole said that he felt he had been “called on to express the workings of the human mind in symbolic or mathematical form”.

Boole’s idea was that everything that went on in the human mind could be reduced to a series of yes or no decisions that could be written out on paper using symbols.

His idea was pretty much ignored for over a hundred years - except by Lewis Carroll who, as well as writing Alice in Wonderland, wrote a book called Symbolic Logic that laid out and developed Boole’s ideas.

But when computers were invented people immediately realised that Boole’s idea could be used to allow the computers to “think” in a reasoned way. Computers were digital - they were either 0 or 1 - and that was the same as “yes” and “no”. So Boolean Logic became central to the way computers work today. They are full of endless decision trees saying “if this happened then this, and not this”.
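To make the idea concrete, here is a minimal sketch - my own illustration, with an invented rule and invented names, not anything from the programmes described here - of how Boole’s yes/no algebra becomes the kind of decision tree a computer actually runs.

```python
# A toy example of Boolean logic at work: every decision reduces to chains of
# yes/no (1/0) tests. The rule below is invented purely for illustration.

def decide(liked_last_film: bool, film_in_catalogue: bool,
           already_seen_it: bool) -> bool:
    # "If this happened then this, and not this"
    return liked_last_film and film_in_catalogue and not already_seen_it

for liked, available, seen in [(True, True, False), (True, True, True)]:
    print(decide(liked, available, seen))   # True, then False
```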

Here is a picture of George Boole taken in 1864. It was just before he died and it is one of the earliest portrait photos - he’d stopped off at the new London School of Photography at 174 Regent Street.

In the early 1990s researchers became convinced they could get computers to predict what people might want.

It started in 1992 with a small unit set up at the University of Minnesota. They called themselves GroupLens. Their idea was that if you could collect information on what people liked and then compare the data, you would find patterns - and from that you could make predictions.

They called it “Collaborative Filtering” - and the logic was beautifully Boolean. As one researcher put it -

If Jack loves A and B and Jill loves A, B, and C then Jack is more likely to love C.

They began by comparing the news articles that people recommended in online newsgroups-

GroupLens monitored user ratings of news articles. After a user had rated several items GroupLens was able to make recommendations about other articles the user might like. The results were astounding. Users read articles that we recommended highly three to four times as often as those we didn’t
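Here is a rough sketch of that user-matching logic - a toy version with invented names and ratings, nothing like GroupLens’s actual code - just to show how “Jack and Jill” style comparisons turn into recommendations.

```python
# A minimal, hypothetical sketch of user-based collaborative filtering:
# "if Jack loves A and B and Jill loves A, B and C, then Jack is more likely
# to love C". Ratings (1-5) below are invented.

ratings = {
    "jack": {"A": 5, "B": 5},
    "jill": {"A": 5, "B": 4, "C": 5},
    "joe":  {"A": 1, "B": 2, "D": 5},
}

def similarity(u: dict, v: dict) -> float:
    """Crude agreement score over the items both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    return sum(1.0 - abs(u[i] - v[i]) / 4.0 for i in shared) / len(shared)

def recommend(user: str, k: int = 1) -> list:
    """Suggest items the user hasn't rated, weighted by neighbour similarity."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], their_ratings)
        for item, score in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * score
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("jack"))  # ['C'] - Jill's taste pulls C ahead of Joe's D
```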

Then, in 1994, a young professor at MIT did the same with music. She was called Pattie Maes - and she designed a system called RINGO. She set up a website where people listed songs and bands they liked. One user described how it worked

What Ringo did was give you 20 or so music titles by name, then asked one by one whether you liked it, didn’t like it, or knew it at all. That initialized the system with a small DNA of your likes and dislikes. Thereafter, when you asked for a recommendation, the program matched your DNA with that of all the others in the system. If some of the matches were not successful - saying so would perfect your string of bits. Next time would be even better

Again it worked amazingly well. And Maes started to do the same with movies. Then the University of Minnesota group had a brainwave. If these systems could tell you what articles and songs you would like - why couldn’t they tell you what products you would like as well?

So in 1997 they set up a company called Net Perceptions. And one of their first clients was Amazon.

But one of Amazon’s young software engineers called Greg Linden soon realised that there was a problem with these systems. You had to spend all your time finding out what people said they liked. And as the systems became bigger and bigger - this was proving incredibly cumbersome.

Plus - people were fickle and they changed their mind a lot. Or - in computer engineer speak - they were “dynamic”.

Linden saw what the solution was. You give up asking what people said they liked and instead just look at what they have actually done. You assemble all the data from people’s history - all the stuff they’ve looked at and bought in the past - and then compare that with other people’s pasts.

Out of that came patterns and correlations that the human brain could not possibly see - but from those correlations you could tell what individuals would want in the future.
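As a rough illustration of that shift from stated preferences to behaviour, here is a toy sketch - invented shopping baskets, nothing like Amazon’s real system - that recommends purely from co-purchase patterns in past histories.

```python
# A hypothetical sketch of recommending from behaviour rather than stated taste:
# count which items tend to be bought together, then suggest the strongest
# co-purchases of things this customer has already bought. Data is invented.

from collections import Counter
from itertools import combinations

purchase_histories = [
    {"kettle", "teapot", "mugs"},
    {"kettle", "teapot"},
    {"kettle", "toaster"},
    {"teapot", "mugs"},
]

# Build co-purchase counts from everyone's past behaviour.
co_bought = Counter()
for basket in purchase_histories:
    for a, b in combinations(sorted(basket), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(history: set, k: int = 2) -> list:
    """Items most often bought alongside what this customer already owns."""
    scores = Counter()
    for owned in history:
        for (a, b), count in co_bought.items():
            if a == owned and b not in history:
                scores[b] += count
    return [item for item, _ in scores.most_common(k)]

print(recommend({"kettle"}))  # e.g. ['teapot', 'mugs'] from these invented baskets
```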

Linden was part of what was called The Personalization Group in Amazon. He said:

the joke in the group was that if the system were working perfectly, Amazon should just show you one book - which is the next book you are going to buy.

And it worked - sales soared, and Jeff Bezos who runs Amazon allegedly crawled up to Linden on his hands and knees saying “I am not worthy”.

What Amazon and many other companies began to do in the late 1990s was build up a giant world of the past on their computer servers. A historical universe that is constantly mined to find new ways of giving back to you today what you liked yesterday - with variations.

Interestingly, one of the first people to criticise these kinds of “recommender systems” for their unintended effect on society was Pattie Maes, who had invented RINGO. She said that the inevitable effect is to narrow and simplify your experience - leading people to get stuck in a static, ever-narrowing version of themselves.

Stuck in the endless you-loop. Just like with ELIZA.

But like so much of the modern digital world - these new systems are very abstract. There is little to see - nothing happens apart from endless fingers on keyboards. So it’s difficult to bring these effects into any kind of real focus.

Last year - in a live show I did with Massive Attack - we tried to evoke this new world. We used a song from 1979 called “Bela Lugosi’s Dead” - which I love because it has a very powerful feel of repetition. The audience were surrounded by 11 twenty-five foot high screens.

I’m not sure how successfully we did it - but what I was trying to show is how your past is continually being replayed back to you - like a modern ghost. And it means we stand still unable to move forwards. Like a story that’s got stuck.

I’ve put a short bit of it together from some camera-phone videos shot by the audience in New York. It’s a bit rough - as is the sound - but you’ll get a sense of it.

For all the online companies that use these systems, the fact that they tend to inhibit change is an unintended consequence.

But there are other - more powerful - systems that grew up in the 1990s whose explicit aim is exactly that. To prevent the world from changing, and hold it stable.

And they operate in exactly the same way - by constantly monitoring the world and then searching their vast databases for patterns and correlations.

ALADDIN is the name of an incredibly powerful computer network that is based in a tiny town called East Wenatchee - it’s in the middle of nowhere in Washington State in North America.

Aladdin guides the investment of over $11 trillion of assets around the world.

This makes it incredibly powerful. Aladdin is owned by a company called Blackrock that is the biggest investor in the world. It manages as much money as all the hedge-funds and the private equity firms in the world put together. And its computer watches over 7% of all the investments in the world.

This is unprecedented - it’s a kind of power never seen before. But Blackrock is not run by a greedy, rapacious financier - the traditional figure of recent journalism. Blackrock is run by the very opposite - a very cautious man called Mr Fink.

Here he is. He’s called Larry Fink.

Back in 1986 Mr Fink was working his way up the investment bank First Boston when an unpredicted fall in interest rates caused a disaster for the bank. He swore that it would never happen again - and over the next 20 years he built Aladdin.

It has within its memory a vast history of the past 50 years - not just financial - but all kinds of events. What it does is constantly take things that happen in the present day and compare them to events in the past. Out of the millions and millions of correlations - Aladdin then spots possible disasters - possible futures - and moves the investments to avoid that future happening.

I can’t over-emphasise how powerful Blackrock’s system is in shaping the world - it’s more powerful in some respects than traditional politics.

And it raises really important questions. Because its aim is not to change the world - but to keep it stable. Preventing any development that’s too risky. And when you are moving $11 trillion around to do that - it is a really important new force.

But it’s boring. And there is no story. Just patterns.

Here is some video of Aladdin. A few weeks ago I was filming in Idaho - and decided to go and have a look at the buildings that house Aladdin. I had asked Blackrock if I could have a look inside. Surprisingly the guy in charge of their PR said yes. But a little while later he left the company in what seemed to be a reorganisation.

But it didn’t really matter - because you know what it will look like. Row upon row of servers roaring away, and surrounded by giant batteries that will rescue the system if the power supply goes.

Here’s the shot from the car driving past the computer sheds that house Aladdin. A 37-second tracking shot, and you can see how dull it is.

It is the modern world of power - and it’s incredibly boring. Nothing to film, run by a cautious man who is in no way a wolf of Wall Street. It’s how power works today. It hides in plain sight - through sheer boringness and dullness.

No wonder we find it difficult to tell stories about it.

There are also a growing number of systems that use data from the past to predict whether individuals are going to commit crimes in the future.

On the surface it’s laudable. But it’s also rather weird - and in some cases can be false and dangerous.

In every case the systems monitor individuals’ behaviour and then see if it shares similar characteristics with groups of other people stored on the databases who have behaved dangerously in the past.

There is software being used by the Department for Work and Pensions that detects fraudsters by analysing the voices of people who ring its call centres. If you ask the wrong kind of questions - or even ask the right kind of questions in the wrong way - it puts you in the dangerous group.

The government also has what it calls a Social Exclusion unit which has an Action Plan. Its aim is to use data to predict when things might go wrong in poor families - even before birth. In one scheme the unborn child of a pregnant mother might be categorised as potentially being a future criminal.

This is based on things like the mother’s age, her poor educational achievements, her drug use and her own family history. If the system decides that the unborn child is a potentially dangerous criminal the response is not exactly Philip K Dick - a nurse is sent round to give advice on parenting.

But the oddest is STATIC-99. It’s a way of predicting whether sex offenders are likely to commit crimes again after they have been released. In America this is being used to decide whether to keep them in jail even after they have served their full sentence.

STATIC-99 works by scoring individuals on criteria such as age, number of sex-crimes and sex of the victim. These are then fed into a database that shows recidivism rates of groups of sex-offenders in the past with similar characteristics. The judge is then told how likely it is - in percentage terms - that the offender will do it again.
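To show the general shape of such a tool - with invented items, weights and rates, emphatically not the real STATIC-99 values - the scoring-and-lookup step might look something like this.

```python
# A purely illustrative sketch of an actuarial risk tool of this general kind.
# The items, weights and rates below are invented - NOT the real STATIC-99 values.

def risk_score(age_under_25: bool, prior_sex_offences: int,
               male_victim: bool, stranger_victim: bool) -> int:
    """Add up a handful of yes/no and count-based items into one number."""
    score = 1 if age_under_25 else 0
    score += min(prior_sex_offences, 3)          # counts are usually capped
    score += 1 if male_victim else 0
    score += 1 if stranger_victim else 0
    return score

# Historical re-offending rates of past offenders in each score band (invented).
GROUP_RATE = {0: 0.05, 1: 0.08, 2: 0.12, 3: 0.20, 4: 0.30, 5: 0.40, 6: 0.45}

score = risk_score(age_under_25=True, prior_sex_offences=1,
                   male_victim=False, stranger_victim=True)
# The number handed to the judge is simply the past rate for the whole score band.
print(f"score {score}: {GROUP_RATE[score]:.0%} of past offenders with this score re-offended")
```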

The problem is that it is not true. What the judge is really being told is the likely percentage of people in the group who will re-offend. There is no way the system can predict what an individual will do. A recent, very critical report on such systems said that the margin of error for individuals could be as great as between 5% and 95%.

In other words completely useless. Yet people are being kept in prison on the basis that such a system predicts they might do something bad in the future.

Opposition Research - the constant recording of everything a politician says and does - fits into the same pattern.

But it’s a system of wonk-driven surveillance that goes even further - because it has the unforeseen consequence of forcing politicians to behave like machines. It leads them to constantly repeat what they said yesterday, and leaves them unable to make imaginative or creative leaps.

Every night the digital tracker sends back what that politician said or did today. The first aim is to find something outrageous in that day’s video that can be given to the media.

Here is one classic example. It is Jon Bruning who compared welfare recipients to racoons. His speech up to that point is actually quite a funny right-wing attack on what he sees as the absurdity of environmental protection. But then he went too far.

And he was shamed. And he lost the election.

But the other - bigger - task of the opposition researchers is to spend hours comparing what the politician said today with their recorded past that is stored in the computers. They look for contradictions and if they find one they release the videos to the media and again the politician is shamed.

So the politicians become frozen and immobile - because they have to have a blameless history. Which again seems laudable. But it means they can’t change their mind. They can’t adapt to the world as it changes.

Although if ALADDIN has its way that won’t matter.

George Boole - who helped start all this with his Boolean Logic - had an extraordinary family.

One of them, his son-in-law, was called Charles Howard Hinton. He too was a mathematician, and he became famous at the end of the nineteenth century when he wrote a book called The Fourth Dimension.

It said that time was an illusion. That everything that has happened and that will happen already exists in a four-dimensional space. Human beings, Hinton said, don’t realise this because they don’t have the ability to see this four-dimensional world.

Our idea of time - Hinton said - is just a line that goes across this four-dimensional space like a cross section. But we can’t see it.

The cumulative effect of all today’s systems that store up data from the past is to create something rather like Hinton’s world. Everything that has already happened is increasingly stored on the giant servers in places like East Wenatchee.

It never goes away. And this past bears down on the present - continually being replayed to try and avoid anything that is dangerous and unpredictable.

What is missing is the other half of Hinton’s world. The future - with all its dangers, but also all its possibilities.

But George Boole had another daughter called Ethel. She had an amazing life - which showed that there is another way. Because Ethel believed in the future.

Here she is in a Boole family photo - taken after her father died. Ethel is to her mother’s right. (Incidentally the rest of the Boole family that you see in this photo also had amazing lives - but that’s another story)

When Ethel was 15 she read a book about the Italian revolutionary Mazzini. It inspired her - and she wore clothes like him, dressing in black in mourning for the state of the world.

In 1889 she met a Polish revolutionary called Wilfrid Michael Voynich. He had escaped from Siberia and had arrived penniless in London. They fell in love and married, and Ethel went off to Russia to smuggle in illegal revolutionary publications.

Then she met the master-spy Sydney Reilly. He is one of the most extraordinary figures in the odd world of espionage. He’d been born in the Ukraine, but turned against his family and faked his own suicide to escape.

After all kinds of adventures, including rescuing three British intelligence agents from the swamps of the Amazon jungles, Reilly went to London where he spent his time gambling - and he and Ethel began a passionate affair. They eloped to Italy where Reilly bared his soul to Ethel - telling her the extraordinary story of his life.

Then Reilly deserted her - and went off to Russia where he worked as a secret agent for the British. Ian Fleming is said to have used Reilly as the model for James Bond.

Ethel was heartbroken - and she wrote a novel called The Gadfly which, although she never admitted it, her biographer says is obviously based on the early adventures of Sydney Reilly.

It’s the most amazing book. It’s an over-the-top melodrama set in Italy about the hero Arthur’s battle against the church and the corrupt state - and his treacherous family. At the same time it is about his passionate love for an English girl - Gemma. It ends with Arthur being slowly tortured and then condemned to be shot.

Its message though is a revolutionary one. Arthur is sacrificed so that humankind can be redeemed, opening the way to a realisation of the future possibilities of the world - once the old oppressive forces have been overthrown.

Here is Ethel with a wonderful revolutionary look in her eyes.

The Gadfly was published in 1897 in New York - under Ethel’s married name, E.L. Voynich. No British publisher would touch it because of its “outrageous and horrible character”. But then it was published in Russia and became an astonishing success. One writer describes how all the young Bolsheviks read it and “it virtually became the bible of the revolution”.

By the 1960s it was estimated that 250 million Russian teenagers had read the Gadfly in translation. And polls showed that Arthur was consistently the favourite hero of Soviet youth. And in 1955 a film version was made - with a soundtrack by Shostakovich - which won an award at the Cannes film festival.

In 1920 Ethel went back to her husband Wilfrid Voynich. He had moved to New York and had become one of the world’s greatest experts on, and dealers in, rare books.

His most famous purchase was a mysterious manuscript written in code that has come to be known as The Voynich Manuscript. No one has ever been able to break the code - it seems to have many scientific references, and herbal and astronomical illustrations.

Voynich believed that it was written by the philosopher Roger Bacon - and then came into the possession of the legendary John Dee who was a mathematician at the court of Queen Elizabeth.

After Voynich died, Ethel kept the manuscript in a safe deposit box in New York for thirty years - and then sold it in 1960. And it ended up in Yale University. One of the great experts in cryptography wrote:

The Voynich manuscript lies quietly inside its slipcase in the blackness of Yale’s vaults, possibly a time-bomb in the history of science, awaiting the man who can interpret what is still the most mysterious manuscript in the world.

Ethel Boole died in 1960 at the age of 96. Still believing in the power of revolution to change the world. Here is one of the most beautiful sections of Shostakovich’s music for the Gadfly - cut to images of the strange Boolean world that we live in today.

Link: Out of Sight

The Internet delivered on its promise of community for blind people, but accessibility is easy to overlook.

I have been blind since birth. I’m old enough to have completed my early schooling at a time when going to a special school for blind kids was the norm. In New Zealand, where I live, there is only one school for the blind. It was common for children to leave their families when they were five, to spend the majority of the year far from home in a school hostel. Many family relationships were strained as a result. Being exposed to older kids and adults with the same disability as you, however, can supply you with exemplars. It allows the blind to see other blind people being successful in a wide range of careers, raising families and being accepted in their local community. A focal point, such as a school for the blind, helps foster that kind of mentoring.

The Internet has expanded the practical meaning of the word community. New technology platforms aren’t often designed to be accessible to people unlike the designers themselves, but that doesn’t mean they aren’t used by everyone who can use them. For blind people, the Internet has allowed an international community to flourish where there wasn’t much of one before, allowing people with shared experiences, interests, and challenges to forge a communion. Just as important, it has allowed blind people to participate in society in ways that have often otherwise been foreclosed by prejudice. Twitter has been at the heart of this, helping bring blind people from many countries and all walks of life together. It represents one of the most empowering aspects of the Internet for people with disabilities — its fundamentally textual nature and robust API supporting an ecosystem of innovative accessible apps have made it an equalizer. Behind the keyboard, no one need know you’re blind or have any other disability, unless you choose to let them know.

With the mainstreaming of blind kids now the norm, real-world networking opportunities are less frequent. That’s why the Internet has become such an important tool in the “blind community.” While there’s never been a better time in history to be blind, the best could be yet to come — provided the new shape the Internet takes remains accessible to everyone. In terms of being able to live a quality, independent life without sight, the Internet has been the most dramatic change in the lives of blind people since the invention of Braille. I can still remember having to go into a bank to ask the teller to read my bank balances to me, cringing as she read them in a very loud, slow voice (since clearly a blind person needs to be spoken to slowly).

Because of how scattered the blind community is and how much desire there is for us to share information and experiences, tech-savvy blind people were early Internet adopters. In the 1980s, as a kid with a 2400-baud modem, I’d make expensive international calls from New Zealand to a bulletin-board system in Pittsburgh that had been established specifically to bring blind people together. My hankering for information, inspiration, and fellowship meant that even as a cash-strapped student, I felt the price of the calls was worth paying.

Blind people from around the world have access to many technologies that get us online. Windows screen readers speak what’s on the screen, and optionally make the same information available tactually via a Braille display. Just as some sighted people consider themselves “visual learners,” so some blind people retain information better when it’s under their fingertips. Yes, contrary to popular belief, Braille is alive and well, having enjoyed a renaissance thanks to refreshable Braille display technology and products like commercial eBooks.

Outside the Windows environment, Apple is the exemplary player. Every Mac and iOS device includes a powerful screen reader called VoiceOver. Before Apple added VoiceOver to the iPhone 3GS in 2009, those of us who are blind saw the emergence of touch screens as a real threat to our hard-won gains. We’d pick up an iPhone, and as far as we were concerned, it was a useless piece of glass. Apple came up with a paradigm that made touch screens useable by the blind, and it was a game changer. Android has a similar product which, we hope, will continue to mature.

All this assistive technology means that the technological life I lead isn’t much different from that of a sighted person. I’m sitting at my desk in my office, writing this article in Microsoft Word. Because I lack the discipline to put my iPhone on “Do Not Disturb”, the iPhone is chiming at me from time to time, and I lean over to check the notification. Like other blind people, I use the Internet to further my personal and professional interests that have nothing to do with blindness.

But social trends haven’t kept up with technological ones. It’s estimated that in the United States, around 70 percent of working-age blind people are unemployed. And the biggest barrier posed by blindness is not lack of sight – it’s other people’s ignorance. Since sight is such a dominant sense, a lot of potential employers close their eyes and think, “I couldn’t do this job if I couldn’t see, so she surely can’t either”. They forget that blindness is our normality. Deprive yourself of such a significant source of information by putting on a blindfold, and of course you’re going to be disorientated. But that’s not the reality we experience. It’s perfectly possible to function well without sight.

Just as there are societal barriers, so we’ve yet to reach an accessible tech utopia – far from it. Blind people are inhibited in our full participation in society because not all online technologies are accessible to screen reading software. Most of this problem is due to poor design, some of it due to the choices made by content creators. Many blind people enjoy using Twitter, because text messages of 140 characters are at its core. If you tell me in a tweet what a delicious dinner you’ve had, I can read that and be envious. If you simply take a picture of your dinner and don’t include any text in the tweet, I’m out of the loop. Some blind people were concerned when reporters appeared to have spotted a new feature that allowed full tweets to be embedded in other tweets as images, which would have meant the conversations that thrived on this platform would be out of reach for our screen readers. Twitter, to its credit, reached out to us and made clear this was not the case. But even though it turned out to be a false alarm, the Twitter episode brought home to many of us just how fragile accessibility really is.

My voice is sometimes not heard on popular mainstream sites, due to a technology designed to thwart spam bots. Many fully-sighted people complain about CAPTCHA, the hard-to-read characters you sometimes need to type into a form before you can submit it. Since these characters are graphical, they can stop a blind person in their tracks. Plug-ins can assist in many cases, and sometimes an audio challenge is offered. But the audio doesn’t help people who are deaf as well as blind. It’s encouraging to see an increasing number of sites trying mathematical or simple word puzzles to keep the spammers out but let disabled people in.

Many in the media seem wary of “post-text Internet,” a term popularized by economics blogger Felix Salmon in a post explaining why he was joining a television station, Fusion. “Text has had an amazing run, online, not least because it’s easy and cheap to produce,” he wrote. But for digital storytelling, “the possibilities are much, much greater.” Animation, videos, and images appeal to him as an arsenal of tools for a more “immersive” experience. If writers feel threatened by this new paradigm, he suggests, it’s because they’re unwilling to experiment with new models. But for blind people, the threat could be much more grave.

Some mobile apps and websites, despite offering information of interest, are inaccessible. Usually this is because links and buttons containing images don’t offer alternative textual labels. This is where the worry about being shut out of a “post-text” internet feels most acute. While adding text is an easy way to ensure access to everyone, a wholesale shift in the Internet’s orientation from text to image would further enable designers’ often lax commitment to accessibility. I feel good about how the fusion of mainstream and assistive technologies has facilitated inclusion, but the pace of technological change is frenetic. Hard-won gains are easily lost. It’s therefore essential that we as a society come down on the side of technologies that allow access for all.
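As a small illustration of the problem described here - my own sketch, with an invented page, not code from any real site - a check for images that carry no alternative text for screen readers might look like this.

```python
# A small, illustrative check for the accessibility problem described above:
# images with no alternative text are invisible to screen readers.
# The HTML snippet is invented for the example.

from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):           # absent or empty alt text
                self.missing.append(attrs.get("src", "<no src>"))

page = """
<a href="/buy"><img src="buy-button.png"></a>
<img src="dinner.jpg" alt="A bowl of ramen with a soft-boiled egg">
"""

checker = MissingAltChecker()
checker.feed(page)
print("Images missing alt text:", checker.missing)  # ['buy-button.png']
```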

While we must be vigilant, there is cause to be optimistic. Blindness often begins to hit teenagers hard at the time their sighted peers are starting to drive. Certainly, not being able to get into a car and drive is a major annoyance of blindness. As a dad to four kids, I have to plan our outings a lot more carefully, because of the need to rely on public transport. Self-driving car technology has the potential to change the lives of blind people radically. While concerns persist about Google’s less than stellar track-record on accessibility, products like Google Glass could potentially be used to provide feedback based on a combination of object/face recognition and crowd-sourcing that could help us navigate unfamiliar surroundings more efficiently. Add to that the ability to fully control currently inaccessible, touch-screen-based appliances, and the “Internet of things” has potential for mitigating the impact of blindness – provided we as a society choose to proceed inclusively.

Not only has the Internet expanded the concept of “community”, it has redefined the ways in which traditional communities engage with one another. I don’t need to go to the supermarket and ask for a shelf-packer to help me shop; I can investigate the overwhelming number of choices of just about any product, and take my pick, totally independently. When I interact with any person or business online, they need not know I’m blind, unless I choose to tell them. To disclose or not to disclose is my choice, in any situation. That’s liberating and empowering.

But to fulfill all the promise of the Internet, we must make sure that, just as someone in a wheelchair can negotiate a curb cut, open a door or use an elevator, the life-changing power of the Internet is available to us all – whether we see it, hear it, or touch it.

Link: The Lights Are On but Nobody’s Home

Who needs the Internet of Things? Not you, but corporations who want to imprison you in their technological ecosystem

Prepare yourself. The Internet of Things is coming, whether we like it or not apparently. Though if the news coverage — the press releases repurposed as service journalism, the breathless tech-blog posts — is to be believed, it’s what we’ve always wanted, even if we didn’t know it. Smart devices, sensors, cameras, and Internet connectivity will be everywhere, seamlessly and invisibly integrated into our lives, and it will make society more harmonious through the gain of a million small efficiencies. In this vision, the smart city isn’t plagued by deteriorating infrastructure and underfunded social services but is instead augmented with a dizzying collection of systems that ensure that nothing goes wrong. Resources will be apportioned automatically, mechanics and repair people summoned by the system’s own command. We will return to what Lewis Mumford described as a central feature of the Industrial Revolution: “the transfer of order from God to the Machine.” Now, however, the machines will be thinking for themselves, setting society’s order based on the false objectivity of computation.

According to one industry survey, 73 percent of Americans have not heard of the Internet of Things. Another consultancy forecasts $7.1 trillion in annual sales by the end of the decade. Both might be true, yet the reality is that this surveillance-rich environment will continue to be built up around us. Enterprise and government contracts have floated the industry to this point: To encourage us to buy in, sensor-laden devices will be subsidized, just as smartphones have been for years, since companies can make up the cost difference in data collection.

With the Internet of Things, promises of savings and technological empowerment are being implemented as forces of social control. In Chicago, this year’s host city for Cisco’s Internet of Things World Forum, Mayor Rahm Emanuel has used Department of Homeland Security grants to expand Chicago’s surveillance-camera system into the largest in the country, while the city’s police department, drawing on an extensive database of personal information about residents, has created a “heat list” of 400 people to be tracked for potential involvement in violent crime. In Las Vegas, new streetlights can alert surrounding people to disasters; they also have the ability to record video and audio of the surrounding area and track movements. Sometime this year, Raytheon plans to launch two aerostats — tethered surveillance blimps — over Washington, D.C. In typical fashion, this technology, pioneered in the battlefields of Afghanistan and Iraq, is being introduced to address a non-problem: the threat of enemy missiles launched at our capital. When they are not on the lookout for incoming munitions, the aerostats and their military handlers will be able to enjoy video coverage of the entire metropolitan area.

The ideological premise of the Internet of Things is that surveillance and data production equal a kind of preparedness. Any problem might be solved or pre-empted with the proper calculations, so it is prudent to digitize and monitor everything.

This goes especially for ourselves. The IoT promises users an unending capability to parse personal information, making each of us a statistician of the self, taking pleasure and finding reassurance in constant data triage. As with the quantified self movement, the technical ability for devices to collect and transmit data — what makes them “smart” — is its own achievement; the accumulation of data is represented as its own reward. “In a decade, every piece of apparel you buy will have some sort of biofeedback sensors built in it,” the co-founder of OMsignal told Nick Bilton, a New York Times technology columnist. Bilton notes that “many challenges must be overcome first, not the least of which is price.” But convincing people they need a shirt that can record their heart rate is apparently not one of these challenges.

Vessyl, a $199 drinking cup Valleywag’s Sam Biddle mockingly (and accurately) calls “a 13-ounce, Bluetooth-enabled, smartphone-syncing, battery-powered supercup,” analyzes the contents of whatever you put in it and tracks your hydration, calories, and the like in an app. There is not much reason to use Vessyl, beyond a fetish of the act of measurement. Few people see such a knowledge deficit about what they are drinking that they feel they should carry an expensive cup with them at all times. But that has not stopped Vessyl from being written up repeatedly in the press. Wired called Vessyl “a fascinating milestone … a peek into some sort of future.”

But what kind of future? And do we want it? The Internet of Things may require more than the usual dose of high-tech consumerist salesmanship, because so many of these devices are patently unnecessary. The improvements they offer to consumers — where they exist — are incremental, not revolutionary and always come at some cost to autonomy, privacy, or security. Between stories of baby monitors being hacked, unchecked backdoors, and search engines like Shodan, which allows one to crawl through unsecured, Internet-connected devices, from traffic lights to crematoria, it’s bizarre, if not disingenuous, to treat the ascension of the Internet of Things as foreordained progress.

As if anticipating this gap between what we need and what we might be taught to need, industry executives have taken to the IoT with the kind of grandiosity usually reserved for the Singularity. Their rhetoric is similarly eschatological. “Only one percent of things that could have an IP address do have an IP address today,” said Padmasree Warrior, Cisco’s chief technology and strategy officer, “so we like to say that 99 percent of the world is still asleep.” Maintaining the revivalist tone, she proposed, “It’s up to our imaginations to figure out what will happen when the 99 percent wakes up.”

Warrior’s remarks highlight how consequential marketing, advertising, and the swaggering keynotes of executives will be in creating the IoT’s consumer economy. The world will not just be exposed to new technologies; it will be woken up, given the gift of sight, with every conceivable object connected to the network. In the same way, Nest CEO Tony Fadell, commenting on his company’s acquisition by Google, wrote that his goal has always been to create a “conscious home” — “a home that is more thoughtful, intuitive.”

On a more prosaic level, “smart” has been cast as the logical, prudent alternative to dumb. Sure, we don’t need toothbrushes to monitor our precise brushstrokes and offer real-time reports, as the Bluetooth-enabled, Kickstarter-funded toothbrush described in a recent article in The Guardian can. There is no epidemic of tooth decay that could not be helped by wider access to dental care, better diet and hygiene, and regular flossing. But these solutions are so obvious, so low-tech and quotidian, as to be practically banal. They don’t allow for the advent of an entirely new product class or industry. They don’t shimmer with the dubious promise of better living through data. They don’t allow one to “transform otherwise boring dental hygiene activities into a competitive family game.” The presumption that 90 seconds of hygiene needs competition to become interesting and worth doing is among the more pure distillations of contemporary capitalism. Internet of Things devices, and the software associated with them, are frequently gamified, which is to say that they draw us into performances of productivity that enrich someone else.

In advertising from AT&T and others, the new image of the responsible homeowner is an informationally aware one. His house is always accessible and transparent to him (and to the corporations, backed by law enforcement, providing these services). The smart home, in turn, has its own particular hierarchy, in which the manager of the home’s smart surveillance system exercises dominance over children, spouses, domestic workers, and others who don’t have control of these tools and don’t know when they are being watched. This is being pushed despite the fact that violent crime has been declining in the United States for years, and those who do suffer most from crime — the poor — aren’t offered many options in the Internet of Things marketplace, except to submit to networked CCTV and police data-mining to determine their risk level.

But for gun-averse liberals, ensconced in low-crime neighborhoods, smart-home and digitized home-security platforms allow them to act out their own kind of security theater. Each home becomes a techno-castle, secured by the surveillance net.

The surveillance-laden house may rob children of essential opportunities for privacy and personal development. One AT&T video, for instance, shows a middle-aged father woken up in bed by an alert from his security system. He grabs his tablet computer and, sotto voce, tells his wife that someone’s outside. But it’s not an intruder, he says wryly. The camera cuts to show a teenage girl, on the tail end of a date, talking to a boy outside the home. Will they or won’t they kiss? Suddenly, a garish bloom of light: the father has activated the home’s outdoor lights. The teens realize they are being monitored. Back in the master bedroom, the parents cackle. To be unmonitored is to be free — free to be oneself and to make mistakes. A home ringed with motion-activated lights, sensors, and cameras, all overseen by imperious parents, would allow for little of that.

In the conventional libertarian style, the Internet of Things offloads responsibilities to individuals, claiming to empower them with data, while neglecting to address collective, social issues. And meanwhile, corporations benefit from the increased knowledge of consumers’ habits, proclivities, and needs, even learning information that device owners don’t know themselves.

Tech industry doyen Tim O’Reilly has predicted that “insurance is going to be the native business model for the Internet of Things.” To enact this business model, companies will use networked devices to pull more data on customers and employees and reward behavior accordingly, as some large corporations, like BP, have already done in partnership with health-care companies. As the number of data sources proliferates, opportunities increase for behavioral management as well as on-the-fly price discrimination.

Through the dispersed system of mass monitoring and feedback, behaviors and cultures become standardized, directed at the algorithmic level. A British insurer called Drive Like a Girl uses in-car telemetry to track drivers’ habits. The company says that its data shows that women drive better and are cheaper to insure, so they deserve to pay lower rates. So far, perhaps, so good. Except that the European Union has instituted regulations stating that insurers can’t offer different rates based on gender, so Drive Like a Girl is using tracking systems to get around that rule, reflecting the fear of many IoT critics that vast data collection may help banks, realtors, stores, and other entities dodge the protections put in place by the Fair Credit Reporting Act, HIPAA, and other regulatory measures.

This insurer also exemplifies how algorithmic biases can become regressive social forces. From its name to its site design to how its telematics technology is implemented, Drive Like a Girl is essentializing what “driving like a girl” means — it’s safe, it’s pink, it’s happy, it’s gendered. It is also, according to this actuarial morality, a form of good citizenship. But what if a bank promised to offer loan terms to help someone “borrow like a white person,” premised on the notion that white people were associated with better loan repayments? We would call it discriminatory and question the underlying data and methodologies and cite histories of oppression and lack of access to banking services. With automated, IoT-driven marketplaces there is no room for taking into account these complex sensitivities.

As the Internet of Things expands, we may witness an uncomfortable feature creep. When the iPhone was introduced, few thought its gyroscopes would be used to track a user’s steps, sleep patterns, or heartbeat. Software upgrades or novel apps can be used to exploit hardware’s hidden capacities, not unlike the way hackers have used vending machines and HVAC systems to gain access to corporate computer networks. To that end, many smart thermostats use “geofencing” or motion sensors to detect when people are at home, which allows the device to adjust the temperature accordingly. A company, particularly a conglomerate like Google with its fingers in many networked pies, could use that information to serve up ads on other screens or nudge users towards desired behaviors. As Jathan Sadowski has pointed out here, the relatively trivial benefit of a fridge alerting you when you’ve run out of a product could be used to encourage you to buy specially advertised items. Will you buy the ice cream for which your freezer is offering a coupon? Or will you consult your health-insurance app and decide that it’s not worth the temporary spike in your premiums?

This combination of interconnectivity and feature creep makes Apple’s decision to introduce platforms for home automation and health-monitoring seem rather cunning. Cupertino is delegating much of the work to third-party device makers and programmers — just as it did with its music and app stores — while retaining control of the infrastructure and the data passing through it. (Transit fees will be assessed accordingly.) The writer and editor Matt Buchanan, lately of The Awl, has pointed out that, in shopping for devices, we are increasingly choosing among competing digital ecosystems in which we want to live. Apple seems to have apprehended this trend, but so have two other large industry groups — the Open Interconnect Consortium and the AllSeen alliance — with each offering its own open standard for connecting many disparate devices. Market competition, then, may be one of the main barriers to fulfilling the prophetic promise of the Internet of Things: to make this ecosystem seamless, intelligent, self-directed, and mostly invisible to those within it. For this vision to come true, you would have to give one company full dominion over the infrastructure of your life.

Whoever prevails in this competition to connect, well, everything, it’s worth remembering that while the smartphone or computer screen serves as an access point, the real work — the constant processing, assessment, and feedback mechanisms allowing insurance rates to be adjusted in real-time — is done in the corporate cloud. That is also where the control lies. To wrest it back, we will need to learn to appreciate the virtues of products that are dumb and disconnected once again.

Link: Free to Choose A or B

There has already been a lot written about the Facebook mood-manipulation study (here are three I found particularly useful; Tarleton Gillespie has a more extensive link collection here), and hopefully the outrage sparked by it will mark a turning point in users’ attitudes toward social-media platforms. People are angry about lots of different aspects of this study, but the main thing seems to be that Facebook distorts what users see for its own ends, as if users can’t be trusted to have their own emotional responses to what their putative friends post. That Facebook seemed to have been caught by surprise by the anger some have expressed — that people were not pleased to discover that their social lives are being treated as a petri dish by Facebook so that it can make its product more profitable — shows how thoroughly companies like Facebook see their users’ emotional reactions as their work product. How you feel using Facebook is, in the view of the company’s engineers, something they made, something that has little to do with your unique emotional sensitivities or perspective. From Facebook’s point of view, you are susceptible to coding, just like its interface. Getting you to be a more profitable user for the company is only a matter of affective optimization, a matter of tweaking your programming to get you to pay more attention, spend more time on site, share more, etc.

But it turns out Facebook’s users don’t see themselves as compliant, passive consumers of Facebook’s emotional servicing, but instead have bought into the rhetoric that Facebook was a tool for communicating with their friends and family and structuring their social lives. When Facebook manipulates what users see — as it has done increasingly since the advent of its Newsfeed — the tool becomes more and more useless for communication and becomes more of a curated entertainment product, engineered to sap your attention and suck out formatted reactions that Facebook can use to better sell audiences to advertisers. It may be that people like this product, the same way people like the local news or Transformers movies. Consumers expect those products to manipulate them emotionally. But that wasn’t part of the tacit contract in agreeing to use Facebook. If Facebook basically aspires to be as emotionally manipulative as The Fault in Our Stars, its product is much harder to sell as a means of personal expression and social connection. Facebook connects you to a zeitgeist it manufactures, not to the particular, uneven, unpredictable emotional landscape made up by your unique combination of friends. Gillespie explains this well:

social media, and Facebook most of all, truly violates a century-old distinction we know very well, between what were two, distinct kinds of information services. On the one hand, we had “trusted interpersonal information conduits” — the telephone companies, the post office. Users gave them information aimed for others and the service was entrusted to deliver that information. We expected them not to curate or even monitor that content, in fact we made it illegal to do otherwise…

On the other hand, we had “media content producers” — radio, film, magazines, newspapers, television, video games — where the entertainment they made for us felt like the commodity we paid for (sometimes with money, sometimes with our attention to ads), and it was designed to be as gripping as possible. We knew that producers made careful selections based on appealing to us as audiences, and deliberately played on our emotions as part of their design. We were not surprised that a sitcom was designed to be funny, even that the network might conduct focus group research to decide which ending was funnier (A/B testing?). But we would be surprised, outraged, to find out that the post office delivered only some of the letters addressed to us, in order to give us the most emotionally engaging mail experience.

Facebook takes our friends’ efforts to communicate with us and turns them into an entertainment product meant to make Facebook money.

Facebook’s excuse for filtering our feed is that users can’t handle the unfiltered flow of all their friends’ updates. Essentially, we took social media and massified it, then we needed Facebook to rescue us and restore the order we have always counted on editors, film and TV producers, A&R professionals and the like to provide for us. Our aggregate behavior, from the point of view of a massive network like Facebook’s, suggests we want to consume a distilled average out of our friends’ promiscuous sharing; that’s because from a data-analysis perspective, we have no particularities or specificity — we are just a set of relations, of likely matches and correspondences to some set of the billion other users. The medium massifies our tastes.

Facebook has incentive to make us feel like consumers of its service because that may distract us from the way in which our contributions to the network constitute unwaged labor. Choice is work, though we live in an ideological miasma that represents it as ever and always a form of freedom. In The New Way of the World, Dardot and Laval identify this as the quintessence of neoliberalist subjectivity: “life is exclusively depicted as the result of individual choices,” and the more choices we make, the more control we supposedly have over our lives. But those choices are structured not only by social contexts that exceed individual management but by entities like Facebook that become seen as part of the unchangeable infrastructure of contemporary life. “Neoliberal strategy consisted, and still consists, in constantly and systematically guiding the conduct of individuals as if they were always and everywhere engaged in relations of transaction and competition in a market,” Dardot and Laval write. Facebook has fashioned itself into a compelling implementation of that strategy. Its black-box algorithms induce and naturalize competition among users for each other’s attention, and its atomizing interface nullifies the notion of shared experience, collective subjectivity. The mood-manipulation study is a clear demonstration, as Cameron Tonkinwise noted on Twitter, that “There’s my Internet and then yours. There’s no ‘The Internet.’” Everyone using Facebook sees a window on reality customized for them, meant for maximal manipulation.

Not only does Facebook impose interpersonal competition under the rubric of sharing, it also imposes choice as continual A/B testing — which could be seen as the opposite of rational choice but, from the point of view of capital, it is its perfection. Without even intending it, you express a preference that has already been translated into useful market data to benefit a company, which is, of course, the true meaning of “rational”: profitable. You assume the risks involved in the choice without realizing it. Did Facebook’s peppering your feed with too much happiness make you incredibly depressed? Who cares? Facebook got the information it sought from your response within the site.

A/B testing, the method used in the mood-manipulation study, is a matter of slotting consumers into control groups without telling them and varying some key variables to see if it instigates sales or prompts some other profitable behavior. It is a way of harvesting users’ preferences as uncompensated market research. A/B testing enacts an obligation to choose by essentially choosing for you and tracking how you respond to your forced choice. It lays bare the phoniness of the rhetoric of consumer empowerment through customization — in the end companies like Facebook treat choice not as an expression of autonomy but as a product input that can be voluntary or forced, and the meaning of choice is not your pleasure but the company’s profit. If your preferences about Facebook’s interface compromise its profitability, you will be forced to make different choices and reap what “autonomy” you can from those.
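The mechanics being described are simple enough to sketch - silently split users into groups, vary one thing, and compare a profit-relevant metric. The toy below is my own invention, with made-up numbers, and is not Facebook’s code.

```python
# A bare-bones sketch of the A/B mechanics described above: users are silently
# assigned to groups, one variable differs, and the platform compares whatever
# metric it cares about. All names and numbers here are invented.

import random

def assign_group(user_id: int) -> str:
    # Silent assignment - the user is never asked and never told.
    return "A" if user_id % 2 == 0 else "B"

def feed_variant(user_id: int) -> str:
    # Group A gets the usual feed; group B gets the experimentally filtered one.
    return "normal feed" if assign_group(user_id) == "A" else "filtered feed"

def minutes_on_site(user_id: int, rng: random.Random) -> float:
    # Simulated outcome metric; the made-up "effect" is a 3-minute bump for B.
    return 20 + rng.gauss(0, 5) + (3 if assign_group(user_id) == "B" else 0)

rng = random.Random(0)
totals = {"A": [], "B": []}
for user_id in range(10_000):
    totals[assign_group(user_id)].append(minutes_on_site(user_id, rng))

for group, values in totals.items():
    print(group, round(sum(values) / len(values), 2))
# Whichever variant nudges the metric upward wins; the "choice" was made for you.
```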

That would seem to run against the neoliberal strategy of using subjects’ consciousness of “free” choice to control them. But as Laval and Dardot point out, “the expansion of evaluative technology as a disciplinary mode rests on the fact that the more individual calculators are supposed to be free to choose, the more they must be monitored and evaluated to obviate their fundamental opportunism and compel them to identify their interests with the organizations employing them.” Hopefully the revelation of the mood-manipulation study will remind everyone that Facebook employs its users in the guise of catering to them.

Link: Google and the Trolley Problem

I expect that in a few years autonomous cars will not only be widely used but mandatory. The vast majority of road accidents are caused by driver error, and when we see how much death and injury can be reduced by driverless cars, we will rapidly decide that humans should no longer be left in charge.

This gives rise to an interesting philosophical challenge. Somewhere in Mountain View, programmers are grappling with writing the algorithms that will determine the behaviour of these cars. These algorithms will decide what the car will do when the lives of the passengers in the car, pedestrians and other road users are at risk.

In 1942, the science fiction author Isaac Asimov proposed Three Laws of Robotics. These are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If the cars obey the Three Laws, then the algorithm cannot by action or inaction put the interests of the car above the interests of a human. But what if there are choices to be made between the interests of different people?

In 1967, the philosopher Philippa Foot posed what became known as “The Trolley Problem”. Suppose you are the driver of a runaway tram (or “trolley car”) and you can only steer from one narrow track on to another; five men are working on the track you are on, and there is one man on the other; anyone on the track that the tram enters is bound to be killed. Should you allow the tram to continue on its current track and plough into the five people, or do you deliberately steer the tram onto the other track, so leading to the certain death of the other man?

Being a utilitarian, I find the trolley problem straightforward. It seems obvious to me that the driver should switch tracks, saving five lives at the cost of one. But many people do not share that intuition: for them, the fact that switching tracks requires an action by the driver makes it more reprehensible than allowing five deaths to happen through inaction.

If it were a robot in the driver’s cab, then Asimov’s Three Laws wouldn’t tell the robot what to do. Either way, humans will be harmed, whether by action (one man) or inaction (five men). So the First Law will inevitably be broken. What should the robot be programmed to do when it can’t obey the First Law?

This is no longer hypothetical: an equivalent situation could easily arise with a driverless car. Suppose a group of five children runs out into the road, and the car calculates that they can be avoided only by mounting the pavement and killing a single pedestrian walking there. How should the car be programmed to respond?

There are many variants on the Trolley Problem (analysed by Judith Jarvis Thomson), most of which will have to be reflected in the cars’ algorithms one way or another. For example, suppose a car finds on rounding a corner that it must either drive into an obstacle, leading to the certain death of its single passenger (the car owner), or it must swerve, leading to the death of an unknown pedestrian. Many human drivers would instinctively plough into the pedestrian to save themselves. Should the car mimic the driver and put the interests of its owner first? Or should it always protect the interests of the stranger? Or should it decide who dies at random? (Would you buy a car programmed to put the interests of strangers ahead of the passenger, other things being equal?)

One option is to let the market decide: I can buy a utilitarian car, while you might prefer the deontological model. Is it a matter of religious freedom to let people drive a car whose algorithm reflects their ethical choices?

Perhaps the normal version of the car will be programmed with an algorithm that protects everyone equally and displays advertisements to the passengers, while wealthy people will be able to buy the ‘premium’ version that protects its owner at the expense of other road users. (This is not very different to choosing to drive an SUV, which protects the people inside the car at the expense of the people outside it.)

A related set of problems arises with the possible advent of autonomous drones to be used in war, in which weapons are not only pilotless but deploy their munitions using algorithms rather than human intervention. I think it possible that autonomous drones will eventually make better decisions than soldiers – they are less likely to act in anger, for example – but the algorithms they use will also require careful scrutiny.

Asimov later added Law Zero to his Three Laws: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” This deals with one variant on the Trolley Problem (“Is it right to kill someone to save the rest of humanity?”).  But it doesn’t answer the basic Trolley Problem, in which humanity is not at stake.  I suggest a more general Law Zero, which is consistent with Asimov’s version but which provides answers to a wider range of problems: “A robot must by action or inaction do the greatest good to the greatest number of humans, treating all humans, present and future, equally”.  Other versions of Law Zero would produce different results.
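
One way to make this generalized Law Zero concrete is as a decision rule over the actions available to the vehicle: estimate the harm each action would cause, weight every human equally, and pick the minimum. The sketch below is purely illustrative, with made-up numbers and no connection to any manufacturer’s actual software; it simply encodes the utilitarian reading argued for above.

```python
# Hypothetical "Law Zero" decision rule: choose the action with the least expected harm,
# treating all humans (passengers, pedestrians, bystanders) equally.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_deaths: float    # estimated fatalities if this action is taken
    expected_injuries: float  # estimated non-fatal injuries

def law_zero_choice(actions: list[Action], injury_weight: float = 0.1) -> Action:
    """Return the action minimizing expected deaths plus weighted injuries."""
    return min(actions, key=lambda a: a.expected_deaths + injury_weight * a.expected_injuries)

# The basic trolley case from above: stay on course (five deaths) or switch (one death).
options = [
    Action("continue on current track", expected_deaths=5.0, expected_injuries=0.0),
    Action("steer onto the other track", expected_deaths=1.0, expected_injuries=0.0),
]
print(law_zero_choice(options).name)  # -> "steer onto the other track"
```

The ‘premium’ owner-first model discussed earlier would amount to nothing more than adding an extra weight for the car’s occupants inside that harm calculation, which is precisely why leaving the choice of algorithm to the market is so uncomfortable.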

Whatever we decide, we will need to decide soon. Driverless cars are already on our streets. The Trolley Problem is no longer purely hypothetical, and we can’t leave it to Google to decide. And perhaps getting our heads around these questions about the algorithms for driverless cars will help establish some principles that will have wider application in public policy.

Link: The Melancholy of Subculture Society

If you crack open some of the mustier books about the Internet—you know the ones I’m talking about, the ones which invoke Roland Barthes and discuss the sexual transgressing of MUDs—one of the few still relevant criticisms is the concern that the Internet, by uniting small groups, will divide larger ones.

Surfing alone

You may remember this as the Bowling Alone thesis applied to the Internet; it got some traction in the late 1990s. The basic idea is: electronic entertainment devices grow in sophistication and inexpensiveness as the years pass, until by the 1980s and 1990s they have spread across the globe and devoured multiple generations of children; these devices are more pernicious than traditional geeky fare inasmuch as they are often best pursued solo. Spending months mastering Super Mario Bros., all alone, is a bad way to grow up normal.

And then there were none

The 4- or 5-person Dungeons & Dragons party (with a dungeon master) gives way to the classic arcade with its heated duels and one-upmanship; the arcade gives way to the flickering console in the bedroom, with one playing Final Fantasy VII alone. The increased graphical realism, the more ergonomic controllers, the introduction of genuinely challenging AI techniques… Trend after trend was rendering a human opponent unnecessary. And gamer after gamer was now playing alone.

Perhaps, the critic says, the rise of the Internet has ameliorated that distressing trend—the trends favored no connectivity at first, but then there was finally enough surplus computing power and bandwidth for massive connectivity to become the order of the day.

It is much more satisfying and social to play MMORPGs on your PC than single-player RPGs, much more satisfying to kill human players in Halo matches than alien AIs. The machines finally connect humans to humans, not human to machine. We’re forced to learn some basic social skills, to maintain some connections. We’re no longer retreating into our little cocoons, interacting with no humans.

Welcome to the N.H.K.!

But, the critic continues, things still are not well. We are still alienated from one another. The rise of the connected machines still facilitates withdrawal and isolation. It presents the specter of the hikikomori, the person who ceases to exist in the physical realm as much as possible. It is a Japanese term, of course. They are 5 years further into our future than we are (or perhaps one should say, were). Gibson writes, back in 2001:

The Japanese seem to the rest of us to live several measurable clicks down the time line. The Japanese are the ultimate Early Adopters, and the sort of fiction I write behooves me to pay serious heed to that. If you believe, as I do, that all cultural change is essentially technologically driven, you pay attention to the Japanese. They’ve been doing it for more than a century now, and they really do have a head start on the rest of us, if only in terms of what we used to call future shock (but which is now simply the one constant in all our lives).

Gibson also discusses the Mobile Girl and text messaging; that culture began really showing up in America around 2005—Sidekicks, Twitter, etc. You can do anything with a cellphone: order food, do your job, read & write novels, maintain a lively social life, engage in social status envy (She has a smaller phone, and a larger collection of collectibles on her cellphone strap! OMG!)… Which is just another way of saying you can do anything without seeing people, just by writing digital messages. (And this in a country with one of the most undigitizable writing systems in existence!)

The hikikomori withdraws from all personal contact. The hikikomori does not hang out at the local pub, swilling down the brewskis as everyone cheers on the home team. The hikikomori is not gossiping at the rotary club nor with the Lions or mummers or Veterans or Knights. Hikikomori do none of that. They aren’t working, they aren’t hanging out with friends.

The paradoxical solitude and omnipotence of the otaku, the new century’s ultimate enthusiast: the glory and terror inherent in the absolute narrowing of personal bandwidth. —William Gibson, Shiny Balls of Mud (Tate, 2002)

So what are they doing with their 16 waking hours a day?

Opting out

But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing…the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode. We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way up close and personal profiles of professional athletes strain so hard to find evidence of a rounded human life—outside interests and activities, values beyond the sport. We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small…[Tennis player Michael] Joyce is, in other words, a complete man, though in a grotesquely limited way…Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well. — David Foster Wallace, The String Theory (Esquire, July 1996)

They’re not preoccupied with our culture - they’re participating in their own subculture. It’s the natural progression of the otaku. They are fighting on Azeroth, or fiercely pursuing their dojinshi career, or… There are many subcultures linked and united by the Internet, for good and ill. For every charitable or benevolent subculture (e.g. free software) there is one of mixed benefits (World of Warcraft), and one outright harmful (e.g. pro-eating-disorder communities, child pornography).

The point the critic wants to make is that life is short and a zero-sum game. You lose a third of the day to sleep, another third to making a living, and now you’ve little left. To be really productive, you can’t divide your energies across multiple cultures—you can’t be truly successful in mainstream culture, and at the same time be able to devote enough effort in the field of, say, mechanical models, to be called an Otaking. A straddler takes onto his head the overhead of learning and participating in both, and receives no benefits (he will suffer socially in the esteem of the normals, and will be able to achieve little in his hobby due to lack of time and a desire to not go overboard).

The otaku & hikikomori recognizes this dilemma and he chooses—to reject normal life! He rejects life in the larger culture for his subculture. It’s a simple matter of comparative advantage; it’s easier to be a big fish in a small pond than in a large one.

The bigger screen

Have you ever woken up from a dream that was so much more pleasant than real life that you wish you could fall back to sleep and return to the dream?…For some, World of Warcraft is like a dream they don’t have to wake up from - a world better than the real world because their efforts are actually rewarded. –Half Sigma, Status, masturbation, wasted time, and WoW

EVE Online is unique in gaming in that we have always played on the same massive server in the same online universe since May 2003 when it first went live. We not only understand the harsh penalties for failure, but also how longevity and persistence is rewarded with success. When you have over 60,000 people on weekends dealing, scheming, and shooting each other it attracts a certain type of gamer. It’s not a quick fix kind of game. We enjoy building things that last, be they virtual spaceships or real life friendships that together translate into massive Empires and enduring legacies. Those of us who play understand that one man really can truly make a difference in our world. –Mark Seleene Heard, Vile Rat eulogy, 2012

As ever more opt out, the larger culture is damaged. The culture begins to fragment back into pieces. The disconnect can be profound; an American anime geek has more in common with a Japanese anime geek (who is of a different ethnicity, a different culture, a different religion, a different language…) than he does with an American involved in the evangelical Christian subculture. There is essentially no common ground - our 2 countrymen probably can’t even agree on objective matters like governance or evolution!

With enough of these gaps, where is American or French culture? Such cultural identities take centuries to coalesce—France did not speak French until the 1900s (as The Discovery of France recounts), and Han China is still digesting and assimilating its many minorities and outlying regions. America, of course, had it relatively easy in starting with a small founder population which could just exterminate the natives.

The national identity fragments under the assault of burgeoning subcultures. At last, the critic beholds the natural endpoint of this process: the nation is some lines on a map, some laws you follow. No one particularly cares about it. The geek thinks, Meh: here, Canada, London, Japan, Singapore—as long as FedEx can reach me and there’s a good Internet connection, what’s the difference? (Nor are the technically-inclined alone in this.)

You can test this yourself. Tell yourself—the country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled. If your mental reply goes something like, Why, what’s so special about the USA? It’s not particularly economically or politically free, it’s not the only civilized English-speaking country, it’s not the wealthiest…, then you are headed down the path of opting out.

This is how the paradox works: the Internet breaks the larger culture by letting members flee to smaller subcultures. And the critics think this is bad. They like the broader culture, and deep down, furries and latex fetishists really bother them. They just plain don’t like those weirdo deviants.

But I can get a higher score!

In the future, everyone will be world-famous for 15 minutes.

Let’s look at another angle.

Monoculture

Irony has only emergency use. Carried over time, it is the voice of the trapped who have come to enjoy their cage.

One can’t opt out of culture. There is no view from nowhere. To a great extent, we are our cultural artifacts—our possessions, our complexes of memes, our habits and objects of disgust are all cultural. You are always part of a culture.

Suppose there were only 1 worldwide culture, with no subcultures. The overriding obsession of this culture will be… let’s make it money. People are absolutely obsessed with money—how it is made, acquired, degraded, etc. More importantly, status is defined just by how much you have earned in your life; in practice, tie-breakers include how fast you made it, what circumstances you made it in (everyone admires a person who became a billionaire in a depression more than a good-times billionaire, in the same way we admire the novelist in the freezing garret more than the comfortable academic), and so on.

This isn’t too absurd a scenario: subjects feed on themselves and develop details and complexity as effort is invested in them. Money could well absorb the collective efforts of 7 billion people—already many people act just this way.

But what effect does this have on people? I can tell you: the average person is going to be miserable. If everyone genuinely buys into this culture, then they have to be. Their talents at piano playing, or cooking, or programming, or any form of artistry or scholarly pursuit are denigrated and count for naught. The world has become too big - it did not use to be so big, nor people so powerless over what is going on:

"Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole.

Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its sense does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes….”

You can’t make a mark on it unless there are almost as many ways to make marks as there are persons.

To put it another way: women suffer enough from comparing themselves to media images. If you want a vision of this future, imagine everyone being an anorexic teenager who hates her body—forever.

We all value social esteem. We need to know somebody thinks well of us. We’re tribal monkeys; ostracism means death.

Jaron Lanier: I’d like to hypothesize one civilizing force, which is the perception of multiple overlapping hierarchies of status. I’ve observed this to be helpful in work dealing with rehabilitating gang members in Oakland. When there are multiple overlapping hierarchies of status there is more of a chance of people not fighting their superior within the status chain. And the more severe the imposition of the single hierarchy in people’s lives, the more likely they are to engage in conflict with one another. Part of America’s success is the confusion factor of understanding how to assess somebody’s status.

Steven Pinker: That’s a profound observation. There are studies showing that violence is more common when people are confined to one pecking order, and all of their social worth depends on where they are in that hierarchy, whereas if they belong to multiple overlapping groups, they can always seek affirmations of worth elsewhere. For example, if I do something stupid when I’m driving, and someone gives me the finger and calls me an asshole, it’s not the end of the world: I think to myself, I’m a tenured professor at Harvard. On the other hand, if status among men in the street was my only source of worth in life, I might have road rage and pull out a gun. Modernity comprises a lot of things, and it’s hard to tease them apart. But I suspect that when you’re not confined to a village or a clan, and you can seek your fortunes in a wide world, that is a pacifying force for exactly that reason.

Think of the people you know. How many of them can compete on purely financial grounds? How many can compare to the chimps at the top of the financial heap without feeling like an utter failure, a miserable loser? Not many. I can’t think of anyone I know who wouldn’t be at least a little unhappy. Some of them are pretty well off, but it’s awfully hard to compare with billionaires in their department. There’s no way to prove that this version of subcultures is the right one (perhaps fragmenting the culture fragments the possible status), but when I look at simple models, this version seems plausible to me and to explain some deep trends like monogamy.

Subcultures set you free

If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself. Enjoy your achievements as well as your plans. Keep interested in your own career, however humble; it is a real possession in the changing fortunes of time.

Having a society in which an artist can mingle as a social equal with the billionaire, the Nobel scientist and the philanthropist is fundamental to our mental health! If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand.

It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and a cat may look at a king.

Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.


Technological idolatry is the most ingenuous and primitive of the three [higher forms of idolatry]; for its devotees (…) believe that their redemption and liberation depend upon material objects - in this case gadgets. Technological idolatry is the religion whose doctrines are promulgated, explicitly or by implication, in the advertisement pages of our newspapers and magazines - the source, we may add parenthetically, from which millions of men, women and children in the capitalistic countries derive their working philosophy of life. (…) So whole-hearted is the modern faith in technological idols that (despite all the lessons of mechanized warfare) it is impossible to discover in the popular thinking of our time any trace of the ancient and profoundly realistic doctrine of hubris and inevitable nemesis. There is a very general belief that, where gadgets are concerned, we can get something for nothing - can enjoy all the advantages of an elaborate, top-heavy and constantly advancing technology without having to pay for them by any compensating disadvantages.
— Aldous Huxley, The Perennial Philosophy

Link: Technology and Consumership

Today’s media, combined with the latest portable devices, have pushed serious public discourse into the background and hauled triviality to the fore, according to media theorist Arthur W. Hunt III. And the Jeffersonian notion of citizenship has given way to modern consumership.

Almantas Samalavicius: In your recently published book Surviving Technopolis, you discuss a number of important and overlapping issues that threaten the future of societies. One of the central themes you explore is the rise, dominance and consequences of visual imagery in public discourse, which you say undermines a more literate culture of the past. This tendency has been outlined and questioned by a large and growing number of social thinkers (Marshall McLuhan, Walter Ong, Jacques Ellul, Ivan Illich, Neil Postman and others). What do you see as most culturally threatening in this shift to visual imagery?

Arthur W. Hunt III: The shift is technological and moral. The two are related, as Ellul has pointed out. Computer-based digital images stem from an evolution of other technologies beginning with telegraphy and photography, both appearing in the middle of the nineteenth century. Telegraphy trivialized information by allowing it to come to us from anywhere and in greater volumes. Photography de-contextualized information by giving us an abundance of pictures disassociated from the objects from which they came. Cinema magnified Aristotle’s notion of spectacle, which he claimed to be the least artistic element in Poetics. Spectacle in modern film tends to diminish all other elements of drama (plot, character, dialogue and so on) in favour of the exploding Capitol building. Radio put the voice of both the President and the Lone Ranger into our living rooms. Television was the natural and powerful usurper of radio and quickly became the nucleus of the home, a station occupied by the hearth for thousands of years. Then the television split in two, three or four ways so that every house member had a set in his or her bedroom. What followed was the personal computer at both home and at work. Today we have portable computers in which we watch shows, play games, email each other and gaze at ourselves like we used to look at Hollywood stars. To a large extent, these technologies are simply extensions of our technological society. They act as Sirens of distraction. They push serious public discourse into the background and pull triviality to the foreground. They move us away from the Jeffersonian notion of citizenship, replacing it with modern capitalism’s ethic of materialistic desire or “consumership”. The great danger of all this, of course, is that we neglect the polis and, instead, waste our time with bread and circuses. Accompanying this neglect is the creation of people who spend years in school yet remain illiterate, at least by the standards we used to hold out for a literate person. The trivialization spreads out into other institutions, as Postman has argued, to schools, churches and politics. This may be an American phenomenon, but many countries look to America’s institutions for guidance.

AS: Philosopher and historian Ivan Illich – one of the most radical critics of modernity and its mythology – has emphasized the conceptual difference between tools, on one hand, and technology on the other, implying that the dominance and overuse of technology is socially and culturally debilitating. Economist E.F. Schumacher urged us to rediscover the beauty of smallness and the use of more humane, “intermediate technologies”. However, a chorus of voices seems to sink in the ocean of popular technological optimism and a stubborn self-generating belief in the power of progress. Your critique contains no call to go back to the Middle Ages. Nor do you suggest that we give anything away to technological advances. Rather, you offer a sound and balanced argument about the misuses of technology and the mindscape that sacrifices tradition and human relationships on the altar of progress. Do you see any possibility of developing a more balanced approach to the role of technology in our culture? Obviously, many are aware, even if cynically, that technological progress has its downsides, but what of its upsides?

AWH: Short of a nuclear holocaust, we will not be going back to the Middle Ages any time soon. Electricity and automobiles are here to stay. The idea is not to be anti-technology. Neil Postman once said to be anti-technology is like being anti-food. Technologies are extensions of our bodies, and therefore scale, ecological impact and human flourishing become the yardstick for technological wisdom. The conventional wisdom of modern progress favours bigger, faster, newer and more. Large corporations see their purpose on earth as maximizing profits. Their goal is to get us addicted to their addictions. We can no longer afford this kind of wisdom, which is not wisdom at all, but foolishness. We need to bolster a conversation about the human benefits of smaller, slower, older and less. Europeans often understand this better than Americans, that is, they are more conscious of preserving living spaces that are functional, aesthetically pleasing and that foster human interaction. E.F. Schumacher gave us some useful phraseology to promote an economy of human scale: “small is beautiful,” “technologies with a human face” and “homecomers.” He pointed out that “labour-saving machinery” is a paradoxical term, not only because it makes us unemployed, but also because it diminishes the value of work. Our goal should be to move toward a “third-way” economic model, one of self-sufficient regions, local economies of scale, thriving community life, cooperatives, family owned farms and shops, economic integration between the countryside and the nearby city, and a general revival of craftsmanship. Green technologies – solar and wind power for example – actually can help us achieve this third way, which is actually a kind of micro-capitalism.

AS: Technologies developed by humans (e.g. television) continue to shape and sustain a culture of consumerism, which has now become a global phenomenon. As you insightfully observe in one of your essays, McLuhan, who was often misinterpreted and misunderstood as a social theorist, hailed by the television media he explored in great depth, was fully aware of its ill effects on the human personality and he therefore limited his children’s TV viewing. Jerry Mander has argued for the elimination of television altogether; nevertheless, this medium is alive and kicking and continues to promote an ideology of consumption and, what is perhaps most alarming, successfully conditioning children to become voracious consumers in a society where the roles of parents become more and more institutionally limited. Do you have any hopes for this situation? Can one expect that people will develop a more critical attitude toward these instruments, which shape them as consumers? Does social criticism of these trends play any role in an environment where the media and the virtual worlds of the entertainment industry have become so powerful?

AWH: Modern habits of consumption have created what Benjamin Barber calls an “ethos of infantilization”, where children are psychologically manipulated into early adulthood and adults are conditioned to remain in a perpetual state of adolescence. Postman suggested essentially the same thing when he wrote The Disappearance of Childhood. There have been many books written that address the problems of electronic media in stunting a child’s mental, physical and spiritual development. One of the better recent ones is Richard Louv’s Last Child in the Woods. Another one is Anthony Esolen’s Ten Ways to Destroy the Imagination of Your Child. We have plenty of books, but we don’t have enough people reading them or putting them into practice. Raising a child today is a daunting business, and maybe this is why more people are refusing to do it. No wonder Joel Bakan, a law professor at the University of British Columbia, wrote a New York Times op-ed complaining, “There is reason to believe that childhood itself is now in crisis.” The other day I was listening to the American television program 60 Minutes. The reporter was interviewing the Australian actress Cate Blanchett. I almost fell out of my chair when she starkly told the reporter, “We don’t outsource our children.” What she meant was, she does not let someone else raise her children. I think she was on to something. In most families today, both parents work outside the home. This is a fairly recent development if you consider the entire span of human history. Industrialism brought an end to the family as an economic unit. First, the father went off to work in the factory. Then, the mother entered the workforce during the last century. Well, the children could not stay home alone, so they were outsourced to various surrogate institutions. What was once provided by the home economy (oikos) – education, health care, child rearing and care of the elderly – came to be provided by the state. The rest of our needs – food, clothing, shelter and entertainment – came to be provided by the corporations. A third-way economic ordering would seek to revive the old notion of oikos so that the home can once again be a legitimate economic, educational and care-providing unit – not just a place to watch TV and sleep. In other words, the home would once again become a centre for production, not just consumption. If this ever happened, one or both parents would be at home and little Johnny and sister Jane would work and play alongside their parents.

AS: I was intrigued by your insight into forms of totalitarianism depicted by George Orwell and Aldous Huxley. Though most authors who discussed totalitarianism during the last half of the century were overtaken by the Orwellian vision and praised this as most enlightening, the alternative Huxleyan vision of a self-inflicted, joyful and entertaining totalitarian society was far less scrutinized. Do you think we are entering into a culture where “totalitarianism with a happy face” as you call it prevails? If so, what consequences you foresee?

AWH: It is interesting to note that Orwell thought Huxley’s Brave New World was implausible because he maintained that hedonistic societies do not last long, and that they are too boring. However, both authors were addressing what many other intellectuals were debating during the 1930s: what would be the social implications of Darwin and Freud? What ideology would eclipse Christianity? Would the new social sciences be embraced with as much exuberance as the hard sciences? What would happen if managerial science were infused into all aspects of life? What should we make of wartime propaganda? What would be the long-term effects of modern advertising? What would happen to the traditional family? How could class divisions be resolved? How would new technologies shape the future?

I happen to believe there are actually more similarities between Orwell’s 1984 and Huxley’s Brave New World than there are differences. Both novels have as their backstory the dilemma of living with weapons of mass destruction. The novel 1984 imagines what would happen if Hitler succeeded. In Brave New World, the world is at a crossroads. What is it to be, the annihilation of the human race or world peace through sociological control? In the end, the world chooses a highly efficient authoritarian state, which keeps the masses pacified by maintaining a culture of consumption and pleasure. In both novels, the past is wiped away from public memory. In Orwell’s novel, whoever “controls the past controls the future.” In Huxley’s novel, the past has been declared barbaric. All books published before A.F. 150 (that is, 150 years after 1908 CE, the year the first Model T rolled off the assembly line) are suppressed. Mustapha Mond, the Resident Controller in Brave New World, declares the wisdom of Ford: “History is bunk.” In both novels, the traditional family has been radically altered. Orwell draws from the Hitler Youth and the Soviets’ Young Pioneers to give us a society where the child’s loyalty to the state far outweighs any loyalty to parents. Huxley gives us a novel where the biological family does not even exist. Any familial affection is looked down upon. Everybody belongs to everybody, sexually and otherwise. Both novels give us worlds where rational thought is suppressed so that “war is peace”, “freedom is slavery” and “ignorance is strength” (1984). In Brave New World, when Lenina is challenged by Marx to think for herself, all she can say is “I don’t understand.” The heroes in both novels are malcontents who want to escape this irrationality but end up excluded from society as misfits. Both novels perceive humans as religious beings where the state recognizes this truth but channels these inclinations toward patriotic devotion. In 1984, Big Brother is worshipped. In Brave New World, the Christian cross has been cut off at the top to form the letter “T” for Technology. When engaged in the Orgy-Porgy, everyone in the room chants, “Ford, Ford, Ford.” In both novels an elite ruling class controls the populace by means of sophisticated technologies. Both novels show us surveillance states where the people are constantly monitored. Sound familiar? Certainly, as Postman tells us in his foreword to Amusing Ourselves to Death, Huxley’s vision eerily captures our culture of consumption. But how long would it take for a society to move from a happy-faced totalitarianism to one that has a mask of tragedy?

AS: Your comments on the necessity of the third way in our societies subjected to and affected by economic globalization seem to resonate with the ideas of many social thinkers I interviewed for this series. Many outstanding social critics and thinkers seem to agree that the notions of communism and capitalism have become stale and meaningless; further development of these paradigms leads us nowhere. One of your essays focuses on the old concept of “shire” and household economics. Do you believe in what Mumford called “the useful past”? And do you expect the growing movement that might be referred to as “new economics” to enter the mainstream of our economic thinking, eventually leading to changes in our social habits?

AWH: If the third way economic model ever took hold, I suppose it could happen in several ways. We will start with the most desirable way, and then move to less desirable. The most peaceful way for this to happen is for people to come to some kind of realization that the global economy is not benefiting them and start desiring something else. People will see that their personal wages have been stagnant for too long, that they are working too hard with nothing to show for it, that something has to be done about the black hole of debt, and that they feel like pawns in an incomprehensible game of chess. Politicians will hear their cries and institute policies that would allow for local economies, communities and families to flourish. This scenario is less likely to happen, because the multinationals that help fund the campaigns of politicians will not allow it. I am primarily thinking of the American reality in my claim here. Unless corporations have a change of mind, something akin to a religious conversion, we will not see them open their hearts and give away their power.

A more likely scenario is that a grassroots movement led by creative innovators begins to experiment with new forms of community that serve to repair the moral and aesthetic imagination distorted by modern society. Philosopher Alasdair MacIntyre calls this the “Benedict Option” in his book After Virtue. Morris Berman’s The Twilight of American Culture essentially calls for the same solution. Inspired by the monasteries that preserved western culture in Europe during the Dark Ages, these communities would serve as models for others who are dissatisfied with the broken dreams associated with modern life. These would not be utopian communities, but humble efforts of trial and error, and hopefully diverse according to the outlook of those who live in them. The last scenario would be to have some great crisis occur – political, economic, or natural in origin – that would thrust upon us the necessity of reordering our institutions. My father, who is in his nineties, often reminisces to me about the Great Depression. Although it was a miserable time, he speaks of it as the happiest time in his life. His best stories are about neighbours who loved and cared for each other, garden plots and favourite fishing holes. For any third way to work, a memory of the past will become very useful even if it sounds like literature. From a practical point of view, however, the kinds of knowledge that we will have to remember will include how to build a solid house, how to plant a vegetable garden, how to butcher a hog and how to craft a piece of furniture. In rural Tennessee where I live, there are people still around who know how to do these things, but they are a dying breed.

AS: The long (almost half-century) period of the Cold War has resulted in many social effects. The horrors of Communist regimes and the futility of state-planned economics, as well as the treason of western intellectuals who remained blind to the practice of Communist powers and espoused ideas of idealized Communism, have aided the ideology of capitalism and consumerism. Capitalism came to be associated with ideas of freedom, free enterprise, freedom to choose and so on. How is this legacy burdening us in the current climate of economic globalization? Do you think recent crises and new social movements have the potential to shape a more critical view (and revision) of capitalism and especially its most ugly neo-liberal shape?

AWH: Here in America liberals want to hold on to their utopian visions of progress amidst the growing evidence that global capitalism is not delivering on its promises. Conservatives are very reluctant to criticize the downsides of capitalism, yet they are not really that different in their own visions of progress in comparison to liberals. It was amusing to hear the American politician Sarah Palin describe Pope Francis’ recent declarations against the “globalization of indifference” as being “a little liberal.” The Pope is liberal? While Democrats look to big government to save them, Republicans look to big business. Don’t they realize that with modern capitalism, big government and big business are joined at the hip? The British historian Hilaire Belloc recognized this over a century ago, when he wrote about the “servile state,” a condition where an unfree majority of non-owners work for the pleasure of a free minority of owners. But getting to your question, I do think more people are beginning to wake up to the problems associated with modern consumerist capitalism. A good example of this is a recent critique of capitalism written by Daniel M. Bell, Jr. entitled The Economy of Desire: Christianity and Capitalism in a Postmodern World. Here is a religious conservative who is saying the great tempter of our age is none other than Walmart. The absurdist philosopher and Nobel Prize winner Albert Camus once said the real passion of the twentieth century was not freedom, but servitude. Jacques Ellul, Camus’s contemporary, would have agreed with that assessment. Both believed that the United States and the Soviet Union, despite their Cold War differences, had one thing in common – the two powers had surrendered to the sovereignty of technology. Camus’ absurdism took a hard turn toward nihilism, while Ellul turned out to be a kind of cultural Jeremiah. It is interesting to me that when I talk to some people about third way ideas, which actually is an old way of thinking about economy, they tell me it can’t be done, that we are now beyond all that, and that our economic trajectory is unstoppable or inevitable. This retort, I think, reveals how little freedom our system possesses. So, I can’t have a family farm? My small business can’t compete with the big guys? My wife has to work outside the home and I have to outsource the raising of my children? Who would have thought capitalism would lack this much freedom?

AS: And finally are you an optimist? Jacques Ellul seems to have been very pessimistic about us escaping from the iron cage of technological society. Do you think we can still break free?

AWH: I am both optimistic and pessimistic. In America, our rural areas are becoming increasingly depopulated. I see this as an opportunity for resettling the land – those large swaths of fields and forests that encompass about three quarters of our landmass. That is a very nice drawing board if we can figure out how to get back to it. I am also optimistic about the fact that more people are waking up to our troubling times. Other American writers that I would classify as third way proponents include Wendell Berry, Kirkpatrick Sale, Rod Dreher, Mark T. Mitchell, Bill Kauffman, Joseph Pearce and Allan Carlson. There is also a current within the American and British literary tradition, which has served as a critique of modernity. G.K. Chesterton, J.R.R. Tolkien, Dorothy Day and Allen Tate represent this sensibility, which is really a Catholic sensibility, although one does not have to be Catholic to have it. I am amazed at the popularity of novels about Amish people among American evangelical women. Even my wife reads them, and we are Presbyterians! In this country, the local food movement, the homeschool movement and the simplicity movement all seem to be pointing toward a kind of breaking away. You do not have to be Amish to break away from the cage of technological society; you only have to be deliberate and courageous. If we ever break out of the cage in the West, there will be two types of people who will lead such a movement. The first are religious people, both Catholic and Protestant, who will want to create a counter-environment for themselves and their children. The second are the old-school humanists, people who have a sense of history, an appreciation of the cultural achievements of the past, and the ability to see what is coming down the road. If Christians and humanists do nothing, and let modernity roll over them, I am afraid we face what C.S. Lewis called “the abolition of man”. Lewis believed our greatest danger was to have a technological elite – what he called The Conditioners – exert power over the vast majority so that our humanity is squeezed out of us. Of course all of this would be done in the name of progress, and most of us would willingly comply. The Conditioners are not acting on behalf of the public good or any other such ideal, rather what they want are guns, gold, and girls – power, profits and pleasure. The tragedy of all this, as Lewis pointed out, is that if they destroy us, they will destroy themselves, and in the end Nature will have the last laugh.

Link: Death Stares

By Facebook’s 10th anniversary in February 2014, the site claimed well over a billion active users. Embedded among those active accounts, however, are the profiles of the dead: nearly anyone with a Facebook account catches glimpses of digital ghosts, as dead friends’ accounts flicker past in the News Feed. As users of social media age, it is inevitable that interacting with the dead will become part of our everyday mediated encounters. Some estimates claim that 30 million Facebook profiles belong to dead users, at times making it hard to distinguish between the living and the dead online. While some profiles have been “memorialized,” meaning that they are essentially frozen in time and only searchable to Facebook friends, other accounts continue on as before.

In an infamous Canadian case, a young woman’s obituary photograph later appeared in a dating website’s advertising on Facebook. Her parents were rightly horrified by this breach of privacy, particularly because her suicide was prompted by cyberbullying following a gang rape. But digital images, once we put them out into the world on social networking platforms (or just on iPhones, as recent findings about the NSA make clear), are open to circulation, reproduction, and alteration. Digital images’ meanings can change just as easily as Snapchat photographs appear and fade. This seems less objectionable when the images being shared are of yesterday’s craft cocktail, but having images of funerals and corpses escape our control seems unpalatable.

While images of death and destruction routinely bombard us on 24-hour cable news networks, images of death may make us uncomfortable when they emerge from the private sphere, or are generated for semi-public viewing on social networking websites. As I check my Twitter feed while writing this essay, a gruesome image of a 3-year-old Palestinian girl murdered by Israeli troops has well over a thousand retweets, indicating that squeamishness about death does not extend to international news events. By contrast, when a mother of four posted photographs of her body post cancer treatments, mastectomy scars fully visible, she purportedly lost over one hundred of her Facebook friends who were put off by this display. To place carefully chosen images and text on a Facebook memorial page is one thing, but to post photographs of a deceased friend in her coffin or on her deathbed is quite another. For social media users accustomed to seeing stylized profiles, images of decay cut through the illusion of curation.

In a 2009 letter to the British Medical Journal a doctor commented on a man using a mobile phone to photograph a newly dead family member, pointing out with apparent distaste that Victorian postmortem portraits “were not candid shots of an unprepared still warm body.” He wonders, “Is the comparatively covert and instant nature of the mobile phone camera allowing people to respond to stress in a way that comforts them, but society may deem unacceptable and morbid?” While the horrified doctor saw a major discrepancy between Victorian postmortem photographs and the one his patient’s family member took, Victorian images were not always pristine. Signs of decay, illness, or struggle are visible in many of the photographs. Sickness or the act of dying, too, was depicted in these photos, not unlike the practices of deathbed tweeting and illness blogging. Even famous writers and artists were photographed on their deathbeds.

Photography has always been connected to death, both in theory and practice. For Roland Barthes, the photograph is That-has-been. To take a photo of oneself, to pose and press a button, is to declare one’s thereness while simultaneously hinting at one’s eventual death. The photograph is always “literally an emanation of the referent” and a process of mortification, of turning a subject into an object — a dead thing. Susan Sontag claimed that all photographs are memento mori, while Eduardo Cadava said that all photographs are farewells.

The perceived creepiness of postmortem photography has to do with the uncanniness of ambiguity: Is the photographed subject alive or dead? Painted eyes and artificially rosy cheeks, lifelike positions, and other additions made postmortem subjects seem more asleep than dead. Because of its ability to materialize and capture, photography both mortifies and reanimates its subjects. Not just photography, but other container technologies like phonographs and inscription tools can induce the same effects. Digital technology is another incarnation of these processes, as social networking profiles, email accounts, and blogs become new means of concretizing and preserving affective bonds. Online profiles and digital photographs share with postmortem photographs this uncanny quality of blurring the boundaries between life and death, animate and inanimate, or permanence and ephemerality.

Sharing postmortem photos or mourning selfies on social media platforms may seem creepy, but death photos were not always politely confined to such depersonalized sources as mass media. Postmortem and mourning photography were once accepted or even expected forms of bereavement, not callously dismissed as TMI. Victorians circulated images of dead loved ones on cabinet cards or cartes de visite, even if they could not reach as wide a public audience as those who now post on Instagram and Twitter. Photography historian Geoffrey Batchen notes that postmortem and mourning images were “displayed in parlors or living rooms or as part of everyday attire, these objects occupied a liminal space between public and private. They were, in other words, meant to do their work over and over again, and to be seen by both intimates and strangers.”

Victorian postmortem photography captured dead bodies in a variety of positions, including sleeping, sitting in a chair, lying in a coffin, or even standing with loved ones. Thousands of postmortem and mourning images from the 19th and early 20th centuries persist in archives and private collections, some of them bearing a striking resemblance to present day images. The Thanatos Archive in Woodinville, Washington, contains thousands of mourning and postmortem images from the 19th century. In one Civil War-era mourning photograph, a beautiful young woman in white looks at the camera, not dissimilar to the images of the coiffed young women on Selfies at Funerals. In another image, a young woman in black holds a handkerchief to her face, an almost exaggerated gesture of mourning that the comically excessive pouting found in many funeral selfies recalls. In an earlier daguerreotype, a young woman in black holds two portraits of presumably deceased men.

Batchen describes Victorian mourners as people who “wanted to be remembered as remembering.” Many posed while holding photographs of dead loved ones or standing next to their coffins. Photographs from the 19th century feature women dressed in ornate mourning clothes, staring solemnly at photographs of dead loved ones. The photograph and braided ornaments made from hair of the deceased acted as metonymic devices, connecting the mourner in a physical way to the absent loved one, while ornate mourning wear, ritual, and the addition of paint or collage elements to mourning photographs left material traces of loss and remembrance.

Because photographs were time-consuming and expensive to produce in the Victorian era, middle-class families reserved portraits for special events. With the high rate of childhood mortality, families often had only one chance to photograph their children: as memento mori. Childhood mortality rates in the United States, while still higher than many other industrialized nations, are now significantly lower, meaning that images of dead children are startling. For those who do lose children today, however, the service Now I Lay Me Down to Sleep produces postmortem and deathbed photographs of terminally ill children.

Memorial photography is no mere morbid remnant of a Victorian past. Through his ethnographic fieldwork in rural Pennsylvania, anthropologist Jay Ruby uncovered a surprising amount of postmortem photography practices in the contemporary U.S. Because of the stigma associated with postmortem photography, however, most of his informants expressed their desire to keep such photographs private or even secret. Even if these practices continue, they have moved underground. Unlike the arduous photographic process of the 19th century, which could require living subjects to sit disciplined by metal rods to keep them from blurring in the finished image, smartphones and digital photography allow images to be taken quickly or even surreptitiously. Rather than calling on a professional photographer’s cumbersome equipment, grieving family members can use their own devices to secure the shadows of dead loved ones. While wearing jewelry made of human hair is less acceptable now (though people do make their loved ones into cremation diamonds), we may instead use digital avenues to leave material traces of mourning.

Why did these practices disappear from public view? In the 19th century, mourning and death were part of everyday life but by the middle of the 20th century, outward signs of grief were considered pathological and most middle-class Americans shied away from earlier practices, as numerous funeral industry experts and theorists have argued. Once families washed and prepared their loved ones’ bodies for burial; now care of the dead has been outsourced to corporatized funeral homes.

This is partly a result of attempts to deal with the catastrophic losses of the First and Second World Wars, when proper bereavement included separating oneself from the dead. Influenced by Freudian psychoanalysis’s categorization of grief as pathological, psychologists from the 1920s through the 1950s associated prolonged grief with mental instability, advising mourners to “get over” loss. Also, with the advent of antibiotics and vaccines for once-common childhood killers like polio, the visibility of death in everyday life lessened. The changing economy and the beginnings of post-Fordism contributed as well, as care work and other forms of affective labor moved from the domicile to commercial enterprises. Jessica Mitford’s influential 1963 book, The American Way of Death, traces the movement of death care from homes to local funeral parlors to national franchises, showing how funeral directors take advantage of grieving families by selling exorbitant coffins and other death accoutrements. Secularization was also a contributing factor, as elaborate death rituals faded from public life. While death and grief reentered public discourse in the 1960s and 1970s, the medicalization of death and the growth of nursing homes and hospice centers meant that many individuals saw the dead only as prepared and embalmed corpses at wakes and open-casket funerals.

Despite this, reports of a general “death taboo” have been greatly exaggerated. Memorial traces are actually everywhere, prompting American Studies scholar Erika Doss to dub this the age of “memorial mania.” Various national traumas have produced numerous memorials, both physical and online: tactile examples like the AIDS memorial quilt, large physical structures like the 9/11 memorial, long-standing online entities like sites remembering Columbine, and more recent localized memorials dedicated to the dead on social networking websites.

But these types of memorials did not immediately normalize washing, burying, or photographing the body of a loved one. There’s a disconnect between the shiny and seemingly disembodied memorials on social media platforms and the presence of the corpse, particularly one that has not been embalmed or prepared.

Some recent movements in the mortuary world call for acknowledging the body’s decay rather than relying on disembodied forms of memorialization and remembrance. Rather than outsourcing embalming and burial to a funeral home, proponents of green funerals from organizations such as the Order of the Good Death and the Death Salon call for direct engagement with the dead body, learning to care for and even bury dead loved ones at home. The Order of the Good Death advises individuals to embrace death: “The Order is about making death a part of your life. That means committing to staring down your death fears — whether it be your own death, the death of those you love, the pain of dying, the afterlife (or lack thereof), grief, corpses, bodily decomposition, or all of the above. Accepting that death itself is natural, but the death anxiety and terror of modern culture are not.”

The practices having to do with “digital media” and death that some find unsettling — including placing QR codes on headstones, using social media websites as mourning platforms, snapping photos of dead relatives on smartphones, funeral selfies, and illness blogging or deathbed tweeting — may be seen as attempts to do just that, materializing death and mourning much as Victorian postmortem photography and mourning hair jewelry once did. Much has been made of the loss of indexicality with digital images, which replace the physical process of emanation with flattened information, but this development doesn’t obviate the relationship between photography and death. For those experiencing loss, the ability to materialize their mourning — even in digital forms — is comforting rather than macabre.