Sunshine Recorder

Link: The Lights Are On but Nobody’s Home

Who needs the Internet of Things? Not you, but the corporations that want to imprison you in their technological ecosystems

Prepare yourself. The Internet of Things is coming, whether we like it or not, apparently. Though if the news coverage — the press releases repurposed as service journalism, the breathless tech-blog posts — is to be believed, it’s what we’ve always wanted, even if we didn’t know it. Smart devices, sensors, cameras, and Internet connectivity will be everywhere, seamlessly and invisibly integrated into our lives, and they will make society more harmonious through a million small efficiencies. In this vision, the smart city isn’t plagued by deteriorating infrastructure and underfunded social services but is instead augmented with a dizzying collection of systems that ensure that nothing goes wrong. Resources will be apportioned automatically, mechanics and repair people summoned by the system’s own command. We will return to what Lewis Mumford described as a central feature of the Industrial Revolution: “the transfer of order from God to the Machine.” Now, however, the machines will be thinking for themselves, setting society’s order based on the false objectivity of computation.

According to one industry survey, 73 percent of Americans have not heard of the Internet of Things. Another consultancy forecasts $7.1 trillion in annual sales by the end of the decade. Both might be true, yet the reality is that this surveillance-rich environment will continue to be built up around us. Enterprise and government contracts have floated the industry to this point: To encourage us to buy in, sensor-laden devices will be subsidized, just as smartphones have been for years, since companies can make up the cost difference in data collection.

With the Internet of Things, promises of savings and technological empowerment are being implemented as forces of social control. In Chicago, this year’s host city for Cisco’s Internet of Things World Forum, Mayor Rahm Emanuel has used Department of Homeland Security grants to expand Chicago’s surveillance-camera system into the largest in the country, while the city’s police department, drawing on an extensive database of personal information about residents, has created a “heat list” of 400 people to be tracked for potential involvement in violent crime. In Las Vegas, new streetlights can alert passersby to disasters; they can also record video and audio of the surrounding area and track movements. Sometime this year, Raytheon plans to launch two aerostats — tethered surveillance blimps — over Washington, D.C. In typical fashion, this technology, pioneered on the battlefields of Afghanistan and Iraq, is being introduced to address a non-problem: the threat of enemy missiles launched at our capital. When they are not on the lookout for incoming munitions, the aerostats and their military handlers will be able to enjoy video coverage of the entire metropolitan area.

The ideological premise of the Internet of Things is that surveillance and data production equal a kind of preparedness. Any problem might be solved or pre-empted with the proper calculations, so it is prudent to digitize and monitor everything.

This goes especially for ourselves. The IoT promises users an unending capability to parse personal information, making each of us a statistician of the self, taking pleasure and finding reassurance in constant data triage. As with the quantified self movement, the technical ability of devices to collect and transmit data — what makes them “smart” — is treated as its own achievement; the accumulation of data is represented as its own reward. “In a decade, every piece of apparel you buy will have some sort of biofeedback sensors built in it,” the co-founder of OMsignal told Nick Bilton, a New York Times technology columnist. Bilton notes that “many challenges must be overcome first, not the least of which is price.” But convincing people they need a shirt that can record their heart rate is apparently not one of these challenges.

Vessyl, a $199 drinking cup Valleywag’s Sam Biddle mockingly (and accurately) calls “a 13-ounce, Bluetooth-enabled, smartphone-syncing, battery-powered supercup,” analyzes the contents of whatever you put in it and tracks your hydration, calories, and the like in an app. There is not much reason to use Vessyl beyond a fetish for the act of measurement. Few people perceive such a knowledge deficit about what they are drinking that they feel compelled to carry an expensive cup with them at all times. But that has not stopped Vessyl from being written up repeatedly in the press. Wired called Vessyl “a fascinating milestone … a peek into some sort of future.”

But what kind of future? And do we want it? The Internet of Things may require more than the usual dose of high-tech consumerist salesmanship, because so many of these devices are patently unnecessary. The improvements they offer to consumers — where they exist — are incremental, not revolutionary, and always come at some cost to autonomy, privacy, or security. Between stories of baby monitors being hacked, unchecked backdoors, and search engines like Shodan, which allows one to crawl through unsecured, Internet-connected devices, from traffic lights to crematoria, it’s bizarre, if not disingenuous, to treat the ascension of the Internet of Things as foreordained progress.

As if anticipating this gap between what we need and what we might be taught to need, industry executives have taken to the IoT with the kind of grandiosity usually reserved for the Singularity. Their rhetoric is similarly eschatological. “Only one percent of things that could have an IP address do have an IP address today,” said Padmasree Warrior, Cisco’s chief technology and strategy officer, “so we like to say that 99 percent of the world is still asleep.” Maintaining the revivalist tone, she proposed, “It’s up to our imaginations to figure out what will happen when the 99 percent wakes up.”

Warrior’s remarks highlight how consequential marketing, advertising, and the swaggering keynotes of executives will be in creating the IoT’s consumer economy. The world will not just be exposed to new technologies; it will be woken up, given the gift of sight, with every conceivable object connected to the network. In the same way, Nest CEO Tony Fadell, commenting on his company’s acquisition by Google, wrote that his goal has always been to create a “conscious home” — “a home that is more thoughtful, intuitive.”

On a more prosaic level, “smart” has been cast as the logical, prudent alternative to dumb. Sure, we don’t need toothbrushes to monitor our precise brushstrokes and offer real-time reports, as the Bluetooth-enabled, Kickstarter-funded toothbrush described in a recent article in The Guardian can. There is no epidemic of tooth decay that could not be helped by wider access to dental care, better diet and hygiene, and regular flossing. But these solutions are so obvious, so low-tech and quotidian, as to be practically banal. They don’t allow for the advent of an entirely new product class or industry. They don’t shimmer with the dubious promise of better living through data. They don’t allow one to “transform otherwise boring dental hygiene activities into a competitive family game.” The presumption that 90 seconds of hygiene needs competition to become interesting and worth doing is among the purest distillations of contemporary capitalism. Internet of Things devices, and the software associated with them, are frequently gamified, which is to say that they draw us into performances of productivity that enrich someone else.

In advertising from AT&T and others, the new image of the responsible homeowner is an informationally aware one. His house is always accessible and transparent to him (and to the corporations, backed by law enforcement, providing these services). The smart home, in turn, has its own particular hierarchy, in which the manager of the home’s smart surveillance system exercises dominance over children, spouses, domestic workers, and others who don’t have control of these tools and don’t know when they are being watched. This is being pushed despite the fact that violent crime has been declining in the United States for years, and those who do suffer most from crime — the poor — aren’t offered many options in the Internet of Things marketplace, except to submit to networked CCTV and police data-mining to determine their risk level.

But for gun-averse liberals, ensconced in low-crime neighborhoods, smart-home and digitized home-security platforms allow them to act out their own kind of security theater. Each home becomes a techno-castle, secured by the surveillance net.

The surveillance-laden house may rob children of essential opportunities for privacy and personal development. One AT&T video, for instance, shows a middle-aged father woken up in bed by an alert from his security system. He grabs his tablet computer and, sotto voce, tells his wife that someone’s outside. But it’s not an intruder, he says wryly. The camera cuts to show a teenage girl, on the tail end of a date, talking to a boy outside the home. Will they or won’t they kiss? Suddenly, a garish bloom of light: the father has activated the home’s outdoor lights. The teens realize they are being monitored. Back in the master bedroom, the parents cackle. To be unmonitored is to be free — free to be oneself and to make mistakes. A home ringed with motion-activated lights, sensors, and cameras, all overseen by imperious parents, would allow for little of that.

In the conventional libertarian style, the Internet of Things offloads responsibilities to individuals, claiming to empower them with data, while neglecting to address collective, social issues. And meanwhile, corporations benefit from the increased knowledge of consumers’ habits, proclivities, and needs, even learning information that device owners don’t know themselves.

Tech industry doyen Tim O’Reilly has predicted that “insurance is going to be the native business model for the Internet of Things.” To enact this business model, companies will use networked devices to pull more data on customers and employees and reward behavior accordingly, as some large corporations, like BP, have already done in partnership with health-care companies. As data sources proliferate, opportunities increase for behavioral management as well as on-the-fly price discrimination.

Through the dispersed system of mass monitoring and feedback, behaviors and cultures become standardized, directed at the algorithmic level. A British insurer called Drive Like a Girl uses in-car telemetry to track drivers’ habits. The company says that its data shows that women drive better and are cheaper to insure, so they deserve to pay lower rates. So far, perhaps, so good. Except that the European Union has instituted regulations stating that insurers can’t offer different rates based on gender, so Drive Like a Girl is using tracking systems to get around that rule, reflecting the fear of many IoT critics that vast data collection may help banks, realtors, stores, and other entities dodge the protections put in place by the Fair Credit Reporting Act, HIPAA, and other regulatory measures.

This insurer also exemplifies how algorithmic biases can become regressive social forces. From its name to its site design to how its telematics technology is implemented, Drive Like a Girl is essentializing what “driving like a girl” means — it’s safe, it’s pink, it’s happy, it’s gendered. It is also, according to this actuarial morality, a form of good citizenship. But what if a bank promised to offer loan terms to help someone “borrow like a white person,” premised on the notion that white people were associated with better loan repayments? We would call it discriminatory, question the underlying data and methodologies, and cite histories of oppression and lack of access to banking services. In automated, IoT-driven marketplaces, there is no room to take these complex sensitivities into account.

As the Internet of Things expands, we may witness an uncomfortable feature creep. When the iPhone was introduced, few thought its gyroscopes would be used to track a user’s steps, sleep patterns, or heartbeat. Software upgrades or novel apps can be used to exploit hardware’s hidden capacities, not unlike the way hackers have used vending machines and HVAC systems to gain access to corporate computer networks. To that end, many smart thermostats use “geofencing” or motion sensors to detect when people are at home, which allows the device to adjust the temperature accordingly. A company, particularly a conglomerate like Google with its fingers in many networked pies, could use that information to serve up ads on other screens or nudge users towards desired behaviors. As Jathan Sadowski has pointed out here, the relatively trivial benefit of a fridge alerting you when you’ve run out of a product could be used to encourage you to buy specially advertised items. Will you buy the ice cream for which your freezer is offering a coupon? Or will you consult your health-insurance app and decide that it’s not worth the temporary spike in your premiums?

This combination of interconnectivity and feature creep makes Apple’s decision to introduce platforms for home automation and health-monitoring seem rather cunning. Cupertino is delegating much of the work to third-party device makers and programmers — just as it did with its music and app stores — while retaining control of the infrastructure and the data passing through it. (Transit fees will be assessed accordingly.) The writer and editor Matt Buchanan, lately of The Awl, has pointed out that, in shopping for devices, we are increasingly choosing among competing digital ecosystems in which we want to live. Apple seems to have apprehended this trend, but so have two other large industry groups — the Open Interconnect Consortium and the AllSeen Alliance — with each offering its own open standard for connecting many disparate devices. Market competition, then, may be one of the main barriers to fulfilling the prophetic promise of the Internet of Things: to make this ecosystem seamless, intelligent, self-directed, and mostly invisible to those within it. For this vision to come true, you would have to give one company full dominion over the infrastructure of your life.

Whoever prevails in this competition to connect, well, everything, it’s worth remembering that while the smartphone or computer screen serves as an access point, the real work — the constant processing, assessment, and feedback mechanisms allowing insurance rates to be adjusted in real time — is done in the corporate cloud. That is also where the control lies. To wrest it back, we will need to learn to appreciate the virtues of products that are dumb and disconnected once again.

Link: The Melancholy of Subculture Society

If you crack open some of the mustier books about the Internet—you know the ones I’m talking about, the ones which invoke Roland Barthes and discuss the sexual transgressing of MUDs—one of the few still relevant criticisms is the concern that the Internet, by uniting small groups, will divide larger ones.

Surfing alone

You may remember this as the Bowling Alone thesis applied to the Internet; it got some traction in the late 1990s. The basic idea is: electronic entertainment devices grow in sophistication and inexpensiveness as the years pass, until by the 1980s and 1990s they have spread across the globe and devoured multiple generations of children; these devices are more pernicious than traditional geeky fare inasmuch as they are often best pursued solo. Spending months mastering Super Mario Bros. all alone is a bad way to grow up normal.

And then there were none

The 4 or 5 person Dungeons & Dragons party (with a dungeon master) gives way to the classic arcade with its heated duels and one-upmanship; the arcade gives way to the flickering console in the bedroom, with one playing Final Fantasy VII alone. The increased graphical realism, the more ergonomic controllers, the introduction of genuinely challenging AI techniques… Trend after trend was rendering a human opponent unnecessary. And gamer after gamer was now playing alone.

Perhaps, the critic says, the rise of the Internet has ameliorated that distressing trend—the trends favored no connectivity at first, but then there was finally enough surplus computing power and bandwidth for massive connectivity to become the order of the day.

It is much more satisfactory and social to play MMORPGs on your PC than single-player RPGs, much more satisfactory to kill human players in Halo matches than alien AIs. The machines finally connect humans to humans, not human to machine. We’re forced to learn some basic social skills, to maintain some connections. We’re no longer retreating into our little cocoons, interacting with no humans.

Welcome to the N.H.K.!

But, the critic continues, things still are not well. We are still alienated from one another. The rise of the connected machines still facilitates withdrawal and isolation. It presents the specter of the hikikomori, the person who ceases to exist in the physical realm as much as possible. It is a Japanese term, of course. They are 5 years further in our future than we are (or perhaps one should say, were). Gibson writes, back in 2001:

The Japanese seem to the rest of us to live several measurable clicks down the time line. The Japanese are the ultimate Early Adopters, and the sort of fiction I write behooves me to pay serious heed to that. If you believe, as I do, that all cultural change is essentially technologically driven, you pay attention to the Japanese. They’ve been doing it for more than a century now, and they really do have a head start on the rest of us, if only in terms of what we used to call future shock (but which is now simply the one constant in all our lives).

Gibson also discusses the Mobile Girl and text messaging; that culture began really showing up in America around 2005—Sidekicks, Twitter, etc. You can do anything with a cellphone: order food, do your job, read & write novels, maintain a lively social life, engage in social status envy (She has a smaller phone, and a larger collection of collectibles on her cellphone strap! OMG!)… Which is just another way of saying You can do anything without seeing people, just by writing digital messages. (And this in a country with one of the most undigitizable writing systems in existence!)

The hikikomori withdraws from all personal contact. The hikikomori does not hang out at the local pub, swilling down the brewskis as everyone cheers on the home team. The hikikomori is not gossiping at the rotary club nor with the Lions or mummers or Veterans or Knights. Hikikomoris do none of that. They aren’t working, they aren’t hanging out with friends.

The Paradoxical solitude and omnipotence of the otaku, the new century’s ultimate enthusiast: the glory and terror inherent of the absolute narrowing of personal bandwidth. —William Gibson, Shiny balls of Mud (TATE 2002)

So what are they doing with their 16 waking hours a day?

Opting out

But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing…the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode. We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way up close and personal profiles of professional athletes strain so hard to find evidence of a rounded human life—outside interests and activities, values beyond the sport. We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small…[Tennis player Michael] Joyce is, in other words, a complete man, though in a grotesquely limited way…Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well. — David Foster Wallace, The String Theory (July 1996 Esquire)

They’re not preoccupied with our culture - they’re participating in their own subculture. It’s the natural progression of the otaku. They are fighting on Azeroth, or fiercely pursuing their dojinshi careers, or… There are many subcultures linked and united by the Internet, for good and ill. For every charitable or benevolent subculture (e.g. free software) there is one of mixed benefit (World of Warcraft), and one outright harmful (e.g. fans of eating disorders, child pornography).

The point the critic wants to make is that life is short and a zero-sum game. You lose a third of the day to sleep, another third to making a living, and now you’ve little left. To be really productive, you can’t divide your energies across multiple cultures—you can’t be truly successful in mainstream culture, and at the same time be able to devote enough effort to the field of, say, mechanical models, to be called an Otaking. A straddler takes onto his head the overhead of learning and participating in both, and receives no benefits (he will suffer socially in the esteem of the normals, and will be able to achieve little in his hobby due to lack of time and a desire not to go overboard).

The otaku & hikikomori recognizes this dilemma and he chooses—to reject normal life! He rejects life in the larger culture for his subculture. It’s a simple matter of comparative advantage; it’s easier to be a big fish in a small pond than in a large one.

The bigger screen

Have you ever woken up from a dream that was so much more pleasant than real life that you wish you could fall back to sleep and return to the dream?…For some, World of Warcraft is like a dream they don’t have to wake up from - a world better than the real world because their efforts are actually rewarded — Half Sigma, Status, masturbation, wasted time, and WoW

EVE Online is unique in gaming in that we have always played on the same massive server in the same online universe since May 2003 when it first went live. We not only understand the harsh penalties for failure, but also how longevity and persistence is rewarded with success. When you have over 60,000 people on weekends dealing, scheming, and shooting each other it attracts a certain type of gamer. It’s not a quick fix kind of game. We enjoy building things that last, be they virtual spaceships or real life friendships that together translate into massive Empires and enduring legacies. Those of us who play understand that one man really can truly make a difference in our world. — Mark Seleene Heard, Vile Rat eulogy, 2012

As ever more opt out, the larger culture is damaged. The culture begins to fragment back into pieces. The disconnect can be profound; an American anime geek has more in common with a Japanese anime geek (who is of a different ethnicity, a different culture, a different religion, a different language…) than he does with an American involved in the evangelical Christian subculture. There is essentially no common ground - our 2 countrymen probably can’t even agree on objective matters like governance or evolution!

With enough of these gaps, where is American or French culture? Such cultural identities take centuries to coalesce—France did not speak French until the 1900s (as The Discovery of France recounts), and Han China is still digesting and assimilating its many minorities and outlying regions. America, of course, had it relatively easy in starting with a small founder population which could just exterminate the natives.

The national identity fragments under the assault of burgeoning subcultures. At last, the critic beholds the natural endpoint of this process: the nation is some lines on a map, some laws you follow. No one particularly cares about it. The geek thinks, Meh: here, Canada, London, Japan, Singapore—as long as FedEx can reach me and there’s a good Internet connection, what’s the difference? (Nor are the technically-inclined alone in this.)

You can test this yourself. Tell yourself—the country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled. If your mental reply goes something like, Why, what’s so special about the USA? It’s not particularly economically or politically free, it’s not the only civilized English-speaking country, it’s not the wealthiest…, then you are headed down the path of opting out.

This is how the paradox works: the Internet breaks the larger culture by letting members flee to smaller subcultures. And the critics think this is bad. They like the broader culture, and deep down, furries and latex fetishists really bother them. They just plain don’t like those weirdo deviants.

But I can get a higher score!

In the future, everyone will be world-famous for 15 minutes.

Let’s look at another angle.

Monoculture

Irony has only emergency use. Carried over time, it is the voice of the trapped who have come to enjoy their cage.

One can’t opt out of culture. There is no view from nowhere. To a great extent, we are our cultural artifacts—our possessions, our complexes of memes, our habits and objects of disgust are all cultural. You are always part of a culture.

Suppose there were only 1 worldwide culture, with no subcultures. The overriding obsession of this culture will be… let’s make it money. People are absolutely obsessed with money—how it is made, acquired, degraded, etc. More importantly, status is defined just by how much you have earned in your life; in practice, tie-breakers include how fast you made it, what circumstances you made it in (everyone admires a person who became a billionaire in a depression more than a good-times billionaire, in the same way we admire the novelist in the freezing garret more than the comfortable academic), and so on.

This isn’t too absurd a scenario: subjects feed on themselves and develop details and complexity as effort is invested in them. Money could well absorb the collective efforts of 7 billion people—already many people act just this way.

But what effect does this have on people? I can tell you: the average person is going to be miserable. If everyone genuinely buys into this culture, then they have to be. Their talents at piano playing, or cooking, or programming, or any form of artistry or scholarly pursuit are denigrated and count for naught. The world has become too big - it did not use to be so big, nor people so powerless over what is going on:

"Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole.

Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its sense does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes….”

You can’t make a mark on it unless there are almost as many ways to make marks as there are persons.

To put it another way: women suffer enough from comparing themselves to media images. If you want a vision of this future, imagine everyone being an anorexic teenager who hates her body—forever.

We all value social esteem. We need to know somebody thinks well of us. We’re tribal monkeys; ostracism means death.

Jaron Lanier: I’d like to hypothesize one civilizing force, which is the perception of multiple overlapping hierarchies of status. I’ve observed this to be helpful in work dealing with rehabilitating gang members in Oakland. When there are multiple overlapping hierarchies of status there is more of a chance of people not fighting their superior within the status chain. And the more severe the imposition of the single hierarchy in people’s lives, the more likely they are to engage in conflict with one another. Part of America’s success is the confusion factor of understanding how to assess somebody’s status.

Steven Pinker: That’s a profound observation. There are studies showing that violence is more common when people are confined to one pecking order, and all of their social worth depends on where they are in that hierarchy, whereas if they belong to multiple overlapping groups, they can always seek affirmations of worth elsewhere. For example, if I do something stupid when I’m driving, and someone gives me the finger and calls me an asshole, it’s not the end of the world: I think to myself, I’m a tenured professor at Harvard. On the other hand, if status among men in the street was my only source of worth in life, I might have road rage and pull out a gun. Modernity comprises a lot of things, and it’s hard to tease them apart. But I suspect that when you’re not confined to a village or a clan, and you can seek your fortunes in a wide world, that is a pacifying force for exactly that reason.

Think of the people you know. How many of them can compete on purely financial grounds? How many can compare to the chimps at the top of the financial heap without feeling like an utter failure, a miserable loser? Not many. I can’t think of anyone I know who wouldn’t be at least a little unhappy. Some of them are pretty well off, but it’s awfully hard to compare with billionaires in their department. There’s no way to prove that this version of subcultures is the right one (perhaps fragmenting the culture fragments the possible status), but when I look at simple models, this version seems plausible to me and seems to explain some deep trends like monogamy.

Subcultures set you free

If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself. Enjoy your achievements as well as your plans. Keep interested in your own career, however humble; it is a real possession in the changing fortunes of time.

Having a society in which an artist can mingle as a social equal with the billionaire, the Nobel scientist, and the philanthropist is fundamental to our mental health! If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand.

It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and a cat may look at a king.

Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.


Link: Forever Alone: Why Loneliness Matters in the Social Age

I got up and went over and looked out the window. I felt so lonesome, all of a sudden. I almost wished I was dead. Boy, did I feel rotten. I felt so damn lonesome. I just didn’t want to hang around any more. It made me too sad and lonesome.

— J.D. Salinger, The Catcher in the Rye

Loneliness was a problem I experienced most poignantly in college. In the three years I spent at Carnegie Mellon, the crippling effects of loneliness slowly pecked away at my enthusiasm for learning and for life, until I was drowning in an endless depressive haze that never completely cleared until I left Pittsburgh.

It wasn’t for lack of trying either. At the warm behest of the orientation counselors, I joined just the right number of clubs, participated in most of the dorm activities, and tried to expand my social portfolio as much as possible.

None of it worked.

To the extent that I sought out CAPS (our student psych and counseling service) for help, the platitudes they offered as advice (“Just put yourself out there!”) only served to confirm my suspicion that loneliness isn’t a very visible problem. (After all, the cure for loneliness isn’t exactly something that could be prescribed. “Have you considered transferring?” they finally suggested, after exhausting their list of thought-terminating clichés. I graduated early instead.)

As prolonged loneliness took its toll, I became very unhappy—to put it lightly—and even in retrospect I have difficulty pinpointing a specific cause. It wasn’t that I didn’t know anyone or failed to make any friends, and it wasn’t that I was alone more than I liked.

Sure, I could point my finger at the abysmally fickle weather patterns of Pittsburgh, or the pseudo-suburban bubble that envelops the campus. There might even be a correlation between my academic dissonance with computer science and my feelings of loneliness. I might also just be an extremely unlikable person.

For whatever reason (or a confluence thereof), the reality remained that I struggled with loneliness throughout my time in college.

+++

I recall a conversation with my friend Dev one particular evening on the patio of our dormitory. It was the beginning of my junior and last year at CMU, and I had just finished throwing an ice cream party for the residents I oversaw as an RA.

“Glad to be back?” he asked as he plopped down on a lawn chair beside me.

“No, not really.”

The sun was setting, and any good feelings about the upcoming semester with it. We made small talk about the school in general, as he had recently transferred, but eventually Dev asked me if I was happy there.

“No, not really.”

“Why do you think you’re so miserable here?”

“I don’t know. A lot of things, I guess. But mostly because I feel lonely. Like I don’t belong, like I can’t relate to or connect with anyone on an emotional level. I haven’t made any quality relationships here that I would look back on with any fond memories. Fuck… I don’t know what to do.”

College, at least for me, was a harrowing exercise in how helplessly debilitating, hopelessly soul-crushing, and at times life-threatening loneliness could be. It’s a problem nobody talks about, and it’s been a subject of much personal relevance and interest.

Loneliness as a Health Problem

A recent article published on Slate outlines the hidden dangers of social isolation. Chronic loneliness, as Jessica Olien discovered, poses serious health risks that impact not only mental health but physiological well-being as well.

The lack of quality social relationships in a person’s life has been linked to an increased mortality risk comparable to that from smoking and alcohol consumption, one that exceeds the influence of other risk factors like physical inactivity and obesity. It’s hard to brush off loneliness as a character flaw or an ephemeral feeling when you realize it kills more people than obesity.

Research also shows that loneliness diminishes sleep quality and impairs physiological function, in some cases reducing immune function and boosting inflammation, which increases risk for diabetes and heart disease.

Why hasn’t loneliness gotten much attention as a medical problem? Olien shares the following observation:

As a culture we obsess over strategies to prevent obesity. We provide resources to help people quit smoking. But I have never had a doctor ask me how much meaningful social interaction I am getting. Even if a doctor did ask, it is not as though there is a prescription for meaningful social interaction.

As a society we look down upon those who admit to being lonely; we cast them out and ostracize them with labels like “loners,” so that they would rather hide behind shame and doubt than speak up. This dynamic only makes it harder to devise solutions to what is clearly a larger societal issue, and it certainly calls into question the effects of culture on our perception of loneliness as a problem.

Loneliness as a Culture Problem

Stephen Fry, in a blog post titled Only the Lonely, which explains his suicide attempt last year, describes in detail his struggle with depression. His account offers a rare and candid glimpse into the reality of loneliness, which those afflicted often hide from the public:

Lonely? I get invitation cards through the post almost every day. I shall be in the Royal Box at Wimbledon and I have serious and generous offers from friends asking me to join them in the South of France, Italy, Sicily, South Africa, British Columbia and America this summer. I have two months to start a book before I go off to Broadway for a run of Twelfth Night there.

I can read back that last sentence and see that, bipolar or not, if I’m under treatment and not actually depressed, what the fuck right do I have to be lonely, unhappy or forlorn? I don’t have the right. But there again I don’t have the right not to have those feelings. Feelings are not something to which one does or does not have rights.

In the end loneliness is the most terrible and contradictory of my problems.

In the United States, approximately 60 million people, or 20% of the population, feel lonely. According to the General Social Survey, between 1985 and 2004, the number of people with whom the average American discusses important matters decreased from three to two, and the number with no one to discuss important matters with tripled.

Modernization has been cited as a reason for the intensification of loneliness in every society around the world, attributed to greater migration, smaller household sizes, and a larger degree of media consumption.

In Japan, loneliness is an even more pervasive, layered problem mired in cultural parochialisms. Gideon Lewis-Kraus pens a beautiful narrative in Harper’s in which he describes his foray into the world of Japanese co-sleeping cafés:

“Why do you think he came here, to the sleeping café?”

“He wanted five-second hug maybe because he had no one to hug. Japan is haji culture. Shame. Is shame culture. Or maybe also is shyness. I don’t know why. Tokyo people … very alone. And he does not have … ” She thought for a second, shrugged, reached for her phone. “Please hold moment.”

She held it close to her face, multitouched the screen not with thumb and forefinger but with tiny forefinger and middle finger. I could hear another customer whispering in Japanese in the silk-walled cubicle at our feet. His co-sleeper laughed loudly, then laughed softly. Yukiko tapped a button and shone the phone at my face. The screen said COURAGE.

It took an enormous effort for me to come to terms with my losing battle with loneliness and the ensuing depression at CMU, and an even greater leap of faith to reach out for help. (That it was to no avail is another story altogether.) But what is even more disconcerting to me is that the general stigma against loneliness and mental health issues, hinging on an unhealthy stress culture, makes it hard for afflicted students to seek assistance at all.

As Olien puts it, “In a society that judges you based on how expansive your social networks appear, loneliness is difficult to fess up to. It feels shameful.”

To truly combat loneliness from a cultural angle, we need to start by examining our own fears about being alone and by recognizing that loneliness is often symptomatic of our unfulfilled social needs as humans. Most importantly, we need to accept that it’s okay to feel lonely. Fry, signing off on his heartfelt post, offers this insight:

Loneliness is not much written about (my spell-check wanted me to say that loveliness is not much written about—how wrong that is) but humankind is a social species and maybe it’s something we should think about more than we do.

Loneliness as a Technology Problem

Technology, and by extension media consumption in the Internet age, adds the most perplexing (and perhaps the most interesting) dimension to the loneliness problem. As it turns out, technology isn’t necessarily helping us feel more connected; in some cases, it makes loneliness worse.

The amount of time you spend on Facebook, as a recent study found, is inversely related to how happy you feel throughout the day.

Take a moment to watch this video.

It’s a powerful, sobering reminder that our growing dependence on technology to communicate has serious social repercussions; in it, Cohen presents his central thesis:

We are lonely, but we’re afraid of intimacy, while the social networks offer us three gratifying fantasies: 1) That we can put our attention wherever we want it to be. 2) That we will always be heard. 3) That we will never have to be alone.

And that third idea, that we will never have to be alone, is central to changing our psyches. It’s shaping a new way of being. The best way to describe it is:

I share, therefore I am.

Public discourse on the cultural ramifications of technology is certainly not a recent development, and the general sentiment that our perverse obsession with sharing will be humanity’s downfall continues to echo in various forms around the web: articles proclaiming that Instagram is ruining people’s lives, the existence of a section on Reddit called cringepics where people congregate to ridicule things others post on the Internet, the increasing number of self-proclaimed “social media gurus” on Twitter, to name a few.

The signs seem to suggest we have reached a tipping point for “social” media that’s not very social on a personal level, but whether it means a catastrophic implosion or a gradual return to more authentic forms of interpersonal communications remains to be seen.

While technology has been a source of social isolation for many, it has the capacity to alleviate loneliness as well. A study funded by the online dating site eHarmony shows that couples who met online are less likely to divorce and achieve more marital satisfaction than those who met in real life.

The same model could potentially be applied to friendships, and it’s frustrating to see that there aren’t more startups leveraging this opportunity when the problem is so immediate and in need of solutions. It’s a matter of exposure and education on the truths of loneliness, and unfortunately we’re just not there yet.

+++

The perils of loneliness shouldn’t be overlooked in an increasingly hyperconnected world that often tells another story through rose-tinted lenses. Rather, the gravity of loneliness should be addressed and brought to light as a multifaceted problem, one often muted and stigmatized in our society. I learned firsthand how painfully real a problem loneliness could be, and more should be done to raise awareness of it and to help those affected.

“What do you think I should do?” I looked at Dev as the last traces of sunlight teetered over the top of Morewood Gardens. It was a rhetorical question—things weren’t about to get better.

“Find better people,” he replied.

I offered him a weak smile in return, but little did I know then how prescient those words were.

In the year that followed, I started a fraternity with some of the best kids I’d come to know (Dev included), graduated college and moved to San Francisco, made some of the best friends I’ve ever had, and never looked back, if only to remember, and remember well, that it’s never easy being lonely.

Link: Pandora's Vox

Carmen “humdog” Hermosillo’s essay Pandora’s Vox, an analysis of internet communities, remains startlingly accurate 20 years later. You may recognize parts of it from Adam Curtis’ documentary All Watched Over by Machines of Loving Grace.

When I went into cyberspace I went into it thinking that it was a place like any other place and that it would be a human interaction like any other human interaction. I was wrong when I thought that. It was a terrible mistake. 



The very first understanding that I had that it was not a place like any place and that the interaction would be different was when people began to talk to me as though I were a man. When they wrote about me in the third person, they would say ‘he.’ It interested me to have people think I was ‘he’ instead of ‘she’ and so at first I did not say anything. I grinned and let them think I was ‘he.’ This went on for a little while and it was fun but after a while I was uncomfortable. Finally I said unto them that I, humdog, was a woman and not a man. This surprised them. At that moment I realized that the dissolution of gender-category was something that was happening everywhere, and perhaps it was only just very obvious on the net. This is the extent of my homage to Gender On The Net.



I suspect that cyberspace exists because it is the purest manifestation of the mass (masse) as Jean Baudrillard described it. It is a black hole; it absorbs energy and personality and then re-presents it as spectacle. People tend to express their vision of the mass as a kind of imaginary parade of blue-collar workers, their muscle-bound arms raised in defiant salute. Sometimes in this vision they are holding wrenches in their hands. Anyway, this image has its origins in Marx and it is as Romantic as a dozen long-stemmed red roses. The mass is more like one of those faceless dolls you find in nostalgia-craft shops: limp, cute, and silent. When I say ‘cute’ I am including its macabre and sinister aspects within my definition.



It is fashionable to suggest that cyberspace is some kind of island of the blessed where people are free to indulge and express their Individuality. Some people write about cyberspace as though it were a ’60s utopia. In reality, this is not true. Major online services, like CompuServe and America Online, regularly guide and censor discourse. Even some allegedly free-wheeling (albeit politically correct) boards like the WELL censor discourse. The difference is only a matter of the method and degree. What interests me about this, however, is that to the mass, the debate about freedom of expression exists only in terms of whether or not you can say fuck or look at sexually explicit pictures. I have a quaint view that makes me think that discussing the ability to write ‘fuck’ or worrying about the ability to look at pictures of sexual acts constitutes The Least Of Our Problems surrounding freedom of expression.



Western society has a problem with appearance and reality. It wants to split them off from each other, make one more real than the other, and invest one with more meaning than it does the other. There are two people who have something to say about this: Nietzsche and Baudrillard. I invoke his or her names in case somebody thinks I made this up. Nietzsche thinks that the conflict over these ideas cannot be resolved. Baudrillard thinks that it was resolved and that this is how come some people think that communities can be virtual: we prefer simulation (simulacra) to reality. Image and simulacra exert tremendous power upon culture. And it is this tension that informs all the debates about Real and Not-Real that infect cyberspace with regards to identity, relationship, gender, discourse, and community. Almost every discussion in cyberspace, about cyberspace, boils down to some sort of debate about Truth-In-Packaging.



Cyberspace is mostly a silent place. In its silence it shows itself to be an expression of the mass. One might question the idea of silence in a place where millions of user-ids parade around like angels of light, looking to see whom they might, so to speak, consume. The silence is nonetheless present and it is most present, paradoxically at the moment that the user-id speaks. When the user-id posts to a board, it does so while dwelling within an illusion that no one is present. Language in cyberspace is a frozen landscape.



I have seen many people spill their guts on-line, and I did so myself until, at last, I began to see that I had commoditized myself. Commodification means that you turn something into a product, which has a money-value. In the nineteenth century, commodities were made in factories, which Karl Marx called ‘the means of production.’ Capitalists were people who owned the means of production, and the commodities were made by workers who were mostly exploited. I created my interior thoughts as a means of production for the corporation that owned the board I was posting to, and that commodity was being sold to other commodity/consumer entities as entertainment. That means that I sold my soul like a tennis shoe and I derived no profit from the sale of my soul. People who post frequently on boards appear to know that they are factory equipment and tennis shoes, and sometimes trade sends and email about how their contributions are not appreciated by management.

As if this were not enough, all of my words were made immortal by means of tape backups. Furthermore, I was paying two bucks an hour for the privilege of commodifying and exposing myself. Worse still, I was subjecting myself to the possibility of scrutiny by such friendly folks as the FBI: they can, and have, downloaded pretty much whatever they damn well please. The rhetoric in cyberspace is liberation-speak. The reality is that cyberspace is an increasingly efficient tool of surveillance with which people have a voluntary relationship. 




Proponents of so-called cyber-communities rarely emphasize the economic, business-minded nature of the community: many cyber-communities are businesses that rely upon the commodification of human interaction. They market their businesses by appeal to hysterical identification and fetishism no more or less than the corporations that brought us the two hundred dollar athletic shoe. Proponents of cyber-community do not often mention that these conferencing systems are rarely culturally or ethnically diverse, although they are quick to embrace the idea of cultural and ethnic diversity. They rarely address the whitebread demographics of cyberspace except when these demographics conflict with the upward-mobility concerns of white, middle class females under the rubric of orthodox academic Feminism.

Link: Twitter: First Thought, Worst Thought

It’s fascinating and horrifying to observe the spectacles of humiliation generated by social media.

One of the strange and slightly creepy pleasures that I get from using Twitter is observing, in real time, the disappearance of words from my stream as they are deleted by their regretful authors. It’s a rare and fleeting sight, this emergency recall of language, and I find it touching, as though the person had reached out to pluck his words from the air before they could set about doing their disastrous work in the world, making their author seem boring or unfunny or ignorant or glib or stupid. And whenever this happens, I find myself wanting to know what caused this sudden reversal. What were the tweet’s defects? Was it a simple typo? Was there some fatal miscalculation of humor or analysis? Was it a clumsily calibrated subtweet? What, in other words, was the proximity to disaster? I, too, have deleted the occasional tweet; I know the sudden chill of having said something misjudged or stupid, the panicked fumble to strike it from the official record of utterance, and the furtive hope that nobody had time to read it.

Any act of writing creates conditions for the author’s possible mortification. There is, I think, a trace of shame in the very enterprise of tweeting, a certain low-level ignominy to asking a question that receives no response, to offering up a witticism that fails to make its way in the world, that never receives the blessing of being retweeted or favorited. The stupidity and triviality of this worsens, rather than alleviates, the shame, adding to the experience a kind of second-order shame: a shame about the shame. My point, I suppose, is that the possibility of embarrassment is ever-present with Twitter—it inheres in the form itself unless you’re the kind of charmed (or cursed) soul for whom embarrassment is never a possibility to begin with.

It’s fascinating and horrifying to observe the spectacles of humiliation generated by social media at seemingly decreasing intervals, to witness the speed and efficiency with which individuals are isolated and subjected to mass paroxysms of ridicule and condemnation. You may remember that moment, way back in the dying days of 2013, when, in the minutes before boarding a flight to South Africa, a P.R. executive named Justine Sacco tweeted “Going to Africa. Hope I don’t get AIDS. Just kidding! I’m white.” In the twelve hours that she spent en route to Cape Town, aloft and offline, she became the unknowing subject of a kind of ruinous flash-fame: her tweet was posted on Gawker and went viral, drawing the anger and derision of thousands of people who knew only two things about her: that she was the author of this twelve-word disaster of misfired irony and that she was the director of corporate communications for the massive media conglomerate I.A.C. There was a barrage of violent misogyny, terrible in its blunt force and grim inevitability. Somebody sourced Sacco’s flight details, at which point the hashtag #HasJustineLandedYet started doing a brisk trade on Twitter. Somebody else took it upon himself to interview her father at the airport and post the details to Twitter, for the instruction and delight of the hashtag’s followers. The New York Times covered the story. Sacco touched down in Cape Town oblivious to the various ways, bizarre and very real, in which her life had changed. She was, in the end, swiftly and publicly fired.

This was not a celebrity or a politician tweeting something racist or offensive; Sacco was unknown, so this was not a case of a public reputation set off course by a single revealing misstep. This misstep was her public reputation. She will likely be remembered as “that P.R. person who tweeted that awful racist joke that time”; her identity will always be tethered to those four smugly telegraphic sentences, to the memory of how they provided a lightning rod for an electrical storm of anger about heedless white privilege and ignorant racial assumptions. Whether she was displaying these qualities or making a botched attempt at a self-reflexive joke about them—an interpretation which, intentional fallacy be damned, I find pretty plausible—didn’t, in the end, have much bearing on the affair. She became a symbol of everything that is ugly and wrong about the way white people think and don’t think about people of color, about the way the privileged of the planet think and don’t think about the poor. As Roxane Gay put it in an essay on her ambivalence about the public shaming of Sacco: “The world is full of unanswered injustice and more often than not we choke on it. When you consider everything we have to fight, it makes sense that so many people rally around something like the hashtag #HasJustineLandedYet. In this one small way, we are, for a moment, less impotent.”

As Sacco’s flight made its way south, over the heads of the people in whose name the Internet had decided she should be punished, I found myself trying to imagine what she might have been thinking. It was likely, of course, that the tweet wasn’t on her mind at all, that she was thinking about meeting her family at the arrivals lounge in Cape Town, looking forward to the Christmas holiday she was going to spend with them. But then I began imagining that she might, after all, have been thinking of her last tweet, maybe even having second thoughts about it. As early as her takeoff from Heathrow, perhaps, right as the plane broke through the surface of network signals, leaving behind the possibility of tweet-deletion, she may have realized how people would react to her joke, that it might be taken as a reflection of her own corruption or stupidity or malice. By that point, it would have been too late to do anything about it, too late to pluck her words from the air.

And, of course, I wasn’t really imagining Justine Sacco, of whom I knew and still know next to nothing but, rather, myself in her situation: the gathering panic I would feel if it had been me up there, running through the possible interpretations of the awful joke I’d just made and could not unmake—the various things, true and false, it could be taken to reveal about me.

In his strange and unsettling book “Humiliation,” the poet and essayist Wayne Koestenbaum writes about the way in which public humiliation “excites” his empathy. “By imagining what they feel, or might feel,” he writes, “I learn something about what I already feel, what I, as a human being, was born sensing: that we all live on the edge of humiliation, in danger of being deported to that unkind country.” Justine Sacco is a deportee now; I’m trying to imagine what it must be like for her there in that unkind country, those twelve words repeating themselves mindlessly over and over again in her head, how the phrase “Just kidding!”—J.K.! J.K.!—must by now have lost all meaning or have taken on a whole new significance. In this mode of trial and punishment, I sometimes think of social media as being like the terrible apparatus at the center of Kafka’s “In the Penal Colony”: a mechanism of corrective torture, harrowing the letters of the transgression into the bodies of the condemned.

The weird randomness of this sudden mutation of person into meme is, in the end, what’s so haunting. This could just as well have happened to anyone—any of the thousands of people who say awful things on Twitter every day. It’s not that Sacco didn’t deserve to be taken to task, to be scorned for the clumsiness and hurtfulness of her joke; it’s that the corrective was so radically comprehensive and obliterating, and administered with such collective righteous giddiness. This is a new form of violence, a symbolic ritual of erasure where the condemned is made to stand for a whole class of person—to be cast, as an effigy of the world’s general awfulness, into a sudden abyss of fame.

Link: Now is Not Forever: The Ancient Recent Past

Sometimes the Internet surprises us with the past or, to be more precise, with its own past. The other day my social media feed started to show the same clip over and over. It was one I had seen years before and forgotten about, resurfaced from the bottom of that overwhelming ocean of content available to us at any given moment. Why, I wondered, was it reappearing now?

That’s a hard question to answer under any circumstances. My teenage daughter regularly shows me Internet discoveries that date from the mid-2000s. To her, they are fresh; to me, a reminder of just how difficult it is to predict what the storms of the information age will turn up. In the case of the clip I started seeing again the other day, however, the reemergence seemed less than random.

It’s a two-minute feature from a San Francisco television station about the electronic future of journalism, but from way back in 1981, long before the Internet as we know it came into focus. While there is a wide range of film and television from that era readily accessible to us, much of which can be consumed without being struck dumb by its datedness — Scarface or the first Star Wars trilogy, to name two obvious examples — its surviving news broadcasts seem uncanny. Factor in the subject matter of this one, predicting a future that already feels past to us, and the effect is greatly enhanced.

The more I kept seeing this clip in my feed, though, the clearer it became that its uncanniness didn’t just derive from the original feature’s depiction of primitive modems and computer monitors — and a Lady Di hairstyle — but also from the fact that it had returned from the depths of the Internet to remind us, once more, that we did see this world coming.

The information age is doing strange things to our sense of history. If you drive in the United States, particularly in warm-weather places like California or Florida, you won’t have to look too hard to see cars from the 1980s still on the road. But a computer from that era seems truly ancient, as out of sync with our own times as a horse and buggy.

Stranger still is the feeling of datedness that pervades the Internet’s own history. For someone my daughter’s age, imagining life before YouTube is as unsettling a prospect as imagining life before indoor plumbing. And yet, even though she was only seven when the site debuted, she was already familiar with the Internet before then.

But it isn’t just young people who feel cut off from the Internet that existed prior to contemporary social media. Even though I can go on the Wayback Machine to check out sites I was visiting in the 1990s; even though I contributed to one of the first Internet publications, Bad Subjects: Political Education For Everyday Life, and can still access its content with ease; even though I know firsthand what it was like before broadband, when I would wait minutes for a single news story to load, my memories still seem to fail me. I remember, but dimly. I can recall experiences from pre-school in vivid detail, yet struggle to flesh out my Internet past from a decade ago, before I started using Gmail.

What the clip that resurfaced the other day makes clear is that history is more subjective than ever. Some parts seem to be moving at more or less the same pace they did decades or even centuries ago. But others, particularly those bound up with computer technology, appear to be moving ten or even a hundred times as fast. If you don’t believe me, try picking up the mobile phone you used in 2008.

When he was working on the Passagenwerk, his sprawling project centered on nineteenth-century Parisian shopping arcades, Walter Benjamin made special note of how outdated those proto-malls seemed, less than a century after they had first appeared. These days, the depths of the Internet are full of such places, dormant pages that unnerve us with their “ancient” character, even though they are less than a decade old.

As Mark Fisher brilliantly explains in his book Capitalist Realism, we live at a time when it is easier to imagine the end of the world than the end of capitalism. But there are plenty of people who have just as much difficulty imagining the end of Facebook, even though some of them were on MySpace and Friendster before it. That’s what makes evidence like the clip I’ve been discussing here so important. We need to be reminded that we are capable of living different lives, that we have, in fact, already lived them, so that we can turn our attention to living the lives we actually want to lead.

Link: Neil Postman on Cyberspace (1995)

Author and media scholar Neil Postman, head of the Department of Culture and Communication at New York University, encourages caution when entering cyberspace. His book Technopoly: The Surrender of Culture to Technology puts the computer in historical perspective.

Neil Postman, thank you for joining us. How do you define cyberspace?

Cyberspace is a metaphorical idea which is supposed to be the space where your consciousness is located when you’re using computer technology on the Internet, for example, and I’m not entirely sure it’s such a useful term, but I think that’s what most people mean by it.

How does that strike you, I mean, that your consciousness is located somewhere other than in your body?

Well, the most interesting thing about the term for me is that it made me begin to think about where one’s consciousness is when interacting with other kinds of media, for example, even when you’re reading, where, where are you, what is the space in which your consciousness is located, and when you’re watching television, where, where are you, who are you, because people say with the Internet, for example, it’s a little different in that you’re always interacting or most of the time with another person. And when you’re in cyberspace, I suppose you can be anyone you want, and I think as this program indicates, it’s worth, it’s worth talking about because this is a new idea and something very different from face-to-face co-presence with another human being.

Do you think this is a good thing, or a bad thing, or you haven’t decided?

Well, no, I’ve mostly—(laughing)—I’ve mostly decided that new technology of this kind or any other kind is a kind of Faustian bargain. It always gives us something important but it also takes away something that’s important. That’s been true of the alphabet and the printing press and telegraphy right up through the computer. For instance, when I hear people talk about the information superhighway, it will become possible to shop at home and bank at home and get your texts at home and get entertainment at home and so on, I often wonder if this doesn’t signify the end of any meaningful community life. I mean, when two human beings get together, they’re co-present, there is built into it a certain responsibility we have for each other, and when people are co-present in family relationships and other relationships, that responsibility is there. You can’t just turn off a person. On the Internet, you can. And I wonder if this doesn’t diminish that built-in, human sense of responsibility we have for each other. Then also one wonders about social skills; that after all, talking to someone on the Internet is a different proposition from being in the same room with someone—not in terms of responsibility but just in terms of revealing who you are and discovering who the other person is. As a matter of fact, I’m one of the few people not only that you’re likely to interview but maybe ever meet who is opposed to the use of personal computers in school because school, it seems to me, has always largely been about how to learn as part of a group. School has never really been about individualized learning but about how to be socialized as a citizen and as a human being, so that we, we have important rules in school, always emphasizing the fact that one is part of a group. And I worry about the personal computer because it seems, once again to emphasize individualized learning, individualized activity.

What images come to your mind when you, when you think about what our lives will be like in cyberspace?

Well, the, the worst images are of people who are overloaded with information which they don’t know what to do with, have no sense of what is relevant and what is irrelevant, people who become information junkies.

What do you mean? How do you mean that?

Well, the problem in the 19th century with information was that we lived in a culture of information scarcity and so humanity addressed that problem beginning with photography and telegraphy and the–in the 1840s. We tried to solve the problem of overcoming the limitations of space, time, and form. And for about a hundred years, we worked on this problem, and we solved it in a spectacular way. And now, by solving that problem, we created a new problem, that people have never experienced before, information glut, information meaninglessness, information incoherence. I mean, if there are children starving in Somalia or any other place, it’s not because of insufficient information. And if crime is rampant in the streets in New York and Detroit and Chicago or wherever, it’s not because of insufficient information. And if people are getting divorced and mistreating their children and their sexism and racism are blights on our social life, none of that has anything to do with inadequate information. Now, along comes cyberspace and the information superhighway, and everyone seems to have the idea that, ah, here we can do it; if only we can have more access to more information faster and in more diverse forms at long last, we’ll be able to solve these problems. And I don’t think it has anything to do with it.

Do you believe that this–that the fact that people are more connected globally will lead to a greater degree of homogenization of the global society?

Here’s the puzzle about that, Charlayne. When everyone was–when McLuhan talked about the world becoming a global village and, and when people ask, as you did, about how connections can be made, everyone seemed to think that the world would become in, in some good sense more homogenous. But we seem to be experiencing the opposite. I mean, all over the world, we see a kind of reversion to tribalism. People are going back to their tribal roots in order to find a sense of identity. I mean, we see it in Russia, in Yugoslavia, in Canada, in the United States, I mean, in our own country. Why is it that every group now not only is more aware of its own grievances but seems to want its own education? You know, we want an Afro-centric curriculum and a Korean-centric curriculum, and a Greek-centered curriculum. What is it about all this globalization of communication that is making people return to more–to smaller units of identity? It’s a puzzlement.

Well, what do you think the people, society should be doing to try and anticipate these negatives and be able to do something about them?

I think they should–everyone should be sensitive to certain questions. For example, when a new–confronted with a new technology, whether it’s a cellular phone or high definition television or cyberspace or Internet, the question–one question should be: What is the problem to which this technology is a solution? And the second question would be: Whose problem is it actually? And the third question would be: If there is a legitimate problem here that is solved by the technology, what other problems will be created by my using this technology? About six months ago, I bought a new Honda Accord, and the salesman told me that it had cruise control. And I asked him, “What is the problem to which cruise control is the solution?” By the way, there’s an extra charge for cruise control. And he said no one had ever asked him that before but then he said, “Well, it’s the problem of keeping your foot on the gas.” And I said, “Well, I’ve been driving for 35 years. I’ve never found that to be a problem.” I mean, am I using this technology, or is it using me, because in a technological culture, it is very easy to be swept up in the enthusiasm for technology, and of course, all the technophiles around, all the people who adore technology and are promoting it everywhere you turn.

Well, Neil Postman, thank you for all of your cautions.

Link: The Disconnectionists

“Unplugging” from the Internet isn’t about restoring the self so much as it is about stifling the desire for autonomy that technology can inspire.

Once upon a pre-digital era, there existed a golden age of personal authenticity, a time before social-media profiles when we were more true to ourselves, when the sense of who we are was held firmly together by geographic space, physical reality, the visceral actuality of flesh. Without Klout-like metrics quantifying our worth, identity did not have to be oriented toward seeming successful or scheming for attention.

According to this popular fairytale, the Internet arrived and real conversation, interaction, identity slowly came to be displaced by the allure of the virtual — the simulated second life that uproots and disembodies the authentic self in favor of digital status-posturing, empty interaction, and addictive connection. This is supposedly the world we live in now, as a recent spate of popular books, essays, wellness guides, and viral content suggest. Yet they have hope: By casting off the virtual and re-embracing the tangible through disconnecting and undertaking a purifying “digital detox,” one can reconnect with the real, the meaningful — one’s true self that rejects social media’s seductive velvet cage.

That retelling may be a bit hyperbolic, but the cultural preoccupation is inescapable. How and when one looks at a glowing screen has generated its own pervasive popular discourse, with buzzwords like digital detox, disconnection, and unplugging to address profound concerns over who is still human, who is having true experiences, what is even “real” at all. A few examples: In 2013, Paul Miller of tech-news website The Verge and Baratunde Thurston, a Fast Company columnist, undertook highly publicized breaks from the Web that they described in intimate detail (and ultimately posted on the Web). Videos like “I Forgot My Phone” that depict smartphone users as mindless zombies missing out on reality have gone viral, and countless editorial writers feel compelled to moralize broadly about the minutiae of when one checks their phone. But what they are saying may matter less than the fact that they feel required to say it. As Diane Lewis states in an essay for Flow, an online journal about new media,

The question of who adjudicates the distinction between fantasy and reality, and how, is perhaps at the crux of moral panics over immoderate media consumption.

It is worth asking why these self-appointed judges have emerged, why this moral preoccupation with immoderate digital connection is so popular, and how this mode of connection came to demand such assessment and confession, at such great length and detail. This concern-and-confess genre frames digital connection as something personally debasing, socially unnatural despite the rapidity with which it has been adopted. It’s depicted as a dangerous desire, an unhealthy pleasure, an addictive toxin to be regulated and medicated. That we’d be concerned with how to best use (or not use) a phone or a social service or any new technological development is of course to be expected, but the way the concern with digital connection has manifested itself in such profoundly heavy-handed ways suggests in the aggregate something more significant is happening, to make so many of us feel as though our integrity as humans has suddenly been placed at risk.

+++

The conflict between the self as social performance and the self as authentic expression of one’s inner truth has roots much deeper than social media. It has been a concern of much theorizing about modernity and, if you agree with these theories, a mostly unspoken preoccupation throughout modern culture.

Whether it’s Max Weber on rationalization, Walter Benjamin on aura, Jacques Ellul on technique, Jean Baudrillard on simulations, or Zygmunt Bauman and the Frankfurt School on modernity and the Enlightenment, there has been a long tradition of social theory linking the consequences of altering the “natural” world in the name of convenience, efficiency, comfort, and safety to draining reality of its truth or essence. We are increasingly asked to make various “bargains with modernity” (to use Anthony Giddens’s phrase) when encountering and depending on technologies we can’t fully comprehend. The globalization of countless cultural dispositions had replaced the pre-modern experience of cultural order with an anomic, driftless lack of understanding, as described by such classical sociologists as Émile Durkheim and Georg Simmel and in more contemporary accounts by David Riesman (The Lonely Crowd), Robert Putnam (Bowling Alone), and Sherry Turkle (Alone Together).

I drop all these names merely to suggest the depth of modern concern over technology replacing the real with something unnatural, the death of absolute truth, of God. This is especially the case in identity theory, much of which is founded on the tension between seeing the self as having some essential soul-like essence versus its being a product of social construction and scripted performance. From Martin Heidegger’s “they-self,” Charles Horton Cooley’s “looking glass self,” George Herbert Mead’s discussion of the “I” and the “me,”  Erving Goffman’s dramaturgical framework of self-presentation on the “front stage,” Michel Foucault’s “arts of existence,” to Judith Butler’s discussion of identity “performativity,” theories of the self and identity have long recognized the tension between the real and the pose. While so often attributed to social media, such status-posturing performance — “success theater” — is fundamental to the existence of identity.

These theories also share an understanding that people in Western society are generally uncomfortable admitting that who they are might be partly, or perhaps deeply, structured and performed. To be a “poser” is an insult; instead common wisdom is “be true to yourself,” which assumes there is a truth of your self. Digital-austerity discourse has tapped into this deep, subconscious modern tension, and brings to it the false hope that unplugging can bring catharsis.

The disconnectionists see the Internet as having normalized, perhaps even enforced, an unprecedented repression of the authentic self in favor of calculated avatar performance. If we could only pull ourselves away from screens and stop trading the real for the simulated, we would reconnect with our deeper truth. In describing his year away from the Internet, Paul Miller writes,

‘Real life,’ perhaps, was waiting for me on the other side of the web browser … It seemed then, in those first few months, that my hypothesis was right. The internet had held me back from my true self, the better Paul. I had pulled the plug and found the light.

Baratunde Thurston writes,

my first week sans social media was deeply, happily, and personally social […] I bought a new pair of glasses and shared my new face with the real people I spent time with.

Such rhetoric is common. Op-eds, magazine articles, news programs, and everyday discussion frame logging off as reclaiming real social interaction with your real self and other real people. The R in IRL. When the digital is misunderstood as exclusively “virtual,” then pushing back against the ubiquity of connection feels like a courageous re-embarking into the wilderness of reality. When identity performance can be regarded as a by-product of social media, then we have a new solution to the old problem of authenticity: just quit. Unplug — your humanity is at stake! Click-bait and self-congratulation in one logical flaw.

The degree to which inauthenticity seems a new, technological problem is the degree to which I can sell you an easy solution. Reducing the complexity of authenticity to something as simple as one’s degree of digital connection affords a solution the self-help industry can sell. Researcher Laura Portwood-Stacer describes this as that old “neoliberal responsibilization we’ve seen in so many other areas of ‘ethical consumption,’ ” turning social problems into personal ones with market solutions and fancy packaging.

Social media surely change identity performance. For one, they make the process more explicit. The fate of having to live “onstage,” aware of being an object in others’ eyes rather than a special snowflake of spontaneous, uncalculated bursts of essential essence, is more obvious than ever — even perhaps for those already highly conscious of such objectification. But that shouldn’t blind us to the fact that identity theater is older than Zuckerberg and doesn’t end when you log off. The most obvious problem with grasping at authenticity is that you’ll never catch it, which makes the social media confessional both inevitable and its own kind of predictable performance.

To his credit, Miller came to recognize by the end of his year away from the Internet that digital abstinence made him no more real than he always had been. Despite his great ascetic effort, he could not reach escape velocity from the Internet. Instead he found an “inextricable link” between life online and off, between flesh and data, imploding these digital dualisms into a new starting point that recognizes one is never entirely connected or disconnected but deeply both. Calling the digital performed and virtual to shore up the perceived reality of what is “offline” is one more strategy to renew the reification of old social categories like the self, gender, sexuality, race and other fictions made concrete. The more we argue that digital connection threatens the self, the more durable the concept of the self becomes.

+++

The obsession with authenticity has at its root a desire to delineate the “normal” and enforce a form of “healthy” founded in supposed truth. As such, it should be no surprise that digital-austerity discourse grows a thin layer of medical pathologization. That is, digital connection has become an illness. Not only has the American Psychiatric Association looked into making “Internet-use disorder” a DSM-official condition, but more influentially, the disconnectionists have framed unplugging as a health issue, touting the so-called digital detox. For example, so far in 2013, The Huffington Post has run 25 articles tagged with “digital detox,” including “The Amazing Discovery I Made When My Phone Died,” “How a Weekly Digital Detox Changed My Life,” “Why We’re So Hooked on Technology (And How to Unplug).” A Los Angeles Times article explored whether the presence of digital devices “contaminates the purity” of Burning Man. Digital detox has even been added to the Oxford Dictionary Online. Most famous, due to significant press coverage, is Camp Grounded, which bills itself as a “digital detox tech-free personal wellness retreat.” Atlantic senior editor Alexis Madrigal has called it “a pure distillation of post-modern technoanxiety.” On its grounds the camp bans not just electronic devices but also real names, real ages, and any talk about one’s work. Instead, the camp has laughing contests.

The wellness framework inherently pathologizes digital connection as contamination, something one must confess, carefully manage, or purify away entirely. Remembering Michel Foucault’s point that diagnosing what is ill is always equally about enforcing what is healthy, we might ask what new flavor of normal is being constructed by designating certain kinds of digital connection as a sickness. Similar to madness, delinquency, sexuality, or any of the other areas whose pathologizing toward normalization Foucault traced, digitality — what is “online,” and how should one appropriately engage that distinction — has become a productive concept around which to organize the control and management of new desires and pleasures. The desire to be heard, seen, informed via digital connection in all its pleasurable and distressing, dangerous and exciting ways comes to be framed as unhealthy, requiring internal and external policing. Both the real/virtual and toxic/healthy dichotomies of digital austerity discourse point toward a new type of organization and regulation of pleasure, a new imposition of personal techno-responsibility, especially on those who lack autonomy over how and when to use technology. It’s no accident that the focus in the viral “I Forgot My Phone” video wasn’t on the many people distracted by seductive digital information but the woman who forgets her phone, who is “free” to experience life — the healthy one is the object of control, not the zombies bitten by digitality.

The smartphone is a machine, but it is still deeply part of a network of blood; an embodied, intimate, fleshy portal that penetrates into one’s mind, into endless information, into other people. These stimulation machines produce a dense nexus of desires that is inherently threatening. Desire and pleasure always contain some possibility (a possibility — it’s by no means automatic or even likely) of disrupting the status quo. So there is always much at stake in their control, in attempts to funnel this desire away from progressive ends and toward reinforcing the values that support what already exists. Silicon Valley has made the term “disruption” a joke, but there is little disagreement that the eruption of digitality does create new possibilities, for better or worse. Touting the virtue of austerity puts digital desire to work strictly in maintaining traditional understandings of what is natural, human, real, healthy, normal. The disconnectionists establish a new set of taboos as a way to garner distinction at the expense of others, setting their authentic resistance against others’ unhealthy and inauthentic being.

This explains the abundance of confessions about social media compulsion that intimately detail when and how one connects. Desire can only be regulated if it is spoken about. To neutralize a desire, it must be made into a moral problem we are constantly aware of: Is it okay to look at a screen here? For how long? How bright can it be? How often can I look? Our orientation to digital connection needs to become a minor personal obsession. The true narcissism of social media isn’t self-love but instead our collective preoccupation with regulating these rituals of connectivity. Digital austerity is a police officer downloaded into our heads, making us always self-aware of our personal relationship to digital desire.

Of course, digital devices shouldn’t be excused from the moral order — nothing should or could be. But too often discussions about technology use are conducted in bad faith, particularly when the detoxers and disconnectionists and digital-etiquette-police seem more interested in discussing the trivial differences of when and how one looks at the screen rather than the larger moral quandaries of what one is doing with the screen. But the disconnectionists’ selfie-help has little to do with technology and more to do with enforcing a traditional vision of the natural, healthy, and normal. Disconnect. Take breaks. Unplug all you want. You’ll have different experiences and enjoy them, but you won’t be any more healthy or real.

Link: Why Women Aren't Welcome on the Internet

“Ignore the barrage of violent threats and harassing messages that confront you online every day.” That’s what women are told. But these relentless messages are an assault on women’s careers, their psychological bandwidth, and their freedom to live online. We have been thinking about Internet harassment all wrong.

[…] The examples are too numerous to recount, but like any good journalist, I keep a running file documenting the most deranged cases. There was the local cable viewer who hunted down my email address after a television appearance to tell me I was “the ugliest woman he had ever seen.” And the group of visitors to a “men’s rights” site who pored over photographs of me and a prominent feminist activist, then discussed how they’d “spend the night with” us. (“Put em both in a gimp mask and tied to each other 69 so the bitches can’t talk or move and go round the world, any old port in a storm, any old hole,” one decided.) And the anonymous commenter who weighed in on one of my articles: “Amanda, I’ll fucking rape you. How does that feel?”

None of this makes me exceptional. It just makes me a woman with an Internet connection. Here’s just a sampling of the noxious online commentary directed at other women in recent years. To Alyssa Royse, a sex and relationships blogger, for saying that she hated The Dark Knight: “you are clearly retarded, i hope someone shoots then rapes you.” To Kathy Sierra, a technology writer, for blogging about software, coding, and design: “i hope someone slits your throat and cums down your gob.” To Lindy West, a writer at the women’s website Jezebel, for critiquing a comedian’s rape joke: “I just want to rape her with a traffic cone.” To Rebecca Watson, an atheist commentator, for blogging about sexism in the skeptic community: “If I lived in Boston I’d put a bullet in your brain.” To Catherine Mayer, a journalist at Time magazine, for no particular reason: “A BOMB HAS BEEN PLACED OUTSIDE YOUR HOME. IT WILL GO OFF AT EXACTLY 10:47 PM ON A TIMER AND TRIGGER DESTROYING EVERYTHING.”

A woman doesn’t even need to occupy a professional writing perch at a prominent platform to become a target. According to a 2005 report by the Pew Research Center, which has been tracking the online lives of Americans for more than a decade, women and men have been logging on in equal numbers since 2000, but the vilest communications are still disproportionately lobbed at women. We are more likely to report being stalked and harassed on the Internet—of the 3,787 people who reported harassing incidents from 2000 to 2012 to the volunteer organization Working to Halt Online Abuse, 72.5 percent were female. Sometimes, the abuse can get physical: A Pew survey reported that five percent of women who used the Internet said “something happened online” that led them into “physical danger.” And it starts young: Teenage girls are significantly more likely to be cyberbullied than boys. Just appearing as a woman online, it seems, can be enough to inspire abuse. In 2006, researchers from the University of Maryland set up a bunch of fake online accounts and then dispatched them into chat rooms. Accounts with feminine usernames incurred an average of 100 sexually explicit or threatening messages a day. Masculine names received 3.7.

There are three federal laws that apply to cyberstalking cases; the first was passed in 1934 to address harassment through the mail, via telegram, and over the telephone, six decades after Alexander Graham Bell’s invention. Since the initial passage of the Violence Against Women Act, in 1994, amendments to the law have gradually updated it to apply to new technologies and to stiffen penalties against those who use them to abuse. Thirty-four states have cyberstalking laws on the books; most have expanded long-standing laws against stalking and criminal threats to prosecute crimes carried out online.

But making quick and sick threats has become so easy that many say the abuse has proliferated to the point of meaninglessness, and that expressing alarm is foolish. Reporters who take death threats seriously “often give the impression that this is some kind of shocking event for which we should pity the ‘victims,’” my colleague Jim Pagels wrote in Slate this fall, “but anyone who’s spent 10 minutes online knows that these assertions are entirely toothless.” On Twitter, he added, “When there’s no precedent for physical harm, it’s only baseless fear mongering.” My friend Jen Doll wrote, at The Atlantic Wire, “It seems like that old ‘ignoring’ tactic your mom taught you could work out to everyone’s benefit…. These people are bullying, or hope to bully. Which means we shouldn’t take the bait.” In the epilogue to her book The End of Men, Hanna Rosin—an editor at Slate—argued that harassment of women online could be seen as a cause for celebration. It shows just how far we’ve come. Many women on the Internet “are in positions of influence, widely published and widely read; if they sniff out misogyny, I have no doubt they will gleefully skewer the responsible sexist in one of many available online outlets, and get results.”

So women who are harassed online are expected to either get over ourselves or feel flattered in response to the threats made against us. We have the choice to keep quiet or respond “gleefully.”

But no matter how hard we attempt to ignore it, this type of gendered harassment—and the sheer volume of it—has severe implications for women’s status on the Internet. Threats of rape, death, and stalking can overpower our emotional bandwidth, take up our time, and cost us money through legal fees, online protection services, and missed wages. I’ve spent countless hours over the past four years logging the online activity of one particularly committed cyberstalker, just in case. And as the Internet becomes increasingly central to the human experience, the ability of women to live and work freely online will be shaped, and too often limited, by the technology companies that host these threats, the constellation of local and federal law enforcement officers who investigate them, and the popular commentators who dismiss them—all arenas that remain dominated by men, many of whom have little personal understanding of what women face online every day.

+++

This summer, Caroline Criado-Perez became the English-speaking Internet’s most famous recipient of online threats after she petitioned the British government to put more female faces on its bank notes. (When the Bank of England announced its intentions to replace social reformer Elizabeth Fry with Winston Churchill on the £5 note, Criado-Perez made the modest suggestion that the bank make an effort to feature at least one woman who is not the Queen on any of its currency.) Rape and death threats amassed on her Twitter feed too quickly to count, bearing messages like “I will rape you tomorrow at 9 p.m … Shall we meet near your house?”

Then, something interesting happened. Instead of logging off, Criado-Perez retweeted the threats, blasting them out to her Twitter followers. She called up police and hounded Twitter for a response. Journalists around the world started writing about the threats. As more and more people heard the story, Criado-Perez’s follower count skyrocketed to near 25,000. Her supporters joined in urging British police and Twitter executives to respond.

Under the glare of international criticism, the police and the company spent the next few weeks passing the buck back and forth. Andy Trotter, a communications adviser for the British police, announced that it was Twitter’s responsibility to crack down on the messages. Though Britain criminalizes a broader category of offensive speech than the U.S. does, the sheer volume of threats would be too difficult for “a hard-pressed police service” to investigate, Trotter said. Police “don’t want to be in this arena.” It diverts their attention from “dealing with something else.”

Meanwhile, Twitter issued a blanket statement saying that victims like Criado-Perez could fill out an online form for each abusive tweet; when Criado-Perez supporters hounded Mark Luckie, the company’s manager of journalism and news, for a response, he briefly shielded his account, saying that the attention had become “abusive.” Twitter’s official recommendation to victims of abuse puts the ball squarely in law enforcement’s court: “If an interaction has gone beyond the point of name calling and you feel as though you may be in danger,” it says, “contact your local authorities so they can accurately assess the validity of the threat and help you resolve the issue offline.”

In the weeks after the flare-up, Scotland Yard confirmed the arrest of three men. Twitter—in response to several online petitions calling for action—hastened the rollout of a “report abuse” button that allows users to flag offensive material. And Criado-Perez went on receiving threats. Some real person out there—or rather, hundreds of them—still liked the idea of seeing her raped and killed.

+++

The Internet is a global network, but when you pick up the phone to report an online threat, whether you are in London or Palm Springs, you end up face-to-face with a cop who patrols a comparatively puny jurisdiction. And your cop will probably be a man: According to the U.S. Bureau of Justice Statistics, in 2008, only 6.5 percent of state police officers and 19 percent of FBI agents were women. The numbers get smaller in smaller agencies. And in many locales, police work is still a largely analog affair: 911 calls are immediately routed to the local police force; the closest officer is dispatched to respond; he takes notes with pen and paper.

After Criado-Perez received her hundreds of threats, she says she got conflicting instructions from police on how to report the crimes, and was forced to repeatedly “trawl” through the vile messages to preserve the evidence. “I can just about cope with threats,” she wrote on Twitter. “What I can’t cope with after that is the victim-blaming, the patronising, and the police record-keeping.” Last year, the American atheist blogger Rebecca Watson wrote about her experience calling a series of local and national law enforcement agencies after a man launched a website threatening to kill her. “Because I knew what town [he] lived in, I called his local police department. They told me there was nothing they could do and that I’d have to make a report with my local police department,” Watson wrote later. “[I] finally got through to someone who told me that there was nothing they could do but take a report in case one day [he] followed through on his threats, at which point they’d have a pretty good lead.”

The first time I reported an online rape threat to police, in 2009, the officer dispatched to my home asked, “Why would anyone bother to do something like that?” and declined to file a report. In Palm Springs, the officer who came to my room said, “This guy could be sitting in a basement in Nebraska for all we know.” That my stalker had said that he lived in my state, and had plans to seek me out at home, was dismissed as just another online ruse.

Link: Evgeny Morozov: Texting Toward Utopia

Does the Internet spread democracy?

In 1989 Ronald Reagan proclaimed that “The Goliath of totalitarianism will be brought down by the David of the microchip”; later, Bill Clinton compared Internet censorship to “trying to nail Jell-O to the wall”; and in 1999 George W. Bush (not John Lennon) asked us to “imagine if the Internet took hold in China. Imagine how freedom would spread.”

Such starry-eyed cyber-optimism suggested a new form of technological determinism according to which the Internet would be the hammer to nail all global problems, from economic development in Africa to threats of transnational terrorism in the Middle East. Even so shrewd an operator as Rupert Murdoch yielded to the digital temptation: “Advances in the technology of telecommunications have proved an unambiguous threat to totalitarian regimes everywhere,” he claimed. Soon after, Murdoch bowed down to the Chinese authorities, who threatened his regional satellite TV business in response to this headline-grabbing statement.

Some analysts did not jump on the bandwagon. The restrained tone of one 2003 report stood in marked contrast to the prevailing cyber-optimism. The Carnegie Endowment for International Peace’s “Open Networks, Closed Regimes: The Impact of the Internet on Authoritarian Rule” warned: “Rather than sounding the death knell for authoritarianism, the global diffusion of the Internet presents both opportunity and challenge for authoritarian regimes.” Surveying diverse regimes from Singapore to Cuba, the report concluded that the political impact of the Internet would vary with a country’s social and economic circumstances, its political culture, and the peculiarities of its national Internet infrastructure.

Carnegie’s report appeared in the pre-YouTube, pre-Facebook, pre-MySpace darkness, so it was easy to overlook the rapidly falling costs of self-publishing and coordination and the implications for online interaction and collaboration, from political networking to Wikipedia. It was still harder to predict the potential effect of the Internet and mobile technology on economic development in the world’s poorest regions, where they currently provide much-needed banking infrastructure (for example, by using unspent air credit on mobile phones as currency), create new markets, introduce educational opportunities, and help to spread information about prevention and treatment of diseases. And hopes remain that the fruits of faster economic development, born of new information technologies, might also be good for democracy.

It is thus tempting to embrace the earlier cyber-optimism, trace the success of many political and democratic initiatives around the globe to the coming of Web 2.0, and dismiss the misgivings of the Carnegie report. Could it be that changes in the Web over the past six years—especially the rise of social networking, blogging, and video and photo sharing—represent the flowering of the Internet’s democratizing potential? This thesis seems to explain the dynamics of current Internet censorship: sites that feature user-generated content—Facebook, YouTube, Blogger—are especially unpopular with authoritarian regimes. A number of academic and popular books on the subject point to nothing short of a revolution, both in politics and information (see, for example, Antony Loewenstein’s The Blogging Revolution or Elizabeth Hanson’s The Information Revolution and World Politics, both published last year). Were the cyber-optimists right after all? Does the Internet spread freedom?

The answer to this question substantially depends on how we measure “freedom.” It is safe to say that the Internet has significantly changed the flow of information in and out of authoritarian states. While Internet censorship remains a thorny issue and, unfortunately, more widespread than it was in 2003, it is hard to ignore the wealth of digital content that has suddenly become available to millions of Chinese, Iranians, or Egyptians. If anything, the speed and ease of Internet publishing have made many previous modes of samizdat obsolete; the emerging generation of dissidents may as well choose Facebook and YouTube as their headquarters and iTunes and Wikipedia as their classrooms.

Many such dissenters have, indeed, made great use of the Web. In Ukraine young activists relied on new-media technologies to mobilize supporters during the Orange Revolution. Colombian protesters used Facebook to organize massive rallies against FARC, the leftist guerrillas. The shocking and powerful pictures that surfaced from Burma during the 2007 anti-government protests—many of them shot by local bloggers with cell phones—quickly traveled around the globe. Democratic activists in Robert Mugabe’s Zimbabwe used the Web to track vote rigging in last year’s elections and used mobile phones to take photos of election results that were temporarily displayed outside the voting booths (later, a useful proof of the irregularities). Plenty of other examples—from Iran, Egypt, Russia, Belarus, and, above all, China—attest to the growing importance of technology in facilitating dissent.

But drawing conclusions about the democratizing nature of the Internet may still be premature. The major challenge in understanding the relationship between democracy and the Internet—aside from developing good measures of democratic improvement—has been to distinguish cause and effect. That is always hard, but it is especially difficult in this case because the grandiose promise of technological determinism—the idealistic belief in the Internet’s transformative power—has often blinded even the most sober analysts.

Consider the arguments that ascribe Barack Obama’s electoral success, in part, to his team’s mastery of databases, online fundraising, and social networking. Obama’s use of new media is bound to be the subject of many articles and books. But to claim the primacy of technology over politics would be to disregard Obama’s larger-than-life charisma, the legacy of the stunningly unpopular Bush administration, the ramifications of the global financial crisis, and John McCain’s choice of Sarah Palin as a running mate. Despite the campaign’s considerable Web savvy, one cannot grant much legitimacy to the argument that it earned Obama his victory.

Yet, we are seemingly willing to resort to such technological determinism in the international context. For example, discussions of the Orange Revolution have assigned a particularly important role to text messaging. This is how a 2007 research paper, “The Role of Digital Networked Technologies in the Ukrainian Orange Revolution,” by Harvard’s Berkman Center for Internet and Society described the impact of text messaging, or SMS:

By September 2004, Pora [the opposition’s youth movement] had created a series of stable political networks throughout the country, including 150 mobile groups responsible for spreading information and coordinating election monitoring, with 72 regional centers and over 30,000 registered participants. Mobile phones played an important role for this mobile fleet of activists. Pora’s post-election report states, ‘a system of immediate dissemination of information by SMS was put in place and proved to be important.’

Such mobilization may indeed have been important in the final effort. But it is misleading to imply, as some recent studies by Berkman staff have, that the Orange Revolution was the work of a “smart mob”—a term introduced by the critic Howard Rheingold to describe self-structuring, emergent social organization facilitated by technology. To focus so singularly on the technology is to gloss over the brutal attempts to falsify the results of the presidential elections that triggered the protests, the two weeks that protesters spent standing in the freezing November air, or the millions of dollars pumped into the Ukrainian democratic forces to make those protests happen in the first place. Regime change by text messaging may seem realistic in cyberspace, but no dictators have been toppled via Second Life, and no real elections have been won there either; otherwise, Ron Paul would be president.

To be sure, technology has a role in global causes. In addition to the tools of direct communication and collaboration now available, the proliferation of geospatial data and cheap and accessible satellite imagery, along with the arrival of user-friendly browsers like Google Earth, has fundamentally transformed the work of specialized NGOs; helped to start many new ones; and allowed, for example, real-life tracking of deforestation and illegal logging. Even indigenous populations previously shut off from technological innovations have taken advantage of online tools.

More importantly, the tectonic shifts in the economics of activism have allowed large numbers of unaffiliated individual activists (some of them toiling part-time or even freelancing) to contribute to numerous efforts. As Clay Shirky argues in Here Comes Everybody: The Power of Organizing Without Organizations, the new generation of protests is much more ad hoc, spontaneous, and instantaneous (another allusion to Rheingold’s “smart mobs”). Technology enables groups to capitalize on different levels of engagement among activists. Operating on Wikipedia’s every-comma-counts ethos, it has finally become possible to harvest the energy of both active and passive contributors. Now, even a forwarded email counts. Such “nano-activism” matters in the aggregate.

So the Internet is making group and individual action cheaper, faster, leaner. But logistics are not the only determinant of civic engagement. What is the impact of the Internet on our incentives to act? This question is particularly important in the context of authoritarian states, where elections and opportunities for spontaneous, collective action are rare. The answer depends, to a large extent, on whether the Internet fosters an eagerness to act on newly acquired information. Whether the Internet augments or dampens this eagerness is both critical and undetermined.

Link: Don’t Be a Stranger

"Online venues that encourage strangers to form lasting friendships are dying out."

[…] When someone asks me how I know someone and I say “the Internet,” there is often a subtle pause, as if I had revealed we’d met through a benign but vaguely kinky hobby, like glassblowing class, maybe. The first generation of digital natives are coming of age, but two strangers meeting online is still suspicious (with the exception of dating sites, whose bare utility has blunted most stigma). What’s more, online venues that encourage strangers to form lasting friendships are dying out. Forums and emailing are being replaced by Facebook, which was built on the premise that people would rather carefully populate their online life with just a handful of “real” friends and shut out all the trolls, stalkers, and scammers. Now that distrust of online strangers is embedded in the code of our most popular social network, it is becoming increasingly unlikely for people to interact with anyone online they don’t already know.

Some might be relieved. The online stranger is the great boogeyman of the information age; in the mid-2000s, media reports might have had you believe that MySpace was essentially an easily-searchable catalogue of fresh victims for serial killers, rapists, cyberstalkers, and Tila Tequila. These days, we’re warned of “catfish” con artists who create attractive fake online personae and begin relationships with strangers to satisfy some sociopathic emotional need. The term comes from the documentary Catfish and the new MTV reality show of the same name.

The technopanics over online strangers haunting the early social web were propelled by straight-up fear of unknown technology. Catfish shows that the fear hasn’t vanished with social media’s ubiquity; it’s just become as banal as the technology itself. Each episode follows squirrelly millennial filmmaker Nev Schulman as he introduces someone in real life to a close friend or lover they’ve only known online. Things usually don’t turn out as well as they did for me and Austin, to say the least. In the first episode, peppy Arkansas college student Sunny gushes to Schulman over her longtime Internet boyfriend, a male model and medical student named Jamison. They have never met or even video-chatted, but Sunny knows Jamison is The One.

“The chance of us meeting, and the connection we built is really something—once in a lifetime,” Sunny says. But when Schulman calls Jamison’s phone to get his side of the story, it’s answered by someone who sounds like a middle-schooler pretending to be ten years older to buy beer at a gas station. Each detail of Jamison’s biography is more improbable than the last. The only surprise, when Sunny and Schulman arrive at Jamison’s house in Alabama and learn that the chiseled male model she fell for is actually a sun-deprived young woman named Chelsea, is how completely remorseless Chelsea is about the whole thing.

But Catfish isn’t a cautionary tale about normal people being victimized by weirdos they meet on the Internet. By lowering the stakes from death or financial ruin to heartbreak, Catfish can blame the victim as well as the perpetrator. The hoaxes are so stupidly obvious from the beginning that it’s impossible to feel empathy for targets like Sunny. Who’s really “worse” in this situation: The lonely woman who pretends, poorly, to be a male model on the Internet, or the one who plows time and energy into such an obvious fraud? Catfish indicts the entire practice of online friendship as a depressing massively multiplayer online game in which the deranged entertain the deluded. Catfish is Jerry Springer for the social media age. Like the sad, bickering subjects of Springer’s show, Sunny and Jamison deserve each other.

Catfish has struck such a nerve because it combines old fears of Internet strangers with newer anxieties about the authenticity of online friendship. Recently, an army of op-ed writers and best-selling authors have argued that social media is degrading our real-life relationships. “Friendship is devolving from a relationship to a feeling,” wrote the cultural critic William Deresiewicz in 2009, “from something people share to something each of us hugs privately to ourselves in the loneliness of our electronic caves.” Catfish‘s excruciating climaxes dramatize this argument. We see what happens when people like Sunny treat online friendships as if they’re “real,” and the end result is not pretty, literally.

Today’s skepticism of online relationships would have dismayed the early theorists of the Internet. For them, the ability to communicate with anyone, anywhere, from the privacy of our “electronic caves” was a boon to human interaction. The computer scientist J.C.R. Licklider breathlessly foretold the Internet in a 1968 paper with Robert W. Taylor, “The Computer as a Communication Device”: He imagined that communication in the future would take place over a network of loosely-linked “online interactive communities.” But he also predicted that “life will be happier for the on-line individual, because those with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity.” The ability to associate online with those we find most stimulating would lead to truer bonds than real world relationships determined by arbitrary variables of proximity and social class.

Obviously, we do not today live in a wired utopia where, as Licklider predicted, “unemployment would disappear from the face of the earth forever,” since everyone would have a job maintaining the massive network. But if Licklider was too seduced by the transformative power of the Internet, today’s social media naysayers are as well. To the Death of Friendship crowd, the Internet is a poison goo that corrodes the bonds of true friendship through Facebook’s trivial status updates and boring pictures of pets and kids. While good at selling books and making compelling reality television, this argument misses the huge variety of experience available online. Keener critics understand that our discontent with Facebook can be traced back to the specific values that inform that site. “Everything in it is reduced to the size of its founder,” Zadie Smith writes of Facebook, “Poking, because that’s what shy boys do to girls they’re scared to talk to. Preoccupied with personal trivia, because Mark Zuckerberg thinks the exchange of personal trivia is what ‘friendship’ is.”

Instead of asking, “Is Facebook making us lonely?” and aimlessly pondering Big Issues of narcissism, social disintegration, and happiness metrics, as in a recent Atlantic cover story, we should ask: What exactly is it about Facebook that makes people ask if it’s making us lonely? The answer is in Mark Zuckerberg’s mind: not Mark Zuckerberg the awkward college student, where Zadie Smith finds it, but Mark Zuckerberg the programmer. Everything wrong with Facebook, from its ham-fisted approach to privacy to the underwhelming quality of Facebook friendship, stems from the fact that Facebook models human relations on what Mark Zuckerberg calls “the social graph.”

“The idea,” he’s said, “is that if you mapped out all the connections between people and the things they care about, it would form a graph that connects everyone together.”

Facebook kills Licklider’s dream of fluid “on-line interactive communities” by fixing us on the social graph as surely as our asses rest in our chairs in the real world. The social graph is human relationships modeled according to computer logic. There can be no unknowns on the social graph. In programming, an unknown value is also known as “garbage.” So Facebook requires real names and real identities. “I think anonymity on the Internet has to go away,” explained Randi Zuckerberg, Mark’s sister and Facebook’s former marketing director. No anonymity means no strangers. Catfish wouldn’t happen in Zuckerberg’s ideal Internet, but neither would my serendipitous friendship with Austin. Friendship on Mark Zuckerberg’s Internet is reduced to trading pokes and likes with co-workers or old high school buddies.
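
To see what modeling friendship as a graph actually commits you to, here is a minimal sketch in Python. It is an illustration of the logic only, not Facebook’s actual data model; the class and field names are hypothetical. The point is structural: every node must carry an identity the system recognizes, every tie must be a typed edge between known nodes, and anything the system cannot identify has no place in it.

# A minimal, hypothetical sketch of "the social graph": identified nodes,
# typed edges, and no room for anonymity or unknowns.

class SocialGraph:
    def __init__(self):
        self.people = {}     # real name -> profile details
        self.edges = set()   # (name_a, name_b, relation_type)

    def add_person(self, real_name, **profile):
        if not real_name:
            raise ValueError("no anonymous nodes: a real identity is required")
        self.people[real_name] = profile

    def connect(self, a, b, relation="friend"):
        # Both endpoints must already be known to the system;
        # a stranger simply cannot be represented.
        if a not in self.people or b not in self.people:
            raise KeyError("unknown person: the graph has no place for strangers")
        self.edges.add(tuple(sorted((a, b))) + (relation,))

graph = SocialGraph()
graph.add_person("Alice Example", hometown="Portland")
graph.add_person("Bob Example", hometown="Boston")
graph.connect("Alice Example", "Bob Example", relation="friend")

Everything outside those named nodes and labeled edges is, to the model, noise, which is exactly the reduction Ullman describes below.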

“A computer is not really like us,” wrote Ellen Ullman, a decade before the age of social media. “It is a projection of a very small part of ourselves; that portion devoted to logic, order, rule and clarity.” These are not the values associated with a fulfilling friendship.

But what if a social network operated according to a logic as different from computer logic as an underground punk club is from a computer lab? Once upon a time this social network did exist, and it was called Makeoutclub.com. Nobody much talks about Makeoutclub.com these days, because in technology the only things that remain after the latest revolution changes everything all over again are the heroic myth of the champion’s victory (Facebook) and the loser’s cautionary tale (MySpace). Makeoutclub didn’t win or lose; it barely played the game.

Makeoutclub was founded in 2000, four years before Facebook, and is sometimes referred to as the world’s first social network. It sprang from a different sort of DIY culture than the feel-good Northwest indie vibes of Urban Honking: Makeoutclub was founded by a neck-tattooed entrepreneur named Gibby Miller, out of his bedroom in Boston, and populated by lonely emo and punk kids.

The warnings of social disintegration and virtual imprisonment sounded by today’s social media skeptics would have seemed absurd to the kids of Makeoutclub. They applied for accounts and filled out rudimentary profiles in order to expand their identities beyond lonely real lives in disintegrating suburban sprawl and failing factory towns. Makeoutclub was electrified by the simultaneous realization of thousands of weirdos that they weren’t alone.

With Makeoutclub, journalist Andy Greenwald writes in his book Nothing Feels Good: Punk Rock, Teenagers, and Emo,

Kids in one-parking-lot towns had access not only to style (e.g., black, black glasses), but also what books, ideas, trends, and beliefs were worth buzzing about in the big cities. If, in the past, one wondered how the one-stoplight town in Kansas had somehow birthed a true-blue Smiths fan, now subculture was the same everywhere. Outcasts had a secret hideout. Makeoutclub.com was one-stop shopping for self-makers.

As the name would suggest, Makeoutclub was also an excellent place to hook up. But because it wasn’t explicitly a dating service, courtship on Makeoutclub was free of OKCupid’s mechanical numbness. Sex and love were natural fixations for a community of thousands of horny young people, not a programming challenge to be solved with sophisticated algorithms.

About three years before I met my funny friend Austin on Urban Honking in Portland, Austin met his wife on Makeoutclub.com. Austin told me he joined in 2001 when he was 21 years old, “because it was easy to do and increased my chance of meeting a cute girl I could date.” You could search users by location, which made it easy to find someone in your area. (On Facebook, it’s impossible to search for people without being guided to those you are most likely to already know; results are filtered according to the number of mutual friends you have.) Austin would randomly message interesting-seeming local women whenever he came back home from college and they’d go on dates that almost invariably ended in no making out. In the real world, Austin was awkward.

Makeoutclub brought people together with a Lickliderian common interest, but it didn’t produce a Lickliderian utopia. It was messy; crews with names like “Team Vegan” and “Team Elitist Fucks” battled on the message board, and creeps haunted profiles. But since anyone could try to be an intriguing stranger, the anonymity bred a productive recklessness. One night, around 2004, Austin was browsing Makeoutclub when he found his future wife. By this time, he’d graduated college and moved to Norway on a fellowship, where he fell into a period of intense loneliness. He’d again taken to messaging random women on Makeoutclub, and that night he messaged Dana, a Canadian who had caught his eye because she was wearing an eye patch in her profile picture.

“I had recently made a random decision that if I met a girl with a patch over her eye, I would marry her,” Austin told me. “I don’t know why I made this decision, but at the time I was making lots of strange decisions.” He explained this to Dana in his first message to her. They joked over instant messenger for a few days, but after a while their contact trailed off.

Months later, after Austin had moved from Norway to New York City, he received a surprising instant message from Dana. It turned out that Dana had meant to message another friend with a screenname similar to Austin’s. They got to chatting again, and Dana said she’d soon be taking a trip to New York City to see the alt-cabaret group Rasputina play. Dana and Austin met up the night before she was supposed to return to Canada. They got along. Dana slept over at Austin’s apartment that night and missed her flight. When Dana got back to Canada they kept in touch, and within a few weeks, Austin asked her to marry him. Today, they’ve been married for over eight years.

Dana and Austin’s relationship, and my friendship with Austin, show that the Licklider dream was not as naïve as it now appears. If you look to online communities outside of Facebook, strangers are forging real and complex friendships, despite the complaints of op-ed writers. Even today, I’ve met some of my best friends on Twitter, which is infinitely better at connecting strangers than Facebook. Unlike the almost gothic obsession of Catfish’s online lovers, these friendships aren’t exclusively online—we meet up sometimes to talk about the Internet in real life. They are not carried out in a delusional swoon, or by trivial status updates.

These are not brilliant Wordsworth-and-Coleridge type soul-meldings, but neither are they some shadow of a “real” friendship. Internet friendship yields a connection that is self-consciously pointless and pointed at the same time: Out of all of the millions of bullshitters on the World Wide Web, we somehow found each other, liked each other enough to bullshit together, and built our own Fortress of Bullshit. Most of my interactions with online friends consist of perpetuating some in-joke so arcane that nobody remembers how it started or what it actually means. Perhaps that proves the op-ed writers’ point, but this has been the pattern of my friendships since long before I first logged onto AOL, and I wouldn’t have it any other way.

Makeoutclub isn’t dead either, but it seems mired in nostalgia for its early days. This past December, Gibby Miller posted a picture he’d taken in 2000 to Makeoutclub’s forums — it was the splash image for the site’s first winter. It’s a snowy picture of his Boston neighborhood twelve years ago, unremarkable except for the moment in time it represents.

“This picture more than any other brings me back to those days,” Miller wrote in the forum. “All ages shows were off the hook, ‘IRL’ meetups were considered totally weird and meeting someone online was unheard of, almost everyone had white belts and dyed black Vulcan cuts.”

At least the Vulcan cuts have gone out of style.

Link: Cyberspace and the Lonely Crowd

In this essay I have tried to elucidate a number of crucial theses from Guy Debord’s The Society of the Spectacle by reexamining them in view of conditions within the growing digital economy. I have also considered what the spectacle is not in the hope of avoiding the kind of oversimplification of Debord’s theory which is all too common.

The whole life of those societies in which modern conditions of production prevail presents itself as an immense accumulation of spectacles. All that once was directly lived has become mere representation. (Guy Debord, The Society of the Spectacle, Thesis 1)

Originally published in Paris in 1967 as La Société du spectacle, Debord’s text, a collection of 221 brief theses organized into nine chapters, is a Marxian aphoristic analysis of the conditions of life in the modern, industrialized world. Here “spectacular society” is arraigned in terms that are simultaneously poetic and precise: deceit, false consciousness, separation, unreality. Debord’s influence today is beyond dispute.

Upon revisiting this book I have been impressed by the immediacy of the theory. For Debord seemed to be describing the most intensively promoted phenomenon of this decade, the planet-wide network of existing and promised digital commodities, services and environments: cyberspace.

Cyberspace is supposed to be about interactivity, connectivity and community. Yet if cyberspace exemplifies the spectacle through the relationships which we will investigate here, it is not about connection at all — paradoxically, it is about separation.

The spectacle appears at once as society itself, as a part of society and as a means of unification. As a part of society, it is that sector where all attention, all consciousness, converges.

That this, along with numerous other passages from The Society of the Spectacle, seems to describe the imploding virtual world of digital communications is not a coincidence. But note well: the nature of the “unification” in question here is at the heart of Debord’s theory. He continues:

Being isolated — and precisely for that reason — this sector is the locus of illusion and false consciousness; the unity it imposes is merely the official language of generalized separation. (Thesis 3)

As we will see, within the spectacle, as within the regime of technology, cultural differences are made invisible and qualitative distinctions between data, information, knowledge and experience are lost or blurred beyond recognition. Our minds are separated from our bodies; in turn we are separated from each other, and from the non-technological world.

The Transformation of Knowledge

…the spectacle is by no means the outcome of a technical development perceived as natural; on the contrary, the society of the spectacle is a form that chooses its own technical content. (Thesis 24)

What can it mean to say the spectacle chooses its own content? The words of Jean-François Lyotard offer some explanation. Lyotard is concerned with the transformation of knowledge through the changing operations of language, including the rise of computer languages. In his book The Postmodern Condition, he discusses ways in which the proliferation of information-processing machines will profoundly affect the circulation of learning. He writes:

The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language.

Editing, in any medium, has always been a valorizing process with aesthetic as well as practical costs and benefits. But this is something different. There is an inevitable and incalculable loss of context and connotation involved in getting objects “into the computer,” not to mention the purely technical thresholds of information density (resolution, throughput, bandwidth, etc.).

Contrary to our technocratic wishful thinking, there are much deeper problems here than technical ones. For when all information is to be digitized, that which is not digitized will cease to have value, and that which is “on-line” will acquire a significance out of all proportion to its real meaning.

The spectacle manifests itself as an enormous positivity, out of reach and beyond dispute. All it says is: “Everything that appears is good; whatever is good will appear.” (Thesis 12)

Of course a transformation of this magnitude is not unprecedented, as we know from the examination of typographic and printing technology in Marshall McLuhan’s The Gutenberg Galaxy. Neither is it going unnoticed. Lest we forget that we are in the midst of a “revolution,” we are reminded of it daily by a thousand advertisers. But who can say what kind of distortion is taking place when all qualitative relationships are miraculously transformed into quantitative ones?

The Transformation of Ourselves

Though separated from his product, man is more and more, and ever more powerfully, the producer of every detail of his world. The closer life comes to being his own creation, the more drastically he is cut off from that life. (Thesis 33)

What is our role in this epistemological shift? Why are we allowing it, and how are we changed by it? For Jerry Mander, the extended use of technology involves an inevitable adaptation:

Humans who use cars sit in fixed positions for long hours following a strip of gray pavement, with eyes fixed forward, engaged in the task of driving. As long as they are driving, they are living within what we might call “roadform.” McLuhan told us that cars “extended” the human feet, but he put it the wrong way. Cars replaced human feet.

Following this logic, Allucquere Rosanne Stone has written a number of enthusiastic but cautionary investigations of “prosthetic” communications technology and its positive potential to decouple the gendered subject and the physical body. To find the most incisive answers, however, we must return to McLuhan and the myth of Narcissus. The word Narcissus, McLuhan tells us, comes from the Greek word narcosis, or numbness:

The youth Narcissus mistook his own reflection in the water for another person. This extension of himself by mirror numbed his perceptions until he became a servomechanism of his own extended or repeated image. The nymph Echo tried to win his love, but in vain. He was numb. He had adapted to his extension of himself and had become a closed system.

For the solipsist, there is no problem here: in this view, one cannot know anything other than the contents of one’s own mind or consciousness — the mind is always a closed system.

When we are enthralled in any immersive virtual environment, the body seems to become mere baggage (or “meat”). Any synthetic illusion which is sufficiently well resolved to convince or even confuse the senses can capture our undivided attention. So why should we not try to pack up and move in? If perception is constructed, then there is no reason to privilege the “real” — there is no “real” at all.

Suppose we allow that reality is not “an inherent property of the external world,” but instead is “largely an internally generated construct of the nervous system”. All the more reason, then, to recognize the principal operative condition of every synthetic environment: sensory deprivation. The relative poverty of any artificially generated experience seems quite evident when compared to a day spent in the country, our attention cast toward the infinity of events surrounding us.

It is the desire for immortality and for control, the kind of control and self-empowerment which we are denied in everyday life, which drives us. Virtual reality is not an antidote to the anaesthetizing built environment. It is simply a different formulation of the same drug.

The Promise of Total Connection

The spectacle … makes no secret of what it is, namely, hierarchical power evolving on its own, in its separateness, thanks to an increasing productivity based on an ever more refined division of labor, an ever greater comminution of machine-governed gestures, and an ever widening market. In the course of this development all community and critical awareness have ceased to be…. (Thesis 25)

Cybernetics, the transdisciplinary subject which gives its name to cyberspace, originated in the 1940s as the science of control and communication in the animal and the machine. It thus concerns itself with the flow of messages, and the problem of controlling this flow to ensure the proper functioning of some complex system, be it organic or artificial. So what happens when the system in question is a social system?

"Virtual community" is the latest in a series of oxymoronic expressions used to articulate the indispensibility of computers, which will allegedly unleash the forces to reconstitute mass society as the "public" once again. Of course the promise of a fully wired planet is not new, and we are all familiar with the basic connotations of McLuhan’s "global village." What is new is the feverish pitch of these claims that computers will return us to an ideal form of participatory democracy, a new "Athens without slaves."

Not everyone shares this New Age optimism. There are some dissenting voices even among the digerati (as the digital intelligentsia are known). According to Larry Keeley, at a recent TED conference (Technology, Entertainment and Design), a number of attendees:

… disagreed that the Internet is, or ever could be, a true community. [Author Daniel] Boorstin observed that seeking brings us together and finding separates us. The Internet, which makes finding very easy, substitutes commonality of interests for shared long term goals.

Clearly the race to become wired is fueled by some anxiety. Just how far will it take us?

The Growth of the System

There can be no freedom apart from activity, and within the spectacle all activity is banned — a corollary of the fact that all real activity has been forcibly channeled into the global construction of the spectacle. So what is referred to as “liberation from work,” that is, increased leisure time, is a liberation neither within labor itself nor from the world labor has brought into being. (Thesis 27)

Why is the Internet, currently said to incorporate millions of computers and tens of millions of users, growing at a rate of 20% per month?

Knowledge is power, as the saying goes, and the concept of a 500-channel infobahn has triggered the gold rush of the information age. There is a lot of liberal rhetoric about the need to avert a system of information haves and have-nots. Yet the Western economies are charging ahead surrounded by chronic workaholism and chronic unemployment — two sides of the same postindustrial coin.

The Society of the Spectacle is Not About Images

The spectacle cannot be understood either as a deliberate distortion of the visual world or as a product of the technology of the mass dissemination of images. It is far better viewed as a weltanschauung that has been actualized, translated into the material realm — a world view transformed into an objective force. (Thesis 5)

Until recently the Internet was largely a world of text; one writer called it the place where people “do the low ASCII dance.” (Low ASCII refers to the basic character set on American keyboards: upper and lower case letters, numbers, basic punctuation.) Yet there is no doubt that even now, in ever-increasing proportions, the Internet and virtually all other manifestations of cyberspace are carrying more than raw text. Images, sounds, compressed animations, entire radio shows and video sequences are already available over the net as digitized files. By definition, cyberspace will come to represent data through spatial forms rather than purely alphanumeric ones.

However, this evolution is not pertinent to my argument. Neither am I claiming that the increasing commercialization of the net is the real threat, though this is as inevitable as it is regrettable. I suggest that the central issue is the problem of representation — in particular, computer-mediated communication — not the presence or absence of visual images.

More precisely, it has to do with reification.

… the spectacle’s job is to cause a world that is no longer directly perceptible to be seen via different specialized mediations…. It is the opposite of dialogue. Wherever representation takes on independent existence, the spectacle reestablishes its rule. (Thesis 18)

It’s About Capital, Stupid

The spectacle is not a collection of images; rather it is a social relationship between people that is mediated by images. (Thesis 4)

The Society of the Spectacle is not about images. It’s about the manufacture of lack and the manipulation of desire. It’s about separation and isolation.

The telephone is a piece of technology that almost no one chooses to do without. It facilitates “communication.” Consider the phone sex advertisements in any major metropolitan centre. What are all these buyers and sellers looking for? In whose interest is this circulation of desire, labour and credit being orchestrated?

Isolation underpins technology, and technology isolates in its turn; all goods proposed by the spectacular system, from cars to televisions, also serve as the weapons for that system as it strives to reinforce the isolation of “the lonely crowd.” (Thesis 28)

Link: 'Rascal! Your name!': Schopenhauer vs the Internet trolls

For months – years even – I’ve been arguing that anonymous and pseudonymous comments have no place on the Internet.

I’m in no doubt that if we forced everyone who wanted to respond to a blog post or online article to use their real name, the Internet would be transformed. Overnight it would cease to be a cesspool of trollery and abuse, and would flourish instead as a veritable 18th century coffee house of "scientific education, literary and philosophical speculation, commercial innovation and political fermentation".

But it turns out I’m a couple of hundred years behind the curve. Karen Van Godtsenhoven – a reader of my blog – has just pointed me to a passage she discovered while reading Schopenhauer’s ‘The Art of Literature’. In it, Arthur S. discusses the evils of anonymity.

Fortunately, as Schopenhauer died in 1860 (and the translator, T. Bailey Saunders, followed him in 1928), I’m free to quote liberally from the text here. I think you’ll agree that, flowery language aside, the words could very easily have been written about today’s Internet trolls.

Schopenhauer writes…

"But, above all, anonymity, that shield of all literary rascality, would have to disappear. It was introduced under the pretext of protecting the honest critic, who warned the public, against the resentment of the author and his friends. But where there is one case of this sort, there will be a hundred where it merely serves to take all responsibility from the man who cannot stand by what he has said, or possibly to conceal the shame of one who has been cowardly and base enough to recommend a book to the public for the purpose of putting money into his own pocket. Often enough it is only a cloak for covering the obscurity, incompetence and insignificance of the critic. It is incredible what impudence these fellows will show, and what literary trickery they will venture to commit, as soon as they know they are safe under the shadow of anonymity.

Let me recommend a general Anti-criticism, a universal medicine or panacea, to put a stop to all anonymous reviewing, whether it praises the bad or blames the good: Rascal! Your name! For a man to wrap himself up and draw his hat over his face, and then fall upon people who are walking about without any disguise–this is not the part of a gentleman, it is the part of a scoundrel and a knave.

An anonymous review has no more authority than an anonymous letter; and one should be received with the same mistrust as the other. Or shall we take the name of the man who consents to preside over what is, in the strict sense of the word, une societe anonyme as a guarantee for the veracity of his colleagues?

Even Rousseau, in the preface to the Nouvelle Heloise, declares “tout honnete homme doit avouer les livres qu’il public”; which in plain language means that every honorable man ought to sign his articles, and that no one is honorable who does not do so. How much truer this is of polemical writing, which is the general character of reviews! Riemer was quite right in the opinion he gives in his Reminiscences of Goethe: An overt enemy, he says, “an enemy who meets you face to face, is an honorable man, who will treat you fairly, and with whom you can come to terms and be reconciled: but an enemy who conceals himself  is a base, cowardly scoundrel, who has not courage enough to avow his own judgment; it is not his opinion that he cares about, but only the secret pleasures of wreaking his anger without being found out or punished.” This will also have been Goethe’s opinion, as he was generally the source from which Riemer drew his observations. And, indeed, Rousseau’s maxim applies to every line that is printed. Would a man in a mask ever be allowed to harangue a mob, or speak in any assembly; and that, too, when he was going to attack others and overwhelm them with abuse?

Anonymity is the refuge for all literary and journalistic rascality. It is a practice which must be completely stopped. Every article, even in a newspaper, should be accompanied by the name of its author; and the editor should be made strictly responsible for the accuracy of the signature. The freedom of the press should be thus far restricted; so that when a man publicly proclaims through the far-sounding trumpet of the newspaper, he should be answerable for it, at any rate with his honor, if he has any; and if he has none, let his name neutralize the effect of his words. And since even the most insignificant person is known in his own circle, the result of such a measure would be to put an end to two-thirds of the newspaper lies, and to restrain the audacity of many a poisonous tongue.”

Amen to that.

Link: Evgeny Morozov: The Real Privacy Problem

As Web companies and government agencies analyze ever more information about our lives, it’s tempting to respond by passing new privacy laws or creating mechanisms that pay us for our data. Instead, we need a civic solution, because democracy is at risk.

In 1967, The Public Interest, then a leading venue for highbrow policy debate, published a provocative essay by Paul Baran, one of the fathers of the data transmission method known as packet switching. Titled “The Future Computer Utility,” the essay speculated that someday a few big, centralized computers would provide “information processing … the same way one now buys electricity.”

Our home computer console will be used to send and receive messages—like telegrams. We could check to see whether the local department store has the advertised sports shirt in stock in the desired color and size. We could ask when delivery would be guaranteed, if we ordered. The information would be up-to-the-minute and accurate. We could pay our bills and compute our taxes via the console. We would ask questions and receive answers from “information banks”—automated versions of today’s libraries. We would obtain up-to-the-minute listing of all television and radio programs … The computer could, itself, send a message to remind us of an impending anniversary and save us from the disastrous consequences of forgetfulness.

It took decades for cloud computing to fulfill Baran’s vision. But he was prescient enough to worry that utility computing would need its own regulatory model. Here was an employee of the RAND Corporation—hardly a redoubt of Marxist thought—fretting about the concentration of market power in the hands of large computer utilities and demanding state intervention. Baran also wanted policies that could “offer maximum protection to the preservation of the rights of privacy of information”:

Highly sensitive personal and important business information will be stored in many of the contemplated systems … At present, nothing more than trust—or, at best, a lack of technical sophistication—stands in the way of a would-be eavesdropper … Today we lack the mechanisms to insure adequate safeguards. Because of the difficulty in rebuilding complex systems to incorporate safeguards at a later date, it appears desirable to anticipate these problems.

Sharp, bullshit-free analysis: techno-futurism has been in decline ever since.

All the privacy solutions you hear about are on the wrong track.

To read Baran’s essay (just one of the many on utility computing published at the time) is to realize that our contemporary privacy problem is not contemporary. It’s not just a consequence of Mark Zuckerberg’s selling his soul and our profiles to the NSA. The problem was recognized early on, and little was done about it.

Almost all of Baran’s envisioned uses for “utility computing” are purely commercial. Ordering shirts, paying bills, looking for entertainment, conquering forgetfulness: this is not the Internet of “virtual communities” and “netizens.” Baran simply imagined that networked computing would allow us to do things that we already do without networked computing: shopping, entertainment, research. But also: espionage, surveillance, and voyeurism.

If Baran’s “computer revolution” doesn’t sound very revolutionary, it’s in part because he did not imagine that it would upend the foundations of capitalism and bureaucratic administration that had been in place for centuries. By the 1990s, however, many digital enthusiasts believed otherwise; they were convinced that the spread of digital networks and the rapid decline in communication costs represented a genuinely new stage in human development. For them, the surveillance triggered in the 2000s by 9/11 and the colonization of these pristine digital spaces by Google, Facebook, and big data were aberrations that could be resisted or at least reversed. If only we could now erase the decade we lost and return to the utopia of the 1980s and 1990s by passing stricter laws, giving users more control, and building better encryption tools!

A different reading of recent history would yield a different agenda for the future. The widespread feeling of emancipation through information that many people still attribute to the 1990s was probably just a prolonged hallucination. Both capitalism and bureaucratic administration easily accommodated themselves to the new digital regime; both thrive on information flows, the more automated the better. Laws, markets, or technologies won’t stymie or redirect that demand for data, as all three play a role in sustaining capitalism and bureaucratic administration in the first place. Something else is needed: politics.

Even programs that seem innocuous can undermine democracy.

First, let’s address the symptoms of our current malaise. Yes, the commercial interests of technology companies and the policy interests of government agencies have converged: both are interested in the collection and rapid analysis of user data. Google and Facebook are compelled to collect ever more data to boost the effectiveness of the ads they sell. Government agencies need the same data—they can collect it either on their own or in coöperation with technology companies—to pursue their own programs.

Many of those programs deal with national security. But such data can be used in many other ways that also undermine privacy. The Italian government, for example, is using a tool called the redditometro, or income meter, which analyzes receipts and spending patterns to flag people who spend more than they claim in income as potential tax cheaters. Once mobile payments replace a large percentage of cash transactions—with Google and Facebook as intermediaries—the data collected by these companies will be indispensable to tax collectors. Likewise, legal academics are busy exploring how data mining can be used to craft contracts or wills tailored to the personalities, characteristics, and past behavior of individual citizens, boosting efficiency and reducing malpractice.
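
The redditometro’s underlying logic is simple enough to sketch. The following Python fragment is a deliberately crude approximation, not the Italian tax agency’s actual methodology: the field names, the sample figures, and the 20 percent tolerance are all illustrative assumptions. It flags anyone whose observed spending runs well ahead of declared income.

# A hypothetical, simplified "income meter": flag taxpayers whose recorded
# spending exceeds their declared income by more than a set tolerance.

def flag_potential_evaders(taxpayers, tolerance=0.20):
    flagged = []
    for person in taxpayers:
        spending = sum(person["receipts"])
        declared = person["declared_income"]
        if spending > declared * (1 + tolerance):
            flagged.append(person["name"])
    return flagged

sample = [
    {"name": "A", "declared_income": 30000, "receipts": [12000, 9000, 7000]},
    {"name": "B", "declared_income": 30000, "receipts": [25000, 18000, 10000]},
]
print(flag_potential_evaders(sample))  # ['B']

Once payment intermediaries hold the receipts, a rule this trivial can be run against an entire population.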

On another front, technocrats like Cass Sunstein, the former administrator of the Office of Information and Regulatory Affairs at the White House and a leading proponent of “nanny statecraft” that nudges citizens to do certain things, hope that the collection and instant analysis of data about individuals can help solve problems like obesity, climate change, and drunk driving by steering our behavior. A new book by three British academics—Changing Behaviours: On the Rise of the Psychological State—features a long list of such schemes at work in the U.K., where the government’s nudging unit, inspired by Sunstein, has been so successful that it’s about to become a for-profit operation.

Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy, or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own. Citizens take on the role of information machines that feed the techno-bureaucratic complex with our data. And why wouldn’t we, if we are promised slimmer waistlines, cleaner air, or longer (and safer) lives in return?

This logic of preëmption is not different from that of the NSA in its fight against terror: let’s prevent problems rather than deal with their consequences. Even if we tie the hands of the NSA—by some combination of better oversight, stricter rules on data access, or stronger and friendlier encryption technologies—the data hunger of other state institutions would remain. They will justify it. On issues like obesity or climate change—where the policy makers are quick to add that we are facing a ticking-bomb scenario—they will say a little deficit of democracy can go a long way.

Here’s what that deficit would look like: the new digital infrastructure, thriving as it does on real-time data contributed by citizens, allows the technocrats to take politics, with all its noise, friction, and discontent, out of the political process. It replaces the messy stuff of coalition-building, bargaining, and deliberation with the cleanliness and efficiency of data-powered administration.

This phenomenon has a meme-friendly name: “algorithmic regulation,” as Silicon Valley publisher Tim O’Reilly calls it. In essence, information-rich democracies have reached a point where they want to try to solve public problems without having to explain or justify themselves to citizens. Instead, they can simply appeal to our own self-interest—and they know enough about us to engineer a perfect, highly personalized, irresistible nudge.

Privacy is a means to democracy, not an end in itself.

Another warning from the past. The year was 1985, and Spiros Simitis, Germany’s leading privacy scholar and practitioner—at the time the data protection commissioner of the German state of Hesse—was addressing the University of Pennsylvania Law School. His lecture explored the very same issue that preoccupied Baran: the automation of data processing. But Simitis didn’t lose sight of the history of capitalism and democracy, so he saw technological changes in a far more ambiguous light.

He also recognized that privacy is not an end in itself. It’s a means of achieving a certain ideal of democratic politics, where citizens are trusted to be more than just self-contented suppliers of information to all-seeing and all-optimizing technocrats. “Where privacy is dismantled,” warned Simitis, “both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.”

Three technological trends underpinned Simitis’s analysis. First, he noted, even back then, every sphere of social interaction was mediated by information technology—he warned of “the intensive retrieval of personal data of virtually every employee, taxpayer, patient, bank customer, welfare recipient, or car driver.” As a result, privacy was no longer solely a problem of some unlucky fellow caught off-guard in an awkward situation; it had become everyone’s problem. Second, new technologies like smart cards and videotex not only were making it possible to “record and reconstruct individual activities in minute detail” but also were normalizing surveillance, weaving it into our everyday life. Third, the personal information recorded by these new technologies was allowing social institutions to enforce standards of behavior, triggering “long-term strategies of manipulation intended to mold and adjust individual conduct.”

Modern institutions certainly stood to gain from all this. Insurance companies could tailor cost-saving programs to the needs and demands of patients, hospitals, and the pharmaceutical industry. Police could use newly available databases and various “mobility profiles” to identify potential criminals and locate suspects. Welfare agencies could suddenly unearth fraudulent behavior.

But how would these technologies affect us as citizens—as subjects who participate in understanding and reforming the world around us, not just as consumers or customers who merely benefit from it?

In case after case, Simitis argued, we stood to lose. Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation. As a result, “interactive systems … suggest individual activity where in fact no more than stereotyped reactions occur.”

If you think Simitis was describing a future that never came to pass, consider a recent paper on the transparency of automated prediction systems by Tal Zarsky, one of the world’s leading experts on the politics and ethics of data mining. He notes that “data mining might point to individuals and events, indicating elevated risk, without telling us why they were selected.” As it happens, the degree of interpretability is one of the most consequential policy decisions to be made in designing data-mining systems. Zarsky sees vast implications for democracy here:

A non-interpretable process might follow from a data-mining analysis which is not explainable in human language. Here, the software makes its selection decisions based upon multiple variables (even thousands) … It would be difficult for the government to provide a detailed response when asked why an individual was singled out to receive differentiated treatment by an automated recommendation system. The most the government could say is that this is what the algorithm found based on previous cases.

This is the future we are sleepwalking into. Everything seems to work, and things might even be getting better—it’s just that we don’t know exactly why or how.
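
Zarsky’s point about non-interpretable selection is easy to reproduce in miniature. The sketch below is illustrative only, with random stand-in weights rather than any real agency’s risk model: a score summed over thousands of variables can single someone out while offering nothing a human could recognize as a reason.

# An illustrative black-box scorer: thousands of opaque weighted variables,
# one flag, no human-readable explanation. Weights here are random stand-ins.

import random

random.seed(0)
N_FEATURES = 5000
weights = [random.uniform(-1, 1) for _ in range(N_FEATURES)]

def risk_score(features):
    # The only "explanation" available is this sum itself.
    return sum(w * x for w, x in zip(weights, features))

person = [random.random() for _ in range(N_FEATURES)]
print("flagged:", risk_score(person) > 0)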

Too little privacy can endanger democracy. But so can too much privacy.

Simitis got the trends right. Free from dubious assumptions about “the Internet age,” he arrived at an original but cautious defense of privacy as a vital feature of a self-critical democracy—not the democracy of some abstract political theory but the messy, noisy democracy we inhabit, with its never-ending contradictions. In particular, Simitis’s most crucial insight is that privacy can both support and undermine democracy.

Traditionally, our response to changes in automated information processing has been to view them as a personal problem for the affected individuals. A case in point is the seminal article “The Right to Privacy,” by Louis Brandeis and Samuel Warren. Writing in 1890, they sought a “right to be let alone”—to live an undisturbed life, away from intruders. According to Simitis, they expressed a desire, common to many self-made individuals at the time, “to enjoy, strictly for themselves and under conditions they determined, the fruits of their economic and social activity.”

A laudable goal: without extending such legal cover to entrepreneurs, modern American capitalism might have never become so robust. But this right, disconnected from any matching responsibilities, could also sanction an excessive level of withdrawal that shields us from the outside world and undermines the foundations of the very democratic regime that made the right possible. If all citizens were to fully exercise their right to privacy, society would be deprived of the transparent and readily available data that’s needed not only for the technocrats’ sake but—even more—so that citizens can evaluate issues, form opinions, and debate (and, occasionally, fire the technocrats).

This is not a problem specific to the right to privacy. For some contemporary thinkers, such as the French historian and philosopher Marcel Gauchet, democracies risk falling victim to their own success: having instituted a legal regime of rights that allow citizens to pursue their own private interests without any reference to what’s good for the public, they stand to exhaust the very resources that have allowed them to flourish.

When all citizens demand their rights but are unaware of their responsibilities, the political questions that have defined democratic life over centuries—How should we live together? What is in the public interest, and how do I balance my own interest with it?—are subsumed into legal, economic, or administrative domains. “The political” and “the public” no longer register as domains at all; laws, markets, and technologies displace debate and contestation as preferred, less messy solutions.

But a democracy without engaged citizens doesn’t sound much like a democracy—and might not survive as one. This was obvious to Thomas Jefferson, who, while wanting every citizen to be “a participator in the government of affairs,” also believed that civic participation involves a constant tension between public and private life. A society that believes, as Simitis put it, that the citizen’s access to information “ends where the bourgeois’ claim for privacy begins” won’t last as a well-functioning democracy.

Thus the balance between privacy and transparency is especially in need of adjustment in times of rapid technological change. That balance itself is a political issue par excellence, to be settled through public debate and always left open for negotiation. It can’t be settled once and for all by some combination of theories, markets, and technologies. As Simitis said: “Far from being considered a constitutive element of a democratic society, privacy appears as a tolerated contradiction, the implications of which must be continuously reconsidered.”

Laws and market mechanisms are insufficient solutions.

In the last few decades, as we began to generate more data, our institutions became addicted. If you withheld the data and severed the feedback loops, it’s not clear whether they could continue at all. We, as citizens, are caught in an odd position: our reason for disclosing the data is not that we feel deep concern for the public good. No, we release data out of self-interest, on Google or via self-tracking apps. We are too cheap not to use free services subsidized by advertising. Or we want to track our fitness and diet, and then we sell the data.

Simitis knew even in 1985 that this would inevitably lead to the “algorithmic regulation” taking shape today, as politics becomes “public administration” that runs on autopilot so that citizens can relax and enjoy themselves, only to be nudged, occasionally, whenever they are about to forget to buy broccoli.

Habits, activities, and preferences are compiled, registered, and retrieved to facilitate better adjustment, not to improve the individual’s capacity to act and to decide. Whatever the original incentive for computerization may have been, processing increasingly appears as the ideal means to adapt an individual to a predetermined, standardized behavior that aims at the highest possible degree of compliance with the model patient, consumer, taxpayer, employee, or citizen.

What Simitis is describing here is the construction of what I call “invisible barbed wire” around our intellectual and social lives. Big data, with its many interconnected databases that feed on information and algorithms of dubious provenance, imposes severe constraints on how we mature politically and socially. The German philosopher Jürgen Habermas was right to warn—in 1963—that “an exclusively technical civilization … is threatened … by the splitting of human beings into two classes—the social engineers and the inmates of closed social institutions.”

The invisible barbed wire of big data limits our lives to a space that might look quiet and enticing enough but is not of our own choosing and that we cannot rebuild or expand. The worst part is that we do not see it as such. Because we believe that we are free to go anywhere, the barbed wire remains invisible. Worse, there’s no one to blame: certainly not Google, Dick Cheney, or the NSA. It’s the result of many different logics and systems—of modern capitalism, of bureaucratic governance, of risk management—that get supercharged by the automation of information processing and by the depoliticization of politics.

The more information we reveal about ourselves, the denser but more invisible this barbed wire becomes. We gradually lose our capacity to reason and debate; we no longer understand why things happen to us.

But all is not lost. We could learn to perceive ourselves as trapped within this barbed wire and even cut through it. Privacy is the resource that allows us to do that and, should we be so lucky, even to plan our escape route.

This is where Simitis expressed a truly revolutionary insight that is lost in contemporary privacy debates: no progress can be achieved, he said, as long as privacy protection is “more or less equated with an individual’s right to decide when and which data are to be accessible.” The trap that many well-meaning privacy advocates fall into is thinking that if only they could provide the individual with more control over his or her data—through stronger laws or a robust property regime—then the invisible barbed wire would become visible and fray. It won’t—not if that data is eventually returned to the very institutions that are erecting the wire around us.

Think of privacy in ethical terms.

If we accept privacy as a problem of and for democracy, then popular fixes are inadequate. For example, in his book Who Owns the Future?, Jaron Lanier proposes that we disregard one pole of privacy—the legal one—and focus on the economic one instead. “Commercial rights are better suited for the multitude of quirky little situations that will come up in real life than new kinds of civil rights along the lines of digital privacy,” he writes. On this logic, by turning our data into an asset that we might sell, we accomplish two things. First, we can control who has access to it, and second, we can make up for some of the economic losses caused by the disruption of everything analog.

Lanier’s proposal is not original. In Code and Other Laws of Cyberspace (first published in 1999), Lawrence Lessig enthused about building a property regime around private data. Lessig wanted an “electronic butler” that could negotiate with websites: “The user sets her preferences once—specifies how she would negotiate privacy and what she is willing to give up—and from that moment on, when she enters a site, the site and her machine negotiate. Only if the machines can agree will the site be able to obtain her personal data.”

It’s easy to see where such reasoning could take us. We’d all have customized smartphone apps that would continually incorporate the latest information about the people we meet, the places we visit, and the information we possess in order to update the price of our personal data portfolio. It would be extremely dynamic: if you are walking by a fancy store selling jewelry, the store might be willing to pay more to know your spouse’s birthday than it is when you are sitting at home watching TV.
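
What Lessig’s “electronic butler” would have to do is not mysterious; the sketch below shows one minimal reading of it, with hypothetical preference fields and a made-up pricing rule rather than anything Lessig specified. The user states once what she will share and at what price, and the butler accepts or refuses each site’s bid on her behalf.

# A toy "electronic butler": user preferences are set once, then each site's
# request is negotiated automatically. Field names and prices are hypothetical.

USER_PREFERENCES = {
    "email":           {"share": True,  "min_price": 0.05},
    "location":        {"share": True,  "min_price": 2.00},
    "spouse_birthday": {"share": False, "min_price": None},
}

def negotiate(site_request):
    """site_request maps a data field to the price the site offers for it."""
    released = {}
    for field, offer in site_request.items():
        pref = USER_PREFERENCES.get(field)
        if pref and pref["share"] and offer >= pref["min_price"]:
            released[field] = offer
    return released

# The jewelry store near the user bids high for two fields; only one is sold.
print(negotiate({"location": 3.50, "spouse_birthday": 10.00}))  # {'location': 3.5}

The butler settles the transaction flawlessly; what it cannot do is register the moral considerations raised below.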

The property regime can, indeed, strengthen privacy: if consumers want a good return on their data portfolio, they need to ensure that their data is not already available elsewhere. Thus they either “rent” it the way Netflix rents movies or sell it on the condition that it can be used or resold only under tightly controlled conditions. Some companies already offer “data lockers” to facilitate such secure exchanges.

So if you want to defend the “right to privacy” for its own sake, turning data into a tradable asset could resolve your misgivings. The NSA would still get what it wanted; but if you’re worried that our private information has become too liquid and that we’ve lost control over its movements, a smart business model, coupled with a strong digital-rights-management regime, could fix that.

Meanwhile, government agencies committed to “nanny statecraft” would want this data as well. Perhaps they might pay a small fee or promise a tax credit for the privilege of nudging you later on—with the help of the data from your smartphone. Consumers win, entrepreneurs win, technocrats win. Privacy, in one way or another, is preserved also. So who, exactly, loses here? If you’ve read your Simitis, you know the answer: democracy does.

It’s not just because the invisible barbed wire would remain. We also should worry about the implications for justice and equality. For example, my decision to disclose personal information, even if I disclose it only to my insurance company, will inevitably have implications for other people, many of them less well off. People who say that tracking their fitness or location is merely an affirmative choice from which they can opt out have little knowledge of how institutions think. Once there are enough early adopters who self-track—and most of them are likely to gain something from it—those who refuse will no longer be seen as just quirky individuals exercising their autonomy. No, they will be considered deviants with something to hide. Their insurance will be more expensive. If we never lose sight of this fact, our decision to self-track won’t be as easy to reduce to pure economic self-interest; at some point, moral considerations might kick in. Do I really want to share my data and get a coupon I do not need if it means that someone else who is already working three jobs may ultimately have to pay more? Such moral concerns are rendered moot if we delegate decision-making to “electronic butlers.”

Few of us have had moral pangs about data-sharing schemes, but that could change. Before the environment became a global concern, few of us thought twice about taking public transport if we could drive. Before ethical consumption became a global concern, no one would have paid more for coffee that tasted the same but promised “fair trade.” Consider a cheap T-shirt you see in a store. It might be perfectly legal to buy it, but after decades of hard work by activist groups, a “Made in Bangladesh” label makes us think twice about doing so. Perhaps we fear that it was made by children or exploited adults. Or, having thought about it, maybe we actually do want to buy the T-shirt because we hope it might support the work of a child who would otherwise be forced into prostitution. What is the right thing to do here? We don’t know—so we do some research. Such scrutiny can’t apply to everything we buy, or we’d never leave the store. But exchanges of information—the oxygen of democratic life—should fall into the category of “Apply more thought, not less.” It’s not something to be delegated to an “electronic butler”—not if we don’t want to cleanse our life of its political dimension.

Link: An interview with Christian Fuchs

Dr. Christian Fuchs is Professor of Social Media at the Communication and Media Research Institute and the Centre for Social Media Research, University of Westminster, London, UK. He is the author of “Internet and society: Social theory in the information age” (Routledge 2008), “Foundations of critical media and information studies” (Routledge 2011) and the forthcoming monographs “Digital labor and Karl Marx” (Routledge 2014), “Social media: A critical introduction” (Sage 2014) and “OccupyMedia! The Occupy movement and social media in crisis capitalism” (Zero Books 2014). He has co-edited the collected volume “Internet and surveillance: The challenges of web 2.0 and social media” (Routledge 2012) and the forthcoming volumes “Critique, social media and the information society” (Routledge 2014) and “Social media, politics and the state. Protests, revolutions, riots, crime and policing in the age of Facebook, Twitter and YouTube” (Routledge 2014). He is editor of “tripleC: Communication, Capitalism & Critique”, Chair of the European Sociological Association’s Research Network 18 – Sociology of Communications and Media Research, co-founder of the ICTs and Society network and Vice-Chair of the European Union COST Action “Dynamics of Virtual Work”. We met up with Christian in October in Athens, Greece during the COST action meeting and conference.

In your work you rely heavily on the writings of Karl Marx. Where do you see the relevance of this 19th century theorist in the 21st century?

I do not terribly like the way you phrased this question because somehow it gives the perception of Marx as being outdated, old, that society is new and has completely changed through neoliberalism and so on. This was the point made by Baudrillard who said that we cannot explain postmodern society through Marx because Marx is a 19th century theorist and he did not talk about the media and so on. I would however have suggested to Baudrillard that he should have read Marx more carefully because there is a lot in Marx that helps us understand the media within the context of society. Quite obviously there is a huge crisis of capitalism, of the state, imperialism and ideology. It is not only a financial crisis because it goes beyond the financial sector. In volume three of Capital Marx very thoroughly discussed the mechanisms of financialization. He also very closely analysed class and class relations and inequalities. Nobody can claim today that we are not living in a class society. The ruling class enforces austerity measures and we have deepening inequalities. So these are all social issues. If we look at the media side and the ICTs in this context the question is can Marx somehow help us? I think that Baudrillard and similarly minded people were and are very superficial readers of Marx because Marx even anticipated the information society in his claims about the development of technological productive forces, and that knowledge in production would become increasingly important. Some also say that Marx did not understand the networked media, but then again Marx for example analyses the telegraph and its importance for society and how technology impacts society in the context of the globalization of the economy and communication. I even claim in my forthcoming book “Social Media: A Critical Introduction” that Marx invented the Internet in a striking passage of the Grundrisse. He described in a very anticipatory manner that in the global information system people inform themselves about others and are creating connections to each other. So the idea of social networking is there and the idea of networked information and a hypertext of global information are already there. So actually the World Wide Web was not invented by Tim Berners-Lee but by Karl Marx in 1857. Of course the technological foundations did not exist and also the computer did not exist as technology. But I think that, conceptually, Marx did invent the internet.

Karl Marx was largely focused on labour as a basic human activity. How does his labour theory relate to contemporary media and communication processes? Where do you see the border between labour and play in contemporary social media environments?

There is an anthropological element that Marx stresses. How humans have differentiated themselves from animals, and how society has become differentiated, has to do with purposeful human activity and self-conscious thinking. What distinguishes a bee from an architect is that the architect always imagines the result of what he produces before he produces it. This anticipatory thinking is at the heart of all human work processes. Work takes on its organisational forms through social relations within specific societal formations – for example, in the capitalist mode of production and the capitalist mode of organising society.

Then the labour theory of value comes in. Some say it is vital for understanding social media; others say we do not need the theory because it is completely outdated. There is a lot of misunderstanding about the labour theory of value. When I read articles about this topic, I always look at which basic concepts are used besides value and labour. A lot of people use the terms money and profit, not understanding that the labour theory of value is a theory of time in society and in the capitalist economy. The crucial thing about how Marx conceptualizes value is that there is a substance of value and a measure of value. Human labour power is the substance of value, whereas labour time in specific spaces is the measure of value. The value of a commodity is the average labour time it takes to produce it.

How does this relate to what is called social media? The claim that the labour theory of value is no longer valid implies that time plays no role in the contemporary capitalist economy. But attention and reputation can be accumulated, and getting attention on social media does not happen simply by putting information there – it requires the work of creating that attention. The groups on Facebook and Twitter with the largest numbers of followers and likes are those of entertainers and companies who employ people such as social media strategists to take care of their social media presence. So we need to conceptualize value with a theory of time. That is why I am interested in establishing theories of time in society, time in the economy and time in media theory.

In his recent work Manuel Castells has stated that the most fundamental form of power lies in the ability to shape the human mind. This may be easier to comprehend in the mass media environment, where content is shaped with the specific purpose of controlling and directing human behaviour, for example through advertising or political campaigns. With social media, however, users produce the content themselves. Where do you see this type of power exercised in the social media environment, and how does it differ from the mass media environment?

I will try to answer this question in the context of two dominant theories of how social media are conceptualized: Castells’ theory of media and the network society, and Henry Jenkins’ theory of participatory culture. I think both of these approaches are terribly flawed. Jenkins celebrates corporate capitalist culture and how it is monetized. Castells’ concept of power is based on Weber’s definition of power as a coercive force that exists everywhere. However, there is also altruistic behaviour in our lives at home, with friends and elsewhere. There is life beyond domination. Of course we live in dominative societies, but I believe in a sort of Enlightenment ideal of the emancipation of society and of people being able to rule themselves. For me, power means the ability of people to shape and control the structures of society, so power can be distributed in different forms. There are also different forms of power: economic power, decision-making power in politics, cultural power. The problem is that these forms of power are unequally distributed.

Now here comes Jenkins, who claims that culture has become participatory and that today we all create culture in a democratic process. Of course there are changes you cannot deny, since it is easy to shoot a video on your mobile phone and put it on the internet. But does this mean that society immediately becomes democratized? I doubt it. Both Jenkins and Castells are technological determinists. Jenkins does not even realize where the concept of participation comes from in a theoretical sense, and does not mention earlier attempts at creating more participation, such as the student movement’s vision of participatory democracy in the 1960s. Structures of control in the economy and in the political system today are based on power asymmetries. Although we produce information ourselves, this does not mean that all people benefit from it to the same extent.

The recent surveillance scandals exposed by Edward Snowden have shown that companies are not the only ones taking advantage of citizens’ digital footprints online. Do you see any alternatives to this state of affairs? How can we achieve a truly open and participatory internet, taking all these risks into account?

The PRISM scandal has shown that states have access to a lot of social media data. However, we have to put this phenomenon in a broader context. What has emerged is a sort of surveillance-industrial complex, in which spy agencies conduct massive surveillance in collaboration with private companies. Facebook was involved, as were Skype, Apple and others. Snowden himself was working for a private security company, Booz Allen, to which the state had outsourced surveillance, along with other firms. Security is a very profitable sector of the economy. We must also see the ideological context of these events, which goes back to the post-9/11 situation. A spiral of war and violence developed after those events, and it was claimed that there is a technological fix to terrorism and organized crime, and that terrorists and criminals are everywhere around us. The suggested solution, a highly ideologically motivated one, was to introduce more surveillance technologies to prevent organized crime and terrorism. This was very one-dimensional and short-sighted. What has developed in the online sphere is corporate and state control. From a liberal perspective this threatens the basic liberties we have, or think we have, in modern society.

The question is how we get out of this situation and what changes to the Internet and society we need. We do have things like the Pirate Party struggling for freedom of information, people concerned about privacy, critical journalists concerned about press freedom, the Occupy movement and so on. They all seem terribly unconnected, however; in a time of crisis of the whole capitalist society, their reactions, if combined in a network, would be a force for defending society and making it more democratic. A united political movement that runs for governments and parliaments could try to make reforms in society. We also need to reinvent and redesign the basic structures of the internet. However, we should not do away with social media, because they do enable people to maintain their networks; what people do not like are the aspects of control embedded in them. We need an internet controlled by civil society. If we think about how the media can be organized, there are not just capitalist media but also public service media controlled by the state and alternative media controlled by civil society. The idea of an alternative internet purely controlled by the state might be dangerous, but we need state power to make progressive changes. I would like to see a combination of state and civil society power in reforming the Internet and the media, because there are interesting civil society projects that face the problem of a lack of resources. For example, the Occupy movement created an alternative social medium of its own, which was used by a certain minority within the movement. My study “OccupyMedia! The Occupy Movement and Social Media in Crisis Capitalism” shows that corporate platforms were also popular among activists, but that activists were at the same time afraid of being monitored by the state and worried that, as digital workers, they were being exploited by Internet companies. We can only introduce changes by using already existing structures, but the history of alternative media is unfortunately a history of voluntary, self-exploited and precarious work because of the lack of sources of income. So a media reform movement should also channel resources towards alternative projects.
We need to tax media corporations more, we need to tax advertising, and we need to tax corporations in general. Through participatory budgeting, this money could be channelled towards non-profit alternative media projects, creating a form of cooperation between the state and civil society that advances media reform. Voluntary donations, such as those that sustain Wikipedia, are also a solution, but they depend on an unstable stream of resources.