Sunshine Recorder

Link: The Conservatism of Emoji

Emoji offer you new possibilities for digital expression, but only if you’re speaking their language.

If you smile through your fear and sorrow
Smile and maybe tomorrow
You’ll see the sun come shining through for you
—Nat King Cole, “Smile”

The world will soon have its first emoji-only social network: Emoj.li. This news, announced in late June, was met with a combination of scorn and amusement from the tech press. It was seen as another entry in the gimmick-social-network category, to be filed alongside Yo. Yet emoji have a rich and complex history behind the campy shtick: From the rise of the smiley in the second half of the 20th century, emoji emerged out of corporate strategies, copyright claims, and standards disputes to become a ubiquitous digital shorthand. And in their own, highly compressed lexicon, emoji are trying to tell us something about the nature of feelings, of labor, and the new horizons of capitalism. They are the signs of our times.

Innocuous and omnipresent, emoji are the social lubricant smoothing the rough edges of our digital lives: They underscore tone, introduce humor, and give us a quick way to bring personality into otherwise monochrome spaces. All this computerized work is, according to Michael Hardt, one face of what he terms immaterial labor, or “labor that produces an immaterial good, such as a service, knowledge, or communication.” “We increasingly think like computers,” he writes, but “the other face of immaterial labor is the affective labor of human conduct and interaction” — all those fast-food greetings, the casual banter with the Uber driver, the flight attendant’s smile, the nurse patting your arm as the needle goes in. Affective labor is another term for what sociologist Arlie Russell Hochschild calls “emotional labor,” the commercialization of feelings that smooth our social interactions on a daily basis. What if we could integrate our understanding of these two faces of immaterial labor through the image of yet another face?

Emoji as Historical Artifacts

The smiley face is now so endemic to American culture that it’s easy to forget it is an invented artifact. The 1963 merger of the State Mutual Life Assurance Company of Worcester, Mass., and Ohio’s Guarantee Mutual Company would be unremembered were it not for one thing: :), or something very much like it. An advertising man named Harvey Ball doodled a smiling yellow face at the behest of State Mutual’s management, who were in need of an internal PR campaign to improve morale after the turmoil and job losses prompted by the merger. The higher-ups loved it. “The power of a smile is unlimited,” proclaimed The Mutualite, the company’s internal magazine, “a smile is contagious…vital to business associations and to society.” Employees were encouraged to smile while talking to clients on the phone and filling out insurance forms. Ball was paid $240 for the campaign, including $45 for the rights to his smiley-face image.

Gradually, the smiley became a pop-culture icon, distributed on buttons and T-shirts, beloved of acid-house record producers. Its first recognized digital instantiation came via Carnegie Mellon’s Scott E. Fahlman, who typed :-) on a university bulletin board in 1982 in the midst of talking about something else entirely.

Nabokov, Fahlman remembered, had called for such a symbol in an interview with the New York Times back in 1969:

Q: How do you rank yourself among writers (living) and of the immediate past?

Nabokov: I often think there should exist a special typographical sign for a smile — some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question.

But it took 15 years after Fahlman’s innovation for emoji to appear — and they went big in Japan. Shigetaka Kurita, a designer for Japanese telecom carrier NTT Docomo, was instructed to create contextual icons for the company as a way to define its brand and secure customer loyalty. He devised a character set intended to bring new emotional clarity to text messages. Without emoji, Kurita observed, “you don’t know what’s in the writer’s head.” When Apple introduced the iPhone to Japan in 2008, users demanded a way to use emoji on the new platform. So emoji were incorporated into Unicode, the computer industry’s standard for characters administered by the Unicode Consortium. At that moment, emoji became interoperable on devices around the world, and Ball’s smiley face had been reified at the level of code.
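What the Consortium standardizes, it is worth noting, is the codepoint rather than the picture: every platform agrees on a number, then draws its own face on top of it. A minimal sketch of that shared layer, in Python 3:

```python
import unicodedata

# One smiley as it travels between systems: the codepoint is universal,
# the artwork that renders it is whatever font the platform ships.
face = "\U0001F600"  # U+1F600

print(face)                    # 😀 (or your platform's version of it)
print(hex(ord(face)))          # 0x1f600 -- the number all devices agree on
print(face.encode("utf-8"))    # b'\xf0\x9f\x98\x80' -- the bytes on the wire
print(unicodedata.name(face))  # GRINNING FACE -- the standard's official name
```

The same four bytes arrive on an iPhone, an Android handset, or a desktop browser; only the drawing differs.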

Emoji as Technics

By some accounts, there are now more than 880 extant emoji that have been accepted by the Consortium and consolidated in Unicode. Control over emoji has become highly centralized, yet they make up a language with considerable creative potential.

With only 144 pixels each, emoji must compress a face or object into the most schematic configuration possible. Emoji, like other skeuomorphs — linoleum that looks like wood grain, the trash bin on your desktop, the shutter click sound on a digital camera — are what anthropologist Nicholas Gessler calls “material metaphors” that “help us map the new onto an existing cognitive structure.” That skeuomorphism allows for particular types of inventiveness and irony. So the emoji 💩 might act as a pictogram (“I stepped in a pile of 💩”), an ideogram (“that movie was 💩”), an emoticon (“I feel 💩”), or a phatic expression (“I’m tired.” “💩”). That’s some powerful contextual 💩.

Yet this flexibility has a broader business purpose, one that goes hand-in-hand with the symbols’ commercial roots: companies have made emoji proprietary whenever they could. NTT Docomo was unable to secure copyright on its original character set, and competitors J-Phone and DDI Cellular Group soon produced rival emoji character sets, which were made available exclusively on their competing software platforms. Emoji were a practical and experiential icon of brand difference; their daily use drove the uptake of a particular platform, and by extension helped establish particular technical standards across the industry. But the popularity of emoji meant they were hard to contain: user complaints about the illegibility of a competitor’s emoji on their phones meant the telcos had to give up on making money off emoji directly. It was the necessity born of linguistic practice over time that prompted these grudging steps towards a technical and business consensus.

Hardt argues that affect is perennially more powerful than the forces attempting to harness it, and it would be tempting to think of emoji in this context. But emoji remain a restricted, top-down language, controlled by the Unicode Consortium and the technical platforms that display them. Media theorist Laura Marks uses the term lame infinity to describe the phenomenon where digital technology seems infinite but is used to produce a dispiriting kind of sameness. Emoji, as “a perfectly normcore system of emotion: a taxonomy of feeling in a grid menu of ideograms” fit that description. While emoji offer creative expression within their own terms, they also may confine us to a type of communicative monoculture. What’s more, emoji also hold out the promise of emotional standardization in the service of data analysis: If a feeling can be summed up in a symbol, then theoretically that feeling can be more easily tracked, categorized, and counted.

Emoji as Data Culture

We love emoji, and emoji depict our love, while also transforming our states of feeling into new forms of big data. Many platforms and media companies are extracting and analyzing emoji as a new source of insight into their customers’ emotions and desires. In the spring of 2013, Facebook introduced the ability to choose from a variety of emoji-like moods as part of a status update. Users can register that they feel happy, sad, frustrated, or a variety of other emotions. And with the recent uproar over the Facebook emotional-contagion study, it’s increasingly clear that quantifying, tracking and manipulating emotion is an important part of the company’s business model. “By selecting your current activity instead of merely writing it out, you structure data for Facebook,” TechCrunch observed when the feature was rolled out. And sentiment-analysis firms like Lexalytics are working to incorporate emoji into their business models.

In many ways, emoji offer us a deeply restricted world. This character set is valorized for its creative uses — such as Emoji Dick, Fred Benenson’s crowdsourced, book-length rewriting of Melville’s Moby Dick as emoji, which was accepted into the Library of Congress. But it is also constrained at the level of social and political possibility. Emoji are terrible at depicting diversity: on Apple’s iOS platform, for example, there are many white faces, but only two seem Asian and none are black. Responding to public outcry, Apple now says it is “working closely with the Unicode Consortium in an effort to update the standard.”

Emoji raise the question: What habits of daily life do they promote, from the painted nails to the martini glasses? What behavior do they normalize? By giving us a visual vocabulary of the digital everyday, emoji offer an example of what Foucault termed “anatomo-politics”: the process by which “the production of collective subjectivities, sociality, and society itself” is worked through at the level of individual practices and habits. And in a broad sense, what emoji are trying to sell us, if not happiness, is a kind of quiescence. In Katy Perry’s “Roar” video from 2013, for example, we see emoji transliterations of the song’s lyrics. But it is also an eerily stark commentary on the basic anatomo-political maintenance of daily life – sleeping, eating, bathing, grooming, charging our devices. The habitual maintenance depicted in the video goes hand in hand with the “basic” emoji character set.

In a similar vein, the unofficial music video for Beyoncé’s “Drunk in Love” has brilliant, quick-fire emoji translation using characters from Apple’s proprietary font in front of a plain white background. The genius of the emoji “Drunk in Love” lies in how it perfectly conjures Beyoncé’s celebrity persona, and the song’s sexualized glamour, out of the emoji character set. Emoji can represent cocktails, paparazzo attacks, and other trappings of Western consumer and celebrity culture with ease. More complicated matters? There’s no emoji for that.

Emoji as Soft Control

“This face is a symbol of capitalism,” declared Murray Spain to the BBC. Spain was one of the entrepreneurs who, in the early 1970s, placed a copyright on the smiley face with the phrase “Have a nice day.” “Our intent was a capitalistic intent…our only desire was to make a buck.” The historical line connecting the smiley face to emoji is crooked but revealing, featuring as it does this same sentiment repeated again and again: the road to the bottom line runs through the instrumentalization and commodification of emotion.

Now with many Silicon Valley technology corporations adding Chief Happiness Officers, the impulse to obey the smiley has become supercharged. Emoji, like the original smiley, can be a form of “cruel optimism,” which affect theorist Lauren Berlant defines as “when the object/scene that ignites a sense of possibility actually makes it impossible to attain.” Emoji help us cope emotionally with the technological platforms and economic systems operating far outside of our control, but their creative potential is ultimately closed off. They are controlled from the top down, from the standards bodies to the hard-coded limits on what your phone will read.

Emoji offer us a means of communicating that we didn’t have before: they humanize the platforms we inhabit. As such, they are a rear-guard action to enable sociality in digital networks, yet are also agents in turning emotions into economic value. As a blip in the continuing evolution of platform languages, emoji may be remembered as ultimately conservative: digital companions whose bright colors and white faces had nothing much to say about our political impasses.

Link: Out of Sight

The Internet delivered on its promise of community for blind people, but accessibility is easy to overlook.

I have been blind since birth. I’m old enough to have completed my early schooling at a time when going to a special school for blind kids was the norm. In New Zealand, where I live, there is only one school for the blind. It was common for children to leave their families when they were five, to spend the majority of the year far from home in a school hostel. Many family relationships were strained as a result. Being exposed to older kids and adults with the same disability as you, however, can supply you with exemplars. It allows the blind to see other blind people being successful in a wide range of careers, raising families and being accepted in their local community. A focal point, such as a school for the blind, helps foster that kind of mentoring.

The Internet has expanded the practical meaning of the word community. New technology platforms are rarely designed to be accessible to people unlike the designers themselves, but that doesn’t stop everyone who can use them from doing so. For blind people, the Internet has allowed an international community to flourish where there wasn’t much of one before, allowing people with shared experiences, interests, and challenges to forge a communion. Just as important, it has allowed blind people to participate in society in ways that have often otherwise been foreclosed by prejudice. Twitter has been at the heart of this, helping bring blind people from many countries and all walks of life together. It represents one of the most empowering aspects of the Internet for people with disabilities — its fundamentally textual nature and robust API, supporting an ecosystem of innovative accessible apps, have made it an equalizer. Behind the keyboard, no one need know you’re blind or have any other disability, unless you choose to let them know.

With the mainstreaming of blind kids now the norm, real-world networking opportunities are less frequent. That’s why the Internet has become such an important tool in the “blind community.” While there’s never been a better time in history to be blind, the best could be yet to come — provided the new shape the Internet takes remains accessible to everyone. In terms of being able to live a quality, independent life without sight, the Internet has been the most dramatic change in the lives of blind people since the invention of Braille. I can still remember having to go into a bank to ask the teller to read my bank balances to me, cringing as she read them in a very loud, slow voice (since clearly a blind person needs to be spoken to slowly).

Because of how scattered the blind community is and how much desire there is for us to share information and experiences, tech-savvy blind people were early Internet adopters. In the 1980s, as a kid with a 2400-baud modem, I’d make expensive international calls from New Zealand to a bulletin-board system in Pittsburgh that had been established specifically to bring blind people together. My hankering for information, inspiration, and fellowship meant that even as a cash-strapped student, I felt the price of the calls was worth paying.

Blind people from around the world have access to many technologies that get us online. Windows screen readers speak what’s on the screen, and optionally make the same information available tactually via a Braille display. Just as some sighted people consider themselves “visual learners,” so some blind people retain information better when it’s under their fingertips. Yes, contrary to popular belief, Braille is alive and well, having enjoyed a renaissance thanks to refreshable Braille display technology and products like commercial eBooks.

Outside the Windows environment, Apple is the exemplary player. Every Mac and iOS device includes a powerful screen reader called VoiceOver. Before Apple added VoiceOver to the iPhone 3GS in 2009, those of us who are blind saw the emergence of touch screens as a real threat to our hard-won gains. We’d pick up an iPhone, and as far as we were concerned, it was a useless piece of glass. Apple came up with a paradigm that made touch screens usable by the blind, and it was a game changer. Android has a similar product which, we hope, will continue to mature.

All this assistive technology means that the technological life I lead isn’t much different from that of a sighted person. I’m sitting at my desk in my office, writing this article in Microsoft Word. Because I lack the discipline to put my iPhone on “Do Not Disturb”, the iPhone is chiming at me from time to time, and I lean over to check the notification. Like other blind people, I use the Internet to further my personal and professional interests that have nothing to do with blindness.

But social trends haven’t kept up with technological ones. It’s estimated that in the United States, around 70 percent of working-age blind people are unemployed. And the biggest barrier posed by blindness is not lack of sight – it’s other people’s ignorance. Since sight is such a dominant sense, a lot of potential employers close their eyes and think, “I couldn’t do this job if I couldn’t see, so she surely can’t either”. They forget that blindness is our normality. Deprive yourself of such a significant source of information by putting on a blindfold, and of course you’re going to be disorientated. But that’s not the reality we experience. It’s perfectly possible to function well without sight.

Just as there are societal barriers, we’ve yet to reach an accessible tech utopia – far from it. Blind people are inhibited in our full participation in society because not all online technologies are accessible to screen reading software. Most of this problem is due to poor design, some of it due to the choices made by content creators. Many blind people enjoy using Twitter, because text messages of 140 characters are at its core. If you tell me in a tweet what a delicious dinner you’ve had, I can read that and be envious. If you simply take a picture of your dinner and don’t include any text in the tweet, I’m out of the loop. Some blind people were concerned when reporters appeared to have uncovered a new feature that allowed full tweets to be embedded in other tweets as an image, which would have meant the conversations that thrived on this platform would be out of reach for our screen readers. Twitter, to its credit, reached out to us and made clear this was not the case. But even though it turned out to be a false alarm, the episode brought home to many of us just how fragile accessibility really is.

My voice is sometimes not heard on popular mainstream sites, due to a technology designed to thwart spam bots. Many fully sighted people complain about CAPTCHA, the hard-to-read characters one sometimes needs to type into a form before submitting it. Since these characters are graphical, they can stop a blind person in their tracks. Plug-ins can assist in many cases, and sometimes an audio challenge is offered. But the audio doesn’t help people who are deaf as well as blind. It’s encouraging to see an increasing number of sites trying mathematical or simple word puzzles that keep the spammers out but let disabled people in.
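Those puzzles are technically trivial, which is part of their appeal. A minimal sketch of one in Python, with invented names — a challenge a screen reader can simply read aloud:

```python
import random

def make_challenge():
    """Build a plain-text arithmetic question any screen reader can speak."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} plus {b}?", a + b

def check_answer(submitted, expected):
    """Be lenient about whitespace; a stray space shouldn't lock anyone out."""
    try:
        return int(submitted.strip()) == expected
    except ValueError:
        return False

question, answer = make_challenge()
print(question)                # e.g. "What is 3 plus 8?"
print(check_answer(" 7 ", 7))  # True
# On a real site the expected answer would live server-side in the session,
# never in the page markup where a spam bot could scrape it.
```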

Many in the media seem wary of a “post-text Internet,” a term popularized by economics blogger Felix Salmon in a post explaining why he was joining a television station, Fusion. “Text has had an amazing run, online, not least because it’s easy and cheap to produce,” he wrote. But for digital storytelling, “the possibilities are much, much greater.” Animation, videos, and images appeal to him as an arsenal of tools for a more “immersive” experience. If writers feel threatened by this new paradigm, he suggests, it’s because they’re unwilling to experiment with new models. But for blind people, the threat could be much more grave.

Some mobile apps and websites, despite offering information of interest, are inaccessible. Usually this is because links and buttons containing images don’t offer alternative textual labels. This is where the worry about being shut out of a “post-text” internet feels most acute. While adding text is an easy way to ensure access to everyone, a wholesale shift in the Internet’s orientation from text to image would further enable designers’ often lax commitment to accessibility.

I feel good about how the fusion of mainstream and assistive technologies has facilitated inclusion, but the pace of technological change is frenetic. Hard-won gains are easily lost. It’s therefore essential that we as a society come down on the side of technologies that allow access for all.
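Missing labels are also easy to detect mechanically. A rough sketch of such an audit in Python, assuming the third-party requests and beautifulsoup4 packages; the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

def find_unlabeled_images(url):
    """List <img> tags that give a screen reader nothing to announce."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    problems = []
    for img in soup.find_all("img"):
        alt = (img.get("alt") or "").strip()
        if not alt:  # absent or empty alt text
            problems.append(img.get("src", "<no src>"))
    return problems

# Hypothetical usage:
for src in find_unlabeled_images("https://example.com"):
    print("Missing alt text:", src)
```

Checks like this are cheap; what is scarce is the design culture that treats a failing result as a bug.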

While we must be vigilant, there is cause to be optimistic. Blindness often begins to hit teenagers hard at the time their sighted peers are starting to drive. Certainly, not being able to get into a car and drive is a major annoyance of blindness. As a dad to four kids, I have to plan our outings a lot more carefully, because of the need to rely on public transport. Self-driving car technology has the potential to change the lives of blind people radically.

While concerns persist about Google’s less than stellar track record on accessibility, products like Google Glass could potentially be used to provide feedback based on a combination of object/face recognition and crowd-sourcing that could help us navigate unfamiliar surroundings more efficiently. Add to that the ability to fully control currently inaccessible, touch-screen-based appliances, and the “Internet of things” has potential for mitigating the impact of blindness – provided we as a society choose to proceed inclusively.

Not only has the Internet expanded the concept of “community”, it has redefined the ways in which traditional communities engage with one another. I don’t need to go to the supermarket and ask for a shelf-packer to help me shop; I can investigate the overwhelming number of choices of just about any product, and take my pick, totally independently. When I interact with any person or business online, they need not know I’m blind, unless I choose to tell them. To disclose or not to disclose is my choice, in any situation. That’s liberating and empowering.

But to fulfill all the promise of the Internet, just as someone in a wheelchair can negotiate a curb cut, open a door, or use an elevator, so we must make sure the life-changing power of the Internet is available to us all – whether we see it, hear it, or touch it.

Link: What I've Learned as an Internet Drug Dealer

Many fans of Bitcoin would like to distance the technology from the reputation it has gained as the currency of choice for drug dealers and criminals on the internet. But to ignore the cryptocurrency’s use in illicit markets is to miss a vital part of what has made it successful. Food trucks and floundering satellite television companies accepting digital cash is nice, but if you want to see where the real action is in this new economy, you need to enter the deep web.

So one afternoon, I downloaded the Tor Browser, checked out /r/darknetmarkets for site recommendations, found one I liked, and took the dive.

Of course, I had heard about these sites for a long time, but it was still shocking to see what was on offer on my computer screen: row after row of listings for heroin, meth, MDMA, weed, coke, and any other drug you could want. It was Amazon for drugs, all priced in Bitcoin, all available for convenient vacuum-sealed delivery to the mailing address of your choice.

Much like other e-commerce sites, every vendor on the site had a username and rating to help customers know which of these strangers selling potent narcotics over the internet had a track record of trustworthiness. One name in particular stood out, a vendor who had hundreds of completed sales with a nearly flawless feedback rating.

I sent a simple message identifying myself as a journalist and asking if he or she would be interested in an interview. To my surprise, a quick response appeared in my inbox: he or she would be happy to chat. The only stipulations were that we use PGP encrypted messaging and that I did not include his or her actual username in my article. The dealer chose the handle “RainDuck” for our interview.
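PGP itself is decades-old, well-understood technology. A minimal sketch of that kind of exchange via the python-gnupg wrapper — it assumes GnuPG is installed locally, and the recipient fingerprint below is a placeholder, not anyone's real key:

```python
import gnupg

gpg = gnupg.GPG()  # uses the local GnuPG install and keyring

recipient = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder fingerprint
message = "First question: why did you become a vendor?"

# Encrypt to the recipient's public key; only the matching private key decrypts.
encrypted = gpg.encrypt(message, recipient)

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into any message box
else:
    print("Encryption failed:", encrypted.status)  # e.g. the key isn't on the keyring
```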

Over the next week we exchanged messages about being a vendor on the dark web site Evolution, integrity on the black market, what it’s like to run a business that’s dependent on the Bitcoin network, and the war on drugs—at a time when that war is shifting to the web.

It’s not easy to estimate the amount of drugs sold online, but estimates by the United Nations and others say the market is multiplying in size. To stop the internet trade, the UN says that postal inspectors, customs agents, and “other agencies” are “vital to ensure that points in the supply chain could be more effectively cut off and make it more difficult for buyers to obtain products.”

A number of questions to RainDuck went nowhere: when I asked for RainDuck’s age, he apologized. “I’m sorry but I can’t give even an approximate answer to that question. I’m old enough that I can do this safely, but not old enough to die of natural causes. That’s the best answer I can give, somewhere between 25 and 90.”

Motherboard: Why did you become a vendor?
RainDuck: I became a vendor after quite a bit of experience starting as a buyer. When I discovered the darknet markets, I saw an opportunity to avoid the shadiness that comes with buying drugs from a friend of a friend of that one guy that I met at a bar. I could buy drugs from someone after reading dozens of reviews on their service and product, and feel confident that I was getting what I was paying for.

Unfortunately vendors online can rip people off just as drug dealers in person can. There is a degree of safety, but some vendors follow a pattern of providing legitimate service for a short period of time before ripping a bunch of people off and running away with the money.

I saw an opportunity to provide a legitimate service to my customers. I became a vendor and made it a point to prove that I am honest and trustworthy. I made a name for myself and became known as the type of person who you could trust. I’ve had many opportunities to rip people off without repercussions, but I’ve never once scammed someone. Reputation is everything on the darknet markets, and establishing myself as a trustworthy individual has been far more profitable for me than being a con artist. To summarize, I saw an opportunity to provide a degree of service that is uncommon in the world of drugs, and decided to fill that void.

Were you involved in this industry before your current account? And if so, how long have you been in the business?
I have indeed been involved prior to my current account. Unfortunately I can’t go into specifics. Staying anonymous is the most important factor to any vendor who values his/her freedom, and being in the spotlight is not always a good thing. When someone has too much attention drawn to them it’s sometimes best to step back and lie low for a while, and that applies to the internet just as much as to drug dealers in real life.

You are currently using a centralized marketplace. What are your thoughts on decentralized marketplaces (e.g., the Dark Market project) and what they mean for the future of online commerce?
Good question. To those who don’t know, centralized marketplaces hold all of the money for you. Tens of thousands of buyers and vendors will trust the marketplace to hold their money in escrow, and it releases the funds from the buyer to the seller when both users confirm that the transaction has been completed.

The downside to this model is that the amount of money the marketplaces hold at a time can reach hundreds of millions of dollars, and it’s held by someone who has the opportunity to run away with the money at any time. In the last year there have been several marketplaces that have run away with a total of well over a billion dollars. Many people’s lives have been ruined by money loss, and the community as a whole is very distrustful of this business model after several recent scams.

Decentralized marketplaces limit their own power, and rather than keeping the money in their own account, they essentially hold “keys” to the accounts that the money is held in. Two people must use their keys in order to unlock the funds from escrow, whether those two people are the buyer and vendor, or the buyer and the marketplace, or the vendor and the marketplace. This allows the marketplace the ability to resolve issues without giving them the freedom to run off with large sums of money.

The downside to this model is that from a technical side it can be very hard to use. So far most of the decentralized marketplaces require some degree of programming knowledge, or external software, or otherwise are too complicated for the average user. For that reason most of the decentralized marketplaces attract less traffic. In the long run, I believe that we will move almost entirely to using decentralized markets, but it may be another year or two before the sites are streamlined to allow both the buyers and the vendors to use this kind of marketplace easily.
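The release rule RainDuck describes is, in Bitcoin terms, a 2-of-3 multisignature arrangement. A toy sketch of just the release logic in Python — not real Bitcoin script, with the three parties named for clarity:

```python
# Toy model of 2-of-3 escrow: funds move only when two of three parties agree.
KEY_HOLDERS = {"buyer", "vendor", "marketplace"}

def can_release_funds(signatures):
    """True when at least two of the three recognized key holders have signed."""
    return len(set(signatures) & KEY_HOLDERS) >= 2

print(can_release_funds({"buyer", "vendor"}))       # True: the happy path
print(can_release_funds({"buyer", "marketplace"}))  # True: dispute resolved, buyer refunded
print(can_release_funds({"marketplace"}))           # False: the market alone can't abscond
```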

What is your average revenue and profit in a month?
Unfortunately that’s not a question I feel comfortable answering. I can say that there is a very large amount of money that can be made in this industry, and I make more than enough.

Did you have prior business experience before coming to this industry?
I did. Most people don’t consider selling drugs to be a business, but the successful vendors treat it just like any other business. It’s important to have good time management skills, accounting skills, as well as customer service skills. Being a vendor online is just like owning or managing a business—the only difference being that the government decided that what we do is illegal.

What do you think is the biggest misconception about dark net markets?
I think there are two equally large but opposing misconceptions. Some people believe that using darknet markets as a buyer is extremely dangerous and they don’t feel comfortable doing so because they think if a few grams of weed is sent to them that they will undoubtedly go to jail. Others are too confident in the safety of the markets, and they will openly talk about sensitive information that could easily lead to their arrest.

The reality is somewhere in the middle. For security reasons most marketplaces require buyers to encrypt their addresses using special software that only allows a specific person to read it. However there are a shocking number of people who don’t encrypt sensitive info and openly admit online to crimes that could easily lead to their arrest if the information were in the wrong hands.

At the same time, law enforcement for the most part is after the large-scale buyers and vendors who are moving large amounts of product. Although if given the opportunity they may try to arrest someone buying a small amount of weed, the truth is that the level of caution needed for someone interested in buying a quarter ounce of weed is completely different than the amount of paranoia and protection needed for someone buying thousands of dollars of product on a regular basis.

Use common sense and protect yourself, but realize that there are a plethora of people who use these sites on a regular basis, and 99.99% of them will never encounter any problems. The system is designed to be relatively safe for the buyers, and in most cases you’re more likely to go to jail for buying drugs in real life than online.

Do you have any qualms about the fact that you may be supporting the problems of drug addicts?
Initially yes, though after becoming more involved in this community I look at things differently. Even among the “hard drugs” such as meth and heroin, many of the people who use them are not bad people, and not all of them are addicted. Most people only see the stereotypes. The truth is that while some people have used drugs, become addicted, and had their lives ruined, you would never suspect a surprising number of the people who use drugs regularly. I regularly get messages from people who confide in me that although they are a successful businessperson, there’s not a single person who knows about their drug use because it’s not socially acceptable.

Prohibition has never worked. It didn’t work with alcohol and it doesn’t work with drugs. People should make their own choices. I’m not here to judge people for what they do, I just want to make sure that if they make that choice, they get it safely, at a fair price, and that they know what’s in it rather than buying cocaine from some random guy that turns out to be laundry detergent. There’s no doubt that drugs can be dangerous, but sometimes the lengths that people are forced to go through to get their drugs are more dangerous than the drugs themselves.

Do you use your own products?
I do occasionally, though I don’t use all of the drugs that I sell. I don’t mix business with pleasure, and I don’t have the time to do so often even if I wanted to. My use of my products is mainly limited to testing them and making sure they are safe before sending them off to my customers.

Drug legalization is slowly gaining traction among policy experts. Do you think, twenty years from now, this industry won’t be relegated to the lesser traveled corners of the internet?
Yes and no. There is increasing pressure to legalize drugs, but unfortunately most of that focus is strictly on marijuana, and that’s at the state level more than the federal level. It’s impossible to say what will happen 20 years from now, but there are too many people who profit off of the fact that drugs are illegal. Prisons, police officers, tobacco companies, and alcohol companies all would lose an unbelievable amount of funding if drugs were legalized.

It’s sad to think that the majority of people in jail right now are there for possession or sale of small amounts of drugs, but unfortunately it’s a cat and mouse game that takes an insane amount of money from taxpayers and puts it in the pockets of the corporations that stand to benefit from the way the laws are structured now.

Only time will tell what will happen, but I doubt that 5, or 10, or even 20 years from now people will be openly doing cocaine, meth, shrooms, heroin, or acid, though I do think that weed has a much larger chance of being completely legalized due to the fact that it’s more socially acceptable.

You say that there are “too many people who profit off of the fact that drugs are illegal” for anyone to expect widespread legalization anytime soon. Do you think that’s the central motivation for the “War on Drugs”? Or is it more about protecting the public?
I think that point of view is very accurate. A lot of people seem to look at it as a conspiracy, but I don’t necessarily think that’s the case. Rather, it’s the people who profit off prohibition who spend very large sums of money to lobby politicians in Washington. It would be naive to think that’s not the case. Polls show that the public is largely in favor of legalization (of certain drugs at least) and yet no one at the political level seems to be in any rush to make things happen.

At the very least, drug use should be legalized. If they want to keep throwing us dealers in jail that’s one thing, but the fact that 98% of drug related arrests involve simple possession is ridiculous, and it’s not okay that millions of people’s lives are being ruined when most of them were simply caught in the wrong place at the wrong time, and for doing something that would be completely okay if they lived in certain states (such as California and Colorado).

This answer refers mainly to marijuana of course, but again the point remains. There are almost as many people who smoke marijuana as there are who drink alcohol, and weed kills significantly fewer people. If we are still putting people in jail for possession of a much safer substance than alcohol, I wouldn’t count on heroin being legalized anytime soon.

You have to worry about law enforcement in your business. Have there been any close calls?
I can say that if I had what I would consider to be a close call I would get out of the business completely, but that doesn’t mean I feel completely safe either. In this business it’s always better to be too paranoid than not paranoid enough.

How do you deal with, what I would imagine to be, the constant stress from this paranoia?
Unfortunately I haven’t figured that part out yet. My business keeps me busy most of the time, and unlike traditional jobs I don’t get vacation days or time off. I’m too busy to focus on the constant stress I endure, and although that may sound pessimistic to some degree, I’m overall very happy with my life. I love what I do, and I love knowing that I’m providing a service not many people can offer.

Mainly it’s the people who purchase drugs not recreationally but for medicinal purposes that make it all worth it. I regularly have people confide in me that my products are the only thing that has relieved their pain, and many of my customers are old enough that they don’t have the ability to buy from a friend of a friend. Knowing that I’m helping to safely provide medication for someone who otherwise wouldn’t have the ability to get it is more rewarding than anything else. That said, the money isn’t bad either.

Would you consider vending online to be a safer option than vending in person?
For the vendors, dealing in person would be safer. For the buyers, the reverse is true. Law enforcement mainly targets the vendors, and most buyers have nothing to worry about unless they are ordering very large amounts of product. There have been reports of buyers being questioned after failing to take the proper steps to protect themselves, but buying online is generally much safer than buying in person for most circumstances.

If dealing in person is safer, then why do you choose to vend online?
Overall dealing in person is safer, but it depends what you are selling, who you are selling to, and how much. If you know what you’re doing, vending online has the potential to be safer than dealing in person, but the risk lies not in what you are doing, but in the mistakes you make. For a vendor, all it takes is one message that is accidentally left unencrypted, one fingerprint left on the inside of a package, or one strand of hair that could potentially lead to their arrest if in the wrong hands.

A vendor that knows what they are doing can be perfectly safe, but unfortunately there’s no college course for being an internet drug dealer. The only way to learn is to try it, but unfortunately this is one industry where making mistakes while learning is not okay. Essentially I choose to vend online because I feel I have the knowledge and ability to do so safely. The majority of people who take the same path, however, are playing Russian roulette: they will either make very little money, quit shortly after, or law enforcement will just wait for them to make a mistake.

Do you run a solo operation or are there employees?
Sorry but I can’t answer that.

How does Bitcoin’s volatility affect your business?
When business is good, bitcoin volatility isn’t an issue, but when business is slow, a drop in the value of bitcoin can be the difference between making a profit and breaking even or even losing money.

Bitcoin can definitely play a huge role in the amount of income vendors make, especially for newer vendors. Vendors that have not earned trust in the community are almost always required by the marketplaces to use their escrow system. On average it can take about a week between the time the package is sent and the time the money is released from escrow, but in some cases if there are problems with an order it can take 3 weeks or more.

In addition, the vendors have to find a way to safely and anonymously convert bitcoin into actual currency, which can take even longer. Considering that bitcoin can stay around the same rate for weeks and then suddenly increase or decrease by hundreds of dollars in a matter of days, newer vendors may find themselves gambling with their profits.

Many vendors who are more established can get away with requiring the funds to be released from escrow before sending packages. Even then it can be several days from the point the order is placed to where the vendor has the money physically in their possession, but it tends to average out over time. If a vendor does a lot of business consistently over time, they can accept short term losses from drops in bitcoin, knowing that at some point they will make more money if bitcoin goes up in the future.
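The arithmetic behind that gamble is easy to sketch. With invented numbers, here is how a week in escrow can shave a margin:

```python
# Back-of-the-envelope sketch of escrow-timing risk; all figures invented.
sale_price_usd = 500.0        # dollars' worth of bitcoin the buyer paid
btc_price_at_sale = 600.0     # USD per BTC when the order was placed
btc_in_escrow = sale_price_usd / btc_price_at_sale  # ~0.833 BTC

cost_of_goods_usd = 350.0

# A week later escrow releases, but the exchange rate has moved.
btc_price_at_release = 480.0  # a 20% drop while the funds sat in escrow
cash_out_usd = btc_in_escrow * btc_price_at_release

print(f"Cashed out ${cash_out_usd:.2f}; profit ${cash_out_usd - cost_of_goods_usd:.2f}")
# Cashed out $400.00; profit $50.00 -- a $150 margin cut to $50 by volatility.
```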

How do you cash out your bitcoins?
Again, answering that question would be a security violation. I go to very great lengths to make sure that I cash out my bitcoins safely, but elaborating on exactly what I do is not something I’m willing to share.

Among people in the “real world” of drug dealing, is Bitcoin gaining a name for itself?
Not at all. The majority of “real world” drug dealers have no idea what Bitcoin is or even that this community exists. Of those who do, they certainly won’t share that information with others. There is a huge opportunity for drug dealers to make a very large amount of money reselling the right products, but no one wants anyone else to know that. There are plenty of people who purchase from the darknet markets and resell the product wholesale, taking advantage of the cheap prices of Chinese-made drugs specifically, but it’s a very small percentage of dealers who do so.

Darkmarket sellers were some of the first in the world to rely heavily on the Bitcoin network for their trade. Considering this wealth of experience, do you think Bitcoin has a legitimate chance at becoming a widely used method of payment for transactions beyond the black market? Or do you think of it as mainly useful for what you do today and nothing else?
I think Bitcoin has the potential to become a widely used method of payment in general. Unfortunately until recently it’s been very unstable, and few legitimate businesses want to take the chance of accepting a payment method that may be worth 20 percent less a few days from now. Most businesses that accept bitcoin are owned by people who believe in the long-term potential of bitcoin, but currently such businesses are few and far between. That said, a small number of very large businesses have recently stated their intentions to accept bitcoin, and I believe that will encourage other smaller businesses to do the same.

Right now the bitcoin community is divided mainly among those who use it as an investment and those who use it for illicit purposes. Fortunately it seems that recently there are efforts by bitcoin investors to use it for more legitimate purposes, and we are seeing a rapidly growing number of businesses offering services such as hotel rooms and flights, as well as retailers offering electronics, furniture, and other commodities. It’s too early to tell exactly how this will play out, but there are enough people who believe strongly in the long term future of bitcoin that I truly believe we will see much more widespread use of it for legitimate purposes. Bitcoin certainly isn’t going away anytime soon.

Do you plan on being in this business for a long time?
I do. I’ve done quite a bit in my life, but nothing has been as satisfying as being a vendor. It’s stressful, dangerous, and time consuming, but the rewards are great.

Link: The Lights Are On but Nobody’s Home

Who needs the Internet of Things? Not you, but corporations who want to imprison you in their technological ecosystem

Prepare yourself. The Internet of Things is coming, whether we like it or not, apparently. Though if the news coverage — the press releases repurposed as service journalism, the breathless tech-blog posts — is to be believed, it’s what we’ve always wanted, even if we didn’t know it. Smart devices, sensors, cameras, and Internet connectivity will be everywhere, seamlessly and invisibly integrated into our lives, and society will become more harmonious through the gain of a million small efficiencies. In this vision, the smart city isn’t plagued by deteriorating infrastructure and underfunded social services but is instead augmented with a dizzying collection of systems that ensure that nothing goes wrong. Resources will be apportioned automatically, mechanics and repair people summoned by the system’s own command. We will return to what Lewis Mumford described as a central feature of the Industrial Revolution: “the transfer of order from God to the Machine.” Now, however, the machines will be thinking for themselves, setting society’s order based on the false objectivity of computation.

According to one industry survey, 73 percent of Americans have not heard of the Internet of Things. Another consultancy forecasts $7.1 trillion in annual sales by the end of the decade. Both might be true, yet the reality is that this surveillance-rich environment will continue to be built up around us. Enterprise and government contracts have floated the industry to this point: To encourage us to buy in, sensor-laden devices will be subsidized, just as smartphones have been for years, since companies can make up the cost difference in data collection.

With the Internet of Things, promises of savings and technological empowerment are being implemented as forces of social control. In Chicago, this year’s host city for Cisco’s Internet of Things World Forum, Mayor Rahm Emanuel has used Department of Homeland Security grants to expand Chicago’s surveillance-camera system into the largest in the country, while the city’s police department, drawing on an extensive database of personal information about residents, has created a “heat list” of 400 people to be tracked for potential involvement in violent crime. In Las Vegas, new streetlights can alert surrounding people to disasters; they also have the ability to record video and audio of the surrounding area and track movements. Sometime this year, Raytheon plans to launch two aerostats — tethered surveillance blimps — over Washington, D.C. In typical fashion, this technology, pioneered in the battlefields of Afghanistan and Iraq, is being introduced to address a non-problem: the threat of enemy missiles launched at our capital. When they are not on the lookout for incoming munitions, the aerostats and their military handlers will be able to enjoy video coverage of the entire metropolitan area.

The ideological premise of the Internet of Things is that surveillance and data production equal a kind of preparedness. Any problem might be solved or pre-empted with the proper calculations, so it is prudent to digitize and monitor everything.

This goes especially for ourselves. The IoT promises users an unending capability to parse personal information, making each of us a statistician of the self, taking pleasure and finding reassurance in constant data triage. As with the quantified self movement, the technical ability for devices to collect and transmit data — what makes them “smart” — is treated as its own achievement; the accumulation of data is represented as its own reward. “In a decade, every piece of apparel you buy will have some sort of biofeedback sensors built in it,” the co-founder of OMsignal told Nick Bilton, a New York Times technology columnist. Bilton notes that “many challenges must be overcome first, not the least of which is price.” But convincing people they need a shirt that can record their heart rate is apparently not one of these challenges.

Vessyl, a $199 drinking cup Valleywag’s Sam Biddle mockingly (and accurately) calls “a 13-ounce, Bluetooth-enabled, smartphone-syncing, battery-powered supercup,” analyzes the contents of whatever you put in it and tracks your hydration, calories, and the like in an app. There is not much reason to use Vessyl, beyond a fetish of the act of measurement. Few people perceive such a knowledge deficit about what they are drinking that they feel they should carry an expensive cup with them at all times. But that has not stopped Vessyl from being written up repeatedly in the press. Wired called Vessyl “a fascinating milestone … a peek into some sort of future.”

But what kind of future? And do we want it? The Internet of Things may require more than the usual dose of high-tech consumerist salesmanship, because so many of these devices are patently unnecessary. The improvements they offer to consumers — where they exist — are incremental, not revolutionary, and always come at some cost to autonomy, privacy, or security. Between stories of baby monitors being hacked, unchecked backdoors, and search engines like Shodan, which allows one to crawl through unsecured, Internet-connected devices, from traffic lights to crematoria, it’s bizarre, if not disingenuous, to treat the ascension of the Internet of Things as foreordained progress.

As if anticipating this gap between what we need and what we might be taught to need, industry executives have taken to the IoT with the kind of grandiosity usually reserved for the Singularity. Their rhetoric is similarly eschatological. “Only one percent of things that could have an IP address do have an IP address today,” said Padmasree Warrior, Cisco’s chief technology and strategy officer, “so we like to say that 99 percent of the world is still asleep.” Maintaining the revivalist tone, she proposed, “It’s up to our imaginations to figure out what will happen when the 99 percent wakes up.”

Warrior’s remarks highlight how consequential marketing, advertising, and the swaggering keynotes of executives will be in creating the IoT’s consumer economy. The world will not just be exposed to new technologies; it will be woken up, given the gift of sight, with every conceivable object connected to the network. In the same way, Nest CEO Tony Fadell, commenting on his company’s acquisition by Google, wrote that his goal has always been to create a “conscious home” — “a home that is more thoughtful, intuitive.”

On a more prosaic level, “smart” has been cast as the logical, prudent alternative to dumb. Sure, we don’t need toothbrushes to monitor our precise brushstrokes and offer real-time reports, as the Bluetooth-enabled, Kickstarter-funded toothbrush described in a recent article in The Guardian can. There is no epidemic of tooth decay that could not be helped by wider access to dental care, better diet and hygiene, and regular flossing. But these solutions are so obvious, so low-tech and quotidian, as to be practically banal. They don’t allow for the advent of an entirely new product class or industry. They don’t shimmer with the dubious promise of better living through data. They don’t allow one to “transform otherwise boring dental hygiene activities into a competitive family game.” The presumption that 90 seconds of hygiene needs competition to become interesting and worth doing is among the more pure distillations of contemporary capitalism. Internet of Things devices, and the software associated with them, are frequently gamified, which is to say that they draw us into performances of productivity that enrich someone else.

In advertising from AT&T and others, the new image of the responsible homeowner is an informationally aware one. His house is always accessible and transparent to him (and to the corporations, backed by law enforcement, providing these services). The smart home, in turn, has its own particular hierarchy, in which the manager of the home’s smart surveillance system exercises dominance over children, spouses, domestic workers, and others who don’t have control of these tools and don’t know when they are being watched. This is being pushed despite the fact that violent crime has been declining in the United States for years, and those who do suffer most from crime — the poor — aren’t offered many options in the Internet of Things marketplace, except to submit to networked CCTV and police data-mining to determine their risk level.

But for gun-averse liberals, ensconced in low-crime neighborhoods, smart-home and digitized home-security platforms allow them to act out their own kind of security theater. Each home becomes a techno-castle, secured by the surveillance net.

The surveillance-laden house may rob children of essential opportunities for privacy and personal development. One AT&T video, for instance, shows a middle-aged father woken up in bed by an alert from his security system. He grabs his tablet computer and, sotto voce, tells his wife that someone’s outside. But it’s not an intruder, he says wryly. The camera cuts to show a teenage girl, on the tail end of a date, talking to a boy outside the home. Will they or won’t they kiss? Suddenly, a garish bloom of light: the father has activated the home’s outdoor lights. The teens realize they are being monitored. Back in the master bedroom, the parents cackle. To be unmonitored is to be free — free to be oneself and to make mistakes. A home ringed with motion-activated lights, sensors, and cameras, all overseen by imperious parents, would allow for little of that.

In the conventional libertarian style, the Internet of Things offloads responsibilities to individuals, claiming to empower them with data, while neglecting to address collective, social issues. And meanwhile, corporations benefit from the increased knowledge of consumers’ habits, proclivities, and needs, even learning information that device owners don’t know themselves.

Tech industry doyen Tim O’Reilly has predicted that “insurance is going to be the native business model for the Internet of Things.” To enact this business model, companies will use networked devices to pull more data on customers and employees and reward behavior accordingly, as some large corporations, like BP, have already done in partnership with health-care companies. As data sources proliferate, opportunities increase for behavioral management as well as on-the-fly price discrimination.

Through the dispersed system of mass monitoring and feedback, behaviors and cultures become standardized, directed at the algorithmic level. A British insurer called Drive Like a Girl uses in-car telemetry to track drivers’ habits. The company says that its data shows that women drive better and are cheaper to insure, so they deserve to pay lower rates. So far, perhaps, so good. Except that the European Union has instituted regulations stating that insurers can’t offer different rates based on gender, so Drive Like a Girl is using tracking systems to get around that rule, reflecting the fear of many IoT critics that vast data collection may help banks, realtors, stores, and other entities dodge the protections put in place by the Fair Credit Reporting Act, HIPAA, and other regulatory measures.

This insurer also exemplifies how algorithmic biases can become regressive social forces. From its name to its site design to how its telematics technology is implemented, Drive Like a Girl is essentializing what “driving like a girl” means — it’s safe, it’s pink, it’s happy, it’s gendered. It is also, according to this actuarial morality, a form of good citizenship. But what if a bank promised to offer loan terms to help someone “borrow like a white person,” premised on the notion that white people were associated with better loan repayments? We would call it discriminatory, question the underlying data and methodologies, and cite histories of oppression and lack of access to banking services. With automated, IoT-driven marketplaces, there is no room to take these complex sensitivities into account.

As the Internet of Things expands, we may witness an uncomfortable feature creep. When the iPhone was introduced, few thought its gyroscopes would be used to track a user’s steps, sleep patterns, or heartbeat. Software upgrades or novel apps can be used to exploit hardware’s hidden capacities, not unlike the way hackers have used vending machines and HVAC systems to gain access to corporate computer networks. Many smart thermostats, for instance, use “geofencing” or motion sensors to detect when people are at home, which allows the device to adjust the temperature accordingly. A company, particularly a conglomerate like Google with its fingers in many networked pies, could use that information to serve up ads on other screens or nudge users towards desired behaviors. As Jathan Sadowski has pointed out here, the relatively trivial benefit of a fridge alerting you when you’ve run out of a product could be used to encourage you to buy specially advertised items. Will you buy the ice cream for which your freezer is offering a coupon? Or will you consult your health-insurance app and decide that it’s not worth the temporary spike in your premiums?
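The geofencing mechanism itself is a few lines of arithmetic, which is part of what makes such feature creep cheap. A toy sketch in Python, with invented coordinates and thresholds:

```python
import math

HOME = (41.88, -87.63)      # latitude/longitude of the house (invented)
GEOFENCE_RADIUS_KM = 0.5    # "home" means within half a kilometer

def distance_km(a, b):
    """Rough great-circle distance between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def target_temperature(phone_location, comfort=21.0, setback=16.0):
    """Comfort setting when the phone is inside the fence, setback otherwise."""
    at_home = distance_km(phone_location, HOME) <= GEOFENCE_RADIUS_KM
    return comfort if at_home else setback

print(target_temperature((41.881, -87.629)))  # near home -> 21.0
print(target_temperature((41.97, -87.90)))    # across town -> 16.0
```

The same presence signal that sets the thermostat is, of course, a record of when the house is empty — which is the point of the paragraph above.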

This combination of interconnectivity and feature creep makes Apple’s decision to introduce platforms for home automation and health monitoring seem rather cunning. Cupertino is delegating much of the work to third-party device makers and programmers — just as it did with its music and app stores — while retaining control of the infrastructure and the data passing through it. (Transit fees will be assessed accordingly.) The writer and editor Matt Buchanan, lately of The Awl, has pointed out that, in shopping for devices, we are increasingly choosing among competing digital ecosystems in which we want to live. Apple seems to have apprehended this trend, but so have two other large industry groups — the Open Interconnect Consortium and the AllSeen Alliance — each offering its own open standard for connecting many disparate devices. Market competition, then, may be one of the main barriers to fulfilling the prophetic promise of the Internet of Things: to make this ecosystem seamless, intelligent, self-directed, and mostly invisible to those within it. For this vision to come true, you would have to give one company full dominion over the infrastructure of your life.

Whoever prevails in this competition to connect, well, everything, it’s worth remembering that while the smartphone or computer screen serves as an access point, the real work — the constant processing, assessment, and feedback mechanisms allowing insurance rates to be adjusted in real-time — is done in the corporate cloud. That is also where the control lies. To wrest it back, we will need to learn to appreciate the virtues of products that are dumb and disconnected once again.

Link: The Melancholy of Subculture Society

If you crack open some of the mustier books about the Internet—you know the ones I’m talking about, the ones that invoke Roland Barthes and discuss the sexual transgressions of MUDs—one of the few still-relevant criticisms is the concern that the Internet, by uniting small groups, will divide larger ones.

Surfing alone

You may remember this as the Bowling Alone thesis applied to the Internet; it got some traction in the late 1990s. The basic idea: electronic entertainment devices grew in sophistication and cheapness as the years passed, until by the 1980s and 1990s they had spread across the globe and devoured multiple generations of children; and these devices are more pernicious than traditional geeky fare inasmuch as they are often best pursued solo. Spending months mastering Super Mario Bros.—all alone—is a bad way to grow up normal.

And then there were none

The four- or five-person Dungeons & Dragons party (with a dungeon master) gives way to the classic arcade with its heated duels and one-upmanship; the arcade gives way to the flickering console in the bedroom, with one person playing Final Fantasy VII alone. The increased graphical realism, the more ergonomic controllers, the introduction of genuinely challenging AI techniques… Trend after trend was rendering a human opponent unnecessary. And gamer after gamer was now playing alone.

Perhaps, the critic says, the rise of the Internet has ameliorated that distressing trend—the trend favored no connectivity at first, but eventually there was enough surplus computing power and bandwidth for massive connectivity to become the order of the day.

It is much more satisfying and social to play MMORPGs on your PC than single-player RPGs, much more satisfying to kill human players in Halo matches than alien AIs. The machines finally connect humans to humans, not human to machine. We’re forced to learn some basic social skills, to maintain some connections. We’re no longer retreating into our little cocoons, interacting with no humans.

Welcome to the N.H.K.!

But, the critic continues, things still are not well. We are still alienated from one another. The rise of the connected machines still facilitates withdrawal and isolation. It presents the specter of the hikikomori, the person who ceases to exist in the physical realm as much as possible. It is a Japanese term, of course. They are five years further into our future than we are (or perhaps one should say, were). Gibson writes, back in 2001:

The Japanese seem to the rest of us to live several measurable clicks down the time line. The Japanese are the ultimate Early Adopters, and the sort of fiction I write behooves me to pay serious heed to that. If you believe, as I do, that all cultural change is essentially technologically driven, you pay attention to the Japanese. They’ve been doing it for more than a century now, and they really do have a head start on the rest of us, if only in terms of what we used to call future shock (but which is now simply the one constant in all our lives).

Gibson also discusses the Mobile Girl and text messaging; that culture began really showing up in America around 2005—Sidekicks, Twitter, etc. You can do anything with a cellphone: order food, do your job, read & write novels, maintain a lively social life, engage in social status envy (She has a smaller phone, and a larger collection of collectibles on her cellphone strap! OMG!)… Which is just another way of saying You can do anything without seeing people, just by writing digital messages. (And this in a country with one of the most undigitizable writing systems in existence!)

The hikikomori withdraws from all personal contact. The hikikomori does not hang out at the local pub, swilling down the brewskis as everyone cheers on the home team. The hikikomori is not gossiping at the Rotary Club, nor with the Lions or mummers or Veterans or Knights. Hikikomoris do none of that. They aren’t working, they aren’t hanging out with friends.

The paradoxical solitude and omnipotence of the otaku, the new century’s ultimate enthusiast: the glory and terror inherent of the absolute narrowing of personal bandwidth. —William Gibson, “Shiny Balls of Mud” (Tate Magazine, 2002)

So what are they doing with their 16 waking hours a day?

Opting out

But it’s better for us not to know the kinds of sacrifices the professional-grade athlete has made to get so very good at one particular thing…the actual facts of the sacrifices repel us when we see them: basketball geniuses who cannot read, sprinters who dope themselves, defensive tackles who shoot up with bovine hormones until they collapse or explode. We prefer not to consider closely the shockingly vapid and primitive comments uttered by athletes in postcontest interviews or to consider what impoverishments in one’s mental life would allow people actually to think the way great athletes seem to think. Note the way up close and personal profiles of professional athletes strain so hard to find evidence of a rounded human life—outside interests and activities, values beyond the sport. We ignore what’s obvious, that most of this straining is farce. It’s farce because the realities of top-level athletics today require an early and total commitment to one area of excellence. An ascetic focus. A subsumption of almost all other features of human life to one chosen talent and pursuit. A consent to live in a world that, like a child’s world, is very small…[Tennis player Michael] Joyce is, in other words, a complete man, though in a grotesquely limited way…Already, for Joyce, at twenty-two, it’s too late for anything else; he’s invested too much, is in too deep. I think he’s both lucky and unlucky. He will say he is happy and mean it. Wish him well. — David Foster Wallace, “The String Theory” (Esquire, July 1996)

They’re not preoccupied with our culture - they’re participating in their own subculture. It’s the natural progression of the otaku. They are fighting on Azeroth, or fiercely pursuing their dojinshi career, or… There are many subcultures linked and united by the Internet, for good and ill. For every charitable or benevolent subculture (e.g., free software) there is one of mixed benefits (World of Warcraft), and one outright harmful (e.g., pro-eating-disorder communities, child pornography).

The point the critic wants to make is that life is short and a zero-sum game. You lose a third of the day to sleep, another third to making a living, and now you’ve little left. To be really productive, you can’t divide your energies across multiple cultures—you can’t be truly successful in mainstream culture, and at the same time be able to devote enough effort in the field of, say, mechanical models, to be called an Otaking. A straddler takes onto his head the overhead of learning and participating in both, and receives no benefits (he will suffer socially in the esteem of the normals, and will be able to achieve little in his hobby due to lack of time and a desire to not go overboard).

The otaku & hikikomori recognizes this dilemma and he chooses—to reject normal life! He rejects life in the larger culture for his subculture. It’s a simple matter of comparative advantage; it’s easier to be a big fish in a small pond than in a large one.

The bigger screen

Have you ever woken up from a dream that was so much more pleasant than real life that you wish you could fall back to sleep and return to the dream?…For some, World of Warcraft is like a dream they don’t have to wake up from - a world better than the real world because their efforts are actually rewarded. —Half Sigma, “Status, masturbation, wasted time, and WoW”

EVE Online is unique in gaming in that we have always played on the same massive server in the same online universe since May 2003 when it first went live. We not only understand the harsh penalties for failure, but also how longevity and persistence is rewarded with success. When you have over 60,000 people on weekends dealing, scheming, and shooting each other it attracts a certain type of gamer. It’s not a quick fix kind of game. We enjoy building things that last, be they virtual spaceships or real life friendships that together translate into massive Empires and enduring legacies. Those of us who play understand that one man really can truly make a difference in our world. —Mark “Seleene” Heard, “Vile Rat” eulogy (2012)

As ever more opt out, the larger culture is damaged. The culture begins to fragment back into pieces. The disconnect can be profound; an American anime geek has more in common with a Japanese anime geek (who is of a different ethnicity, a different culture, a different religion, a different language…) than he does with an American involved in the evangelical Christian subculture. There is essentially no common ground - our 2 countrymen probably can’t even agree on objective matters like governance or evolution!

With enough of these gaps, where is American or French culture? Such cultural identities take centuries to coalesce—France did not speak French until the 1900s (as The Discovery of France recounts), and Han China is still digesting and assimilating its many minorities and outlying regions. America, of course, had it relatively easy in starting with a small founder population which could just exterminate the natives.

The national identity fragments under the assault of burgeoning subcultures. At last, the critic beholds the natural endpoint of this process: the nation is some lines on a map, some laws you follow. No one particularly cares about it. The geek thinks, Meh: here, Canada, London, Japan, Singapore—as long as FedEx can reach me and there’s a good Internet connection, what’s the difference? (Nor are the technically-inclined alone in this.)

You can test this yourself. Tell yourself—the country I live in now is the best country in the world for people like me; I would be terribly unhappy if I was exiled. If your mental reply goes something like, Why, what’s so special about the USA? It’s not particularly economically or politically free, it’s not the only civilized English-speaking country, it’s not the wealthiest…, then you are headed down the path of opting out.

This is how the paradox works: the Internet breaks the larger culture by letting members flee to smaller subcultures. And the critics think this is bad. They like the broader culture, and deep down, furries and latex fetishists really bother them. They just plain don’t like those weirdo deviants.

But I can get a higher score!

In the future, everyone will be world-famous for 15 minutes.

Let’s look at another angle.

Monoculture

Irony has only emergency use. Carried over time, it is the voice of the trapped who have come to enjoy their cage.

One can’t opt out of culture. There is no view from nowhere. To a great extent, we are our cultural artifacts—our possessions, our complexes of memes, our habits and objects of disgust are all cultural. You are always part of a culture.

Suppose there were only 1 worldwide culture, with no subcultures. The overriding obsession of this culture will be… let’s make it money. People are absolutely obsessed with money—how it is made, acquired, degraded, etc. More importantly, status is defined just by how much you have earned in your life; in practice, tie-breakers include how fast you made it, what circumstances you made it in (everyone admires a person who became a billionaire in a depression more than a good-times billionaire, in the same way we admire the novelist in the freezing garret more than the comfortable academic), and so on.

This isn’t too absurd a scenario: subjects feed on themselves and develop details and complexity as effort is invested in them. Money could well absorb the collective efforts of 7 billion people—already many people act just this way.

But what effect does this have on people? I can tell you: the average person is going to be miserable. If everyone genuinely buys into this culture, then they have to be. Their talents at piano playing, or cooking, or programming, or any form of artistry or scholarly pursuit are denigrated and count for naught. The world has become too big - it did not use to be so big, nor people so powerless over what goes on around them:

"Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole.

Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its senses does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes…. —Langdon Winner, Autonomous Technology

You can’t make a mark on it unless there are almost as many ways to make marks as there are persons.

To put it another way: women suffer enough from comparing themselves to media images. If you want a vision of this future, imagine everyone being an anorexic teenager who hates her body—forever.

We all value social esteem. We need to know somebody thinks well of us. We’re tribal monkeys; ostracism means death.

Jaron Lanier: I’d like to hypothesize one civilizing force, which is the perception of multiple overlapping hierarchies of status. I’ve observed this to be helpful in work dealing with rehabilitating gang members in Oakland. When there are multiple overlapping hierarchies of status there is more of a chance of people not fighting their superior within the status chain. And the more severe the imposition of the single hierarchy in people’s lives, the more likely they are to engage in conflict with one another. Part of America’s success is the confusion factor of understanding how to assess somebody’s status.

Steven Pinker: That’s a profound observation. There are studies showing that violence is more common when people are confined to one pecking order, and all of their social worth depends on where they are in that hierarchy, whereas if they belong to multiple overlapping groups, they can always seek affirmations of worth elsewhere. For example, if I do something stupid when I’m driving, and someone gives me the finger and calls me an asshole, it’s not the end of the world: I think to myself, I’m a tenured professor at Harvard. On the other hand, if status among men in the street was my only source of worth in life, I might have road rage and pull out a gun. Modernity comprises a lot of things, and it’s hard to tease them apart. But I suspect that when you’re not confined to a village or a clan, and you can seek your fortunes in a wide world, that is a pacifying force for exactly that reason.

Think of the people you know. How many of them can compete on purely financial grounds? How many can compare to the chimps at the top of the financial heap without feeling like an utter failure, a miserable loser? Not many. I can’t think of anyone I know who wouldn’t be at least a little unhappy. Some of them are pretty well off, but it’s awfully hard to compare with billionaires in their department. There’s no way to prove that this version of subcultures is the right one (perhaps fragmenting the culture fragments the possible status), but when I look at simple models, this version seems plausible to me, and it seems to explain some deep trends like monogamy.
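
Here is one such simple model, a minimal sketch under my own assumptions (the essay specifies none): everyone gets a random talent, joins one status hierarchy, and derives satisfaction that decays with the number of people ranked above them.

```python
import random

def average_satisfaction(population: int, num_hierarchies: int) -> float:
    # Assign each person to a random hierarchy with a random "talent",
    # rank people within each hierarchy, and score satisfaction as
    # 1 / (1 + rank) -- an assumed, purely illustrative function.
    groups = [[] for _ in range(num_hierarchies)]
    for _ in range(population):
        random.choice(groups).append(random.random())
    total = 0.0
    for group in groups:
        group.sort(reverse=True)               # best talent gets rank 0
        for rank, _talent in enumerate(group):
            total += 1.0 / (1.0 + rank)
    return total / population

random.seed(0)
print(average_satisfaction(100_000, 1))      # one monoculture: ~0.00012
print(average_satisfaction(100_000, 1_000))  # many subcultures: ~0.05
```

Under this cartoonish satisfaction function, splitting one hierarchy of 100,000 into a thousand hierarchies of roughly a hundred raises average satisfaction by about two orders of magnitude, which is the intuition at work here.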

Subcultures set you free

If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself. Enjoy your achievements as well as your plans. Keep interested in your own career, however humble; it is a real possession in the changing fortunes of time.

Having a society in which an artist can mingle as a social equal with the billionaire, the Nobel scientist, and the philanthropist is fundamental to our mental health! If I’m a programmer, I don’t need to be competing with 7 billion people, and the few hundred billionaires, for self-esteem. I can just consider the computing community. Better yet, I might only have to consider the functional programming community, or perhaps just the Haskell programming community. Or to take another example: if I decide to commit to the English Wikipedia subculture, as it were, instead of American culture, I am no longer mentally dealing with 300 million competitors and threats; I am dealing with just a few thousand.

It is a more manageable tribe. It’s closer to the Dunbar number, which still applies online. Even if I’m on the bottom of the Wikipedia heap, that’s fine. As long as I know where I am! I don’t have to be a rich elite to be happy; a master craftsman is content, and a cat may look at a king.

Leaving a culture, and joining a subculture, is a way for the monkey mind to cope with the modern world.


Link: Forever Alone: Why Loneliness Matters in the Social Age

I got up and went over and looked out the window. I felt so lonesome, all of a sudden. I almost wished I was dead. Boy, did I feel rotten. I felt so damn lonesome. I just didn’t want to hang around any more. It made me too sad and lonesome.

— J.D. Salinger, The Catcher in the Rye

Loneliness was a problem I experienced most poignantly in college. In the three years I spent at Carnegie Mellon, the crippling effects of loneliness slowly pecked away at my enthusiasm for learning and for life, until I was drowning in an endless depressive haze that never completely cleared until I left Pittsburgh.

It wasn’t for lack of trying either. At the warm behest of the orientation counselors, I joined just the right number of clubs, participated in most of the dorm activities, and tried to expand my social portfolio as much as possible.

None of it worked.

When I sought out CAPS (our student psych and counseling service) for help, the platitudes they offered as advice (“Just put yourself out there!”) only served to confirm my suspicion that loneliness isn’t a very visible problem. (After all, the cure for loneliness isn’t exactly something that can be prescribed. “Have you considered transferring?” they finally suggested, after exhausting their list of thought-terminating clichés. I graduated early instead.)

As prolonged loneliness took its toll, I became very unhappy—to put it lightly—and even in retrospect I have difficulty pinpointing a specific cause. It wasn’t that I didn’t know anyone or failed to make any friends, and it wasn’t that I was alone more than I liked.

Sure, I could point my finger at the abysmally fickle weather patterns of Pittsburgh, or the pseudo-suburban bubble that envelops the campus. There might even be a correlation between my academic dissonance with computer science and my feelings of loneliness. I might also just be an extremely unlikable person.

For whatever reason (or a confluence thereof), the reality remained that I struggled with loneliness throughout my time in college.

+++

I recall a conversation with my friend Dev one particular evening on the patio of our dormitory. It was the beginning of my junior and last year at CMU, and I had just finished throwing an ice cream party for the residents I oversaw as an RA.

“Glad to be back?” he asked as he plopped down on a lawn chair beside me.

“No, not really.”

The sun was setting, and any good feelings about the upcoming semester with it. We made small talk about the school in general, as he had recently transferred, but eventually Dev asked me if I was happy there.

“No, not really.”

“Why do you think you’re so miserable here?”

“I don’t know. A lot of things, I guess. But mostly because I feel lonely. Like I don’t belong, like I can’t relate to or connect with anyone on an emotional level. I haven’t made any quality relationships here that I would look back on with any fond memories. Fuck… I don’t know what to do.”

College, at least for me, was a harrowing exercise in how helplessly debilitating, hopelessly soul-crushing, and at times life-threatening loneliness could be. It’s a problem nobody talks about, and it’s been a subject of much personal relevance and interest.

Loneliness as a Health Problem

A recent article published on Slate outlines the hidden dangers of social isolation. Chronic loneliness, as Jessica Olien discovered, poses serious health risks that impact not only mental health but physiological well-being as well.

The lack of quality social relationships in a person’s life has been linked to an increased mortality risk comparable to that of smoking and alcohol consumption, one that exceeds the influence of other risk factors like physical inactivity and obesity. It’s hard to brush off loneliness as a character flaw or an ephemeral feeling when you realize it kills more people than obesity.

Research also shows that loneliness diminishes sleep quality and impairs physiological function, in some cases reducing immune function and boosting inflammation, which increases risk for diabetes and heart disease.

Why hasn’t loneliness gotten much attention as a medical problem? Olien shares the following observation:

As a culture we obsess over strategies to prevent obesity. We provide resources to help people quit smoking. But I have never had a doctor ask me how much meaningful social interaction I am getting. Even if a doctor did ask, it is not as though there is a prescription for meaningful social interaction.

As a society we look down upon those who admit to being lonely; we cast them out and ostracize them with labels like “loner,” so that they hide behind shame and doubt rather than speak up. This dynamic only makes it harder to devise solutions to what is clearly a larger societal issue, and it certainly calls into question the effects of culture on our perception of loneliness as a problem.

Loneliness as a Culture Problem

Stephen Fry, in a blog post titled Only the Lonely explaining his suicide attempt last year, describes in detail his struggle with depression. His account offers a rare and candid glimpse into a reality of loneliness that those afflicted usually hide from the public:

Lonely? I get invitation cards through the post almost every day. I shall be in the Royal Box at Wimbledon and I have serious and generous offers from friends asking me to join them in the South of France, Italy, Sicily, South Africa, British Columbia and America this summer. I have two months to start a book before I go off to Broadway for a run of Twelfth Night there.

I can read back that last sentence and see that, bipolar or not, if I’m under treatment and not actually depressed, what the fuck right do I have to be lonely, unhappy or forlorn? I don’t have the right. But there again I don’t have the right not to have those feelings. Feelings are not something to which one does or does not have rights.

In the end loneliness is the most terrible and contradictory of my problems.

In the United States, approximately 60 million people, or 20% of the population, feel lonely. According to the General Social Survey, between 1985 and 2004 the number of people with whom the average American discusses important matters decreased from three to two, while the number with no one to discuss important matters with tripled.

Modernization has been cited as a reason for the intensification of loneliness in societies around the world, attributed to greater migration, smaller household sizes, and heavier media consumption.

In Japan, loneliness is an even more pervasive, layered problem, mired in cultural parochialisms. Gideon Lewis-Kraus pens a beautiful narrative in Harper’s describing his foray into the world of Japanese co-sleeping cafés:

“Why do you think he came here, to the sleeping café?”

“He wanted five-second hug maybe because he had no one to hug. Japan is haji culture. Shame. Is shame culture. Or maybe also is shyness. I don’t know why. Tokyo people … very alone. And he does not have … ” She thought for a second, shrugged, reached for her phone. “Please hold moment.”

She held it close to her face, multitouched the screen not with thumb and forefinger but with tiny forefinger and middle finger. I could hear another customer whispering in Japanese in the silk-walled cubicle at our feet. His co-sleeper laughed loudly, then laughed softly. Yukiko tapped a button and shone the phone at my face. The screen said COURAGE.

It took an enormous effort for me to come to terms with my losing battle with loneliness and the ensuing depression at CMU, and an even greater leap of faith to reach out for help. (That it was to no avail is another story altogether.) But what is even more disconcerting to me is that the general stigma against loneliness and mental health issues, hinging on an unhealthy stress culture, makes it hard for afflicted students to seek assistance at all.

As Olien puts it, “In a society that judges you based on how expansive your social networks appear, loneliness is difficult to fess up to. It feels shameful.”

To truly combat loneliness from a cultural angle, we need to start by examining our own fears about being alone and by recognizing that loneliness is often symptomatic of unfulfilled social needs. Most importantly, we need to accept that it’s okay to feel lonely. Fry, signing off on his heartfelt post, offers this insight:

Loneliness is not much written about (my spell-check wanted me to say that loveliness is not much written about—how wrong that is) but humankind is a social species and maybe it’s something we should think about more than we do.

Loneliness as a Technology Problem

Technology, and by extension media consumption in the Internet age, adds the most perplexing (and perhaps the most interesting) dimension to the loneliness problem. As it turns out, technology isn’t necessarily helping us feel more connected; in some cases, it makes loneliness worse.

The amount of time you spend on Facebook, as a recent study found, is inversely related to how happy you feel throughout the day.

Take a moment to watch this video.

It’s a powerful, sobering reminder that our growing dependence on technology to communicate has serious social repercussions, and in it Cohen presents his central thesis:

We are lonely, but we’re afraid of intimacy, while the social networks offer us three gratifying fantasies: 1) That we can put our attention wherever we want it to be. 2) That we will always be heard. 3) That we will never have to be alone.

And that third idea, that we will never have to be alone, is central to changing our psyches. It’s shaping a new way of being. The best way to describe it is:

I share, therefore I am.

Public discourse on the cultural ramifications of technology is certainly not a recent development, and the general sentiment that our perverse obsession with sharing will be humanity’s downfall continues to echo in various forms around the web: articles proclaiming that Instagram is ruining people’s lives, the existence of a section on Reddit called cringepics where people congregate to ridicule things others post on the Internet, the increasing number of self-proclaimed “social media gurus” on Twitter, to name a few.

The signs seem to suggest we have reached a tipping point for “social” media that’s not very social on a personal level, but whether it means a catastrophic implosion or a gradual return to more authentic forms of interpersonal communications remains to be seen.

While technology has been a source of social isolation for many, it has the capacity to alleviate loneliness as well. A study funded by the online dating site eHarmony found that couples who met online are less likely to divorce and report greater marital satisfaction than those who met in real life.

The same model could potentially be applied to friendships, and it’s frustrating to see that there aren’t more startups leveraging this opportunity when the problem is so immediate and in need of solutions. It’s a matter of exposure and education on the truths of loneliness, and unfortunately we’re just not there yet.

+++

The perils of loneliness shouldn’t be overlooked in an increasingly hyperconnected world that often tells another story through rose-tinted lenses. Rather, the gravity of loneliness should be addressed and brought to light as a multifaceted problem, one often muted and stigmatized in our society. I learned firsthand how painfully real a problem loneliness can be, and more should be done to raise awareness of it and to help those affected.

“What do you think I should do?” I looked at Dev as the last traces of sunlight teetered over the top of Morewood Gardens. It was a rhetorical question—things weren’t about to get better.

“Find better people,” he replied.

I offered him a weak smile in return, but little did I know then how prescient those words were.

In the year that followed, I started a fraternity with some of the best kids I’d come to know (Dev included), graduated college and moved to San Francisco, made some of the best friends I’ve ever had, and never looked back, if only to remember, and remember well, that it’s never easy being lonely.

Link: Pandora's Vox

Carmen “humdog” Hermosillo’s essay Pandora’s Vox, an analysis of internet communities, remains startlingly accurate 20 years later. 

When I went into cyberspace I went into it thinking that it was a place like any other place and that it would be a human interaction like any other human interaction. I was wrong when I thought that. It was a terrible mistake. 

The very first understanding that I had that it was not a place like any place and that the interaction would be different was when people began to talk to me as though I were a man. When they wrote about me in the third person, they would say ‘he.’ It interested me to have people think I was ‘he’ instead of ‘she’ and so at first I did not say anything. I grinned and let them think I was ‘he.’ This went on for a little while and it was fun but after a while I was uncomfortable. Finally I said unto them that I, humdog, was a woman and not a man. This surprised them. At that moment I realized that the dissolution of gender-category was something that was happening everywhere, and perhaps it was only just very obvious on the net. This is the extent of my homage to Gender On The Net.

I suspect that cyberspace exists because it is the purest manifestation of the mass (masse) as Jean Baudrillard described it. It is a black hole; it absorbs energy and personality and then re-presents it as spectacle. People tend to express their vision of the mass as a kind of imaginary parade of blue-collar workers, their muscle-bound arms raised in defiant salute. Sometimes in this vision they are holding wrenches in their hands. Anyway, this image has its origins in Marx and it is as Romantic as a dozen long-stemmed red roses. The mass is more like one of those faceless dolls you find in nostalgia-craft shops: limp, cute, and silent. When I say ‘cute’ I am including its macabre and sinister aspects within my definition.

It is fashionable to suggest that cyberspace is some kind of _island of the blessed_ where people are free to indulge and express their Individuality. Some people write about cyberspace as though it were a ’60s utopia. In reality, this is not true. Major online services, like CompuServe and America Online, regularly guide and censor discourse. Even some allegedly free-wheeling (albeit politically correct) boards like the WELL censor discourse. The difference is only a matter of the method and degree. What interests me about this, however, is that to the mass, the debate about freedom of expression exists only in terms of whether or not you can say fuck or look at sexually explicit pictures. I have a quaint view that makes me think that discussing the ability to write ‘fuck’ or worrying about the ability to look at pictures of sexual acts constitutes The Least Of Our Problems surrounding freedom of expression.

Western society has a problem with appearance and reality. It wants to split them off from each other, make one more real than the other, and invest one with more meaning than the other. There are two people who have something to say about this: Nietzsche and Baudrillard. I invoke their names in case somebody thinks I made this up. Nietzsche thinks that the conflict over these ideas cannot be resolved. Baudrillard thinks that it was resolved and that this is how come some people think that communities can be virtual: we prefer simulation (simulacra) to reality. Image and simulacra exert tremendous power upon culture. And it is this tension that informs all the debates about Real and Not-Real that infect cyberspace with regards to identity, relationship, gender, discourse, and community. Almost every discussion in cyberspace, about cyberspace, boils down to some sort of debate about Truth-In-Packaging.

Cyberspace is mostly a silent place. In its silence it shows itself to be an expression of the mass. One might question the idea of silence in a place where millions of user-ids parade around like angels of light, looking to see whom they might, so to speak, consume. The silence is nonetheless present and it is most present, paradoxically, at the moment that the user-id speaks. When the user-id posts to a board, it does so while dwelling within an illusion that no one is present. Language in cyberspace is a frozen landscape.

I have seen many people spill their guts on-line, and I did so myself until, at last, I began to see that I had commoditized myself. Commodification means that you turn something into a product, which has a money-value. In the nineteenth century, commodities were made in factories, which Karl Marx called ‘the means of production.’ Capitalists were people who owned the means of production, and the commodities were made by workers who were mostly exploited. I created my interior thoughts as a means of production for the corporation that owned the board I was posting to, and that commodity was being sold to other commodity/consumer entities as entertainment. That means that I sold my soul like a tennis shoe and I derived no profit from the sale of my soul. People who post frequently on boards appear to know that they are factory equipment and tennis shoes, and sometimes trade sends and e-mail about how their contributions are not appreciated by management.

As if this were not enough, all of my words were made immortal by means of tape backups. Furthermore, I was paying two bucks an hour for the privilege of commodifying and exposing myself. Worse still, I was subjecting myself to the possibility of scrutiny by such friendly folks as the FBI: they can, and have, downloaded pretty much whatever they damn well please. The rhetoric in cyberspace is liberation-speak. The reality is that cyberspace is an increasingly efficient tool of surveillance with which people have a voluntary relationship. 

You may recognize parts of it from Adam Curtis’s documentary All Watched Over by Machines of Loving Grace.

Proponents of so-called cyber-communities rarely emphasize the economic, business-minded nature of the community: many cyber-communities are businesses that rely upon the commodification of human interaction. They market their businesses by appeal to hysterical identification and fetishism no more or less than the corporations that brought us the two-hundred-dollar athletic shoe. Proponents of cyber-community do not often mention that these conferencing systems are rarely culturally or ethnically diverse, although they are quick to embrace the idea of cultural and ethnic diversity. They rarely address the whitebread demographics of cyberspace except when these demographics conflict with the upward-mobility concerns of white, middle-class females under the rubric of orthodox academic Feminism.

Link: Twitter: First Thought, Worst Thought

It’s fascinating and horrifying to observe the spectacles of humiliation generated by social media.

One of the strange and slightly creepy pleasures that I get from using Twitter is observing, in real time, the disappearance of words from my stream as they are deleted by their regretful authors. It’s a rare and fleeting sight, this emergency recall of language, and I find it touching, as though the person had reached out to pluck his words from the air before they could set about doing their disastrous work in the world, making their author seem boring or unfunny or ignorant or glib or stupid. And whenever this happens, I find myself wanting to know what caused this sudden reversal. What were the tweet’s defects? Was it a simple typo? Was there some fatal miscalculation of humor or analysis? Was it a clumsily calibrated subtweet? What, in other words, was the proximity to disaster? I, too, have deleted the occasional tweet; I know the sudden chill of having said something misjudged or stupid, the panicked fumble to strike it from the official record of utterance, and the furtive hope that nobody had time to read it.

Any act of writing creates conditions for the author’s possible mortification. There is, I think, a trace of shame in the very enterprise of tweeting, a certain low-level ignominy to asking a question that receives no response, to offering up a witticism that fails to make its way in the world, that never receives the blessing of being retweeted or favorited. The stupidity and triviality of this worsens, rather than alleviates, the shame, adding to the experience a kind of second-order shame: a shame about the shame. My point, I suppose, is that the possibility of embarrassment is ever-present with Twitter—it inheres in the form itself unless you’re the kind of charmed (or cursed) soul for whom embarrassment is never a possibility to begin with.

It’s fascinating and horrifying to observe the spectacles of humiliation generated by social media at seemingly decreasing intervals, to witness the speed and efficiency with which individuals are isolated and subjected to mass paroxysms of ridicule and condemnation. You may remember that moment, way back in the dying days of 2013, when, in the minutes before boarding a flight to South Africa, a P.R. executive named Justine Sacco tweeted “Going to Africa. Hope I don’t get AIDS. Just kidding! I’m white.” In the twelve hours that she spent en route to Cape Town, aloft and offline, she became the unknowing subject of a kind of ruinous flash-fame: her tweet was posted on Gawker and went viral, drawing the anger and derision of thousands of people who knew only two things about her: that she was the author of this twelve-word disaster of misfired irony and that she was the director of corporate communications for the massive media conglomerate I.A.C. There was a barrage of violent misogyny, terrible in its blunt force and grim inevitability. Somebody sourced Sacco’s flight details, at which point the hashtag #HasJustineLandedYet started doing a brisk trade on Twitter. Somebody else took it upon himself to interview her father at the airport and post the details to Twitter, for the instruction and delight of the hashtag’s followers. The New York Times covered the story. Sacco touched down in Cape Town oblivious to the various ways, bizarre and very real, in which her life had changed. She was, in the end, swiftly and publicly fired.

This was not a celebrity or a politician tweeting something racist or offensive; Sacco was unknown, so this was not a case of a public reputation set off course by a single revealing misstep. This misstep was her public reputation. She will likely be remembered as “that P.R. person who tweeted that awful racist joke that time”; her identity will always be tethered to those four smugly telegraphic sentences, to the memory of how they provided a lightning rod for an electrical storm of anger about heedless white privilege and ignorant racial assumptions. Whether she was displaying these qualities or making a botched attempt at a self-reflexive joke about them—an interpretation which, intentional fallacy be damned, I find pretty plausible—didn’t, in the end, have much bearing on the affair. She became a symbol of everything that is ugly and wrong about the way white people think and don’t think about people of color, about the way the privileged of the planet think and don’t think about the poor. As Roxane Gay put it in an essay on her ambivalence about the public shaming of Sacco: “The world is full of unanswered injustice and more often than not we choke on it. When you consider everything we have to fight, it makes sense that so many people rally around something like the hashtag #HasJustineLandedYet. In this one small way, we are, for a moment, less impotent.”

As Sacco’s flight made its way south, over the heads of the people in whose name the Internet had decided she should be punished, I found myself trying to imagine what she might have been thinking. It was likely, of course, that the tweet wasn’t on her mind at all, that she was thinking about meeting her family at the arrivals lounge in Cape Town, looking forward to the Christmas holiday she was going to spend with them. But then I began imagining that she might, after all, have been thinking of her last tweet, maybe even having second thoughts about it. As early as her takeoff from Heathrow, perhaps, right as the plane broke through the surface of network signals, leaving behind the possibility of tweet-deletion, she may have realized how people would react to her joke, that it might be taken as a reflection of her own corruption or stupidity or malice. By that point, it would have been too late to do anything about it, too late to pluck her words from the air.

And, of course, I wasn’t really imagining Justine Sacco, of whom I knew and still know next to nothing but, rather, myself in her situation: the gathering panic I would feel if it had been me up there, running through the possible interpretations of the awful joke I’d just made and could not unmake—the various things, true and false, it could be taken to reveal about me.

In his strange and unsettling book “Humiliation,” the poet and essayist Wayne Koestenbaum writes about the way in which public humiliation “excites” his empathy. “By imagining what they feel, or might feel,” he writes, “I learn something about what I already feel, what I, as a human being, was born sensing: that we all live on the edge of humiliation, in danger of being deported to that unkind country.” Justine Sacco is a deportee now; I’m trying to imagine what it must be like for her there in that unkind country, those twelve words repeating themselves mindlessly over and over again in her head, how the phrase “Just kidding!”—J.K.! J.K.!—must by now have lost all meaning or have taken on a whole new significance. In this mode of trial and punishment, I sometimes think of social media as being like the terrible apparatus at the center of Kafka’s “In the Penal Colony”: a mechanism of corrective torture, harrowing the letters of the transgression into the bodies of the condemned.

The weird randomness of this sudden mutation of person into meme is, in the end, what’s so haunting. This could just as well have happened to anyone—any of the thousands of people who say awful things on Twitter every day. It’s not that Sacco didn’t deserve to be taken to task, to be scorned for the clumsiness and hurtfulness of her joke; it’s that the corrective was so radically comprehensive and obliterating, and administered with such collective righteous giddiness. This is a new form of violence, a symbolic ritual of erasure where the condemned is made to stand for a whole class of person—to be cast, as an effigy of the world’s general awfulness, into a sudden abyss of fame.

Link: Now is Not Forever: The Ancient Recent Past

Sometimes the Internet surprises us with the past or, to be more precise, its own past. The other day my social media feed started to show the same clip over and over. It was one I had seen years before and forgotten about, back from the bottom of that overwhelming ocean of content available to us at any given moment. Why was it reappearing now, I wondered?

That’s a hard question to answer under any circumstances. My teenage daughter regularly shows me Internet discoveries that date from the mid-2000s. To her, they are fresh; to me, a reminder of just how difficult it is to predict what the storms of the information age will turn up. In the case of the clip I started seeing again the other day, however, the reemergence seemed less than random.

It’s a two-minute feature from a San Francisco television station about the electronic future of journalism, but from way back in 1981, long before the Internet as we know it came into focus. While there is a wide range of film and television from that era readily accessible to us, much of which can be consumed without being struck dumb by its datedness — Scarface or the first Star Wars trilogy, to name two obvious examples — its surviving news broadcasts seem uncanny. Factor in the subject matter of this one, predicting a future that already feels past to us, and the effect is greatly enhanced.

The more I kept seeing this clip in my feed, though, the clearer it became that its uncanniness didn’t just derive from the original feature’s depiction of primitive modems and computer monitors — and a Lady Di hairstyle — but also from the fact that it had returned from the depths of the Internet to remind us, once more, that we did see this world coming.

The information age is doing strange things to our sense of history. If you drive in the United States, particularly in warm-weather places like California or Florida, you won’t have to look too hard to see cars from the 1980s still on the road. But a computer from that era seems truly ancient, as out of sync with our own times as a horse and buggy.

Stranger still is the feeling of datedness that pervades the Internet’s own history. For someone my daughter’s age, imagining life before YouTube is as unsettling a prospect as imagining life before indoor plumbing. And yet, even though she was only seven when the site debuted, she was already familiar with the Internet before then.

But it isn’t just young people who feel cut off from the Internet that existed prior to contemporary social media. Even though I can go on the Wayback Machine to check out sites I was visiting in the 1990s; even though I contributed to one of the first Internet publications, Bad Subjects: Political Education For Everyday Life, and can still access its content with ease; even though I know firsthand what it was like before broadband, when I would wait minutes for a single news story to load, my memories still seem to fail me. I remember, but dimly. I can recall experiences from pre-school in vivid detail, yet struggle to flesh out my Internet past from a decade ago, before I started using Gmail.

What the clip that resurfaced the other day makes clear is that history is more subjective than ever. Some parts seem to be moving at more or less the same pace that they did decades or even centuries ago. But others, particularly those that focus on computer technology, appear to be moving ten or even a hundred times as fast. If you don’t believe me, try picking up the mobile phone you used in 2008.

When he was working on the Passagenwerk, his sprawling project centered on nineteenth-century Parisian shopping arcades, Walter Benjamin made special note of how outdated those proto-malls seemed, less than a century after they had first appeared. These days, the depths of the Internet are full of such places, dormant pages that unnerve us with their “ancient” character, even though they are less than a decade old.

As Mark Fisher brilliantly explains in his book Capitalist Realism, we live at a time when it is easier to imagine the end of the world than the end of capitalism. But there are plenty of people who have just as much difficulty imagining the end of Facebook, even though some of them were on MySpace and Friendster before it. That’s what makes evidence like the clip I’ve been discussing here so important. We need to be reminded that we are capable of living different lives, that we have, in fact, already lived them, so that we can turn our attention to living the lives we actually want to lead.

Link: Neil Postman on Cyberspace (1995)

Author and media scholar Neil Postman, head of the Department of Culture and Communication at New York University, encourages caution when entering cyberspace. His book Technopoly: The Surrender of Culture to Technology puts the computer in historical perspective.

Neil Postman, thank you for joining us. How do you define cyberspace?

Cyberspace is a metaphorical idea which is supposed to be the space where your consciousness is located when you’re using computer technology on the Internet, for example, and I’m not entirely sure it’s such a useful term, but I think that’s what most people mean by it.

How does that strike you, I mean, that your consciousness is located somewhere other than in your body?

Well, the most interesting thing about the term for me is that it made me begin to think about where one’s consciousness is when interacting with other kinds of media, for example, even when you’re reading, where, where are you, what is the space in which your consciousness is located, and when you’re watching television, where, where are you, who are you, because people say with the Internet, for example, it’s a little different in that you’re always interacting or most of the time with another person. And when you’re in cyberspace, I suppose you can be anyone you want, and I think as this program indicates, it’s worth, it’s worth talking about because this is a new idea and something very different from face-to-face co-presence with another human being.

Do you think this is a good thing, or a bad thing, or you haven’t decided?

Well, no, I’ve mostly—(laughing)—I’ve mostly decided that new technology of this kind or any other kind is a kind of Faustian bargain. It always gives us something important but it also takes away something that’s important. That’s been true of the alphabet and the printing press and telegraphy right up through the computer. For instance, when I hear people talk about the information superhighway, it will become possible to shop at home and bank at home and get your texts at home and get entertainment at home and so on, I often wonder if this doesn’t signify the end of any meaningful community life. I mean, when two human beings get together, they’re co-present, there is built into it a certain responsibility we have for each other, and when people are co-present in family relationships and other relationships, that responsibility is there. You can’t just turn off a person. On the Internet, you can. And I wonder if this doesn’t diminish that built-in, human sense of responsibility we have for each other. Then also one wonders about social skills; that after all, talking to someone on the Internet is a different proposition from being in the same room with someone—not in terms of responsibility but just in terms of revealing who you are and discovering who the other person is. As a matter of fact, I’m one of the few people not only that you’re likely to interview but maybe ever meet who is opposed to the use of personal computers in school because school, it seems to me, has always largely been about how to learn as part of a group. School has never really been about individualized learning but about how to be socialized as a citizen and as a human being, so that we, we have important rules in school, always emphasizing the fact that one is part of a group. And I worry about the personal computer because it seems, once again to emphasize individualized learning, individualized activity.

What images come to your mind when you, when you think about what our lives will be like in cyberspace?

Well, the, the worst images are of people who are overloaded with information which they don’t know what to do with, have no sense of what is relevant and what is irrelevant, people who become information junkies.

What do you mean? How do you mean that?

Well, the problem in the 19th century with information was that we lived in a culture of information scarcity, and so humanity addressed that problem beginning with photography and telegraphy in the 1840s. We tried to solve the problem of overcoming the limitations of space, time, and form. And for about a hundred years, we worked on this problem, and we solved it in a spectacular way. And now, by solving that problem, we created a new problem, that people have never experienced before: information glut, information meaninglessness, information incoherence. I mean, if there are children starving in Somalia or any other place, it’s not because of insufficient information. And if crime is rampant in the streets in New York and Detroit and Chicago or wherever, it’s not because of insufficient information. And if people are getting divorced and mistreating their children and their sexism and racism are blights on our social life, none of that has anything to do with inadequate information. Now, along comes cyberspace and the information superhighway, and everyone seems to have the idea that, ah, here we can do it; if only we can have more access to more information faster and in more diverse forms, at long last, we’ll be able to solve these problems. And I don’t think it has anything to do with it.

Do you believe that this–that the fact that people are more connected globally will lead to a greater degree of homogenization of the global society?

Here’s the puzzle about that, Charlayne. When McLuhan talked about the world becoming a global village, and when people ask, as you did, about how connections can be made, everyone seemed to think that the world would become, in some good sense, more homogeneous. But we seem to be experiencing the opposite. I mean, all over the world, we see a kind of reversion to tribalism. People are going back to their tribal roots in order to find a sense of identity. I mean, we see it in Russia, in Yugoslavia, in Canada, in the United States, I mean, in our own country. Why is it that every group now not only is more aware of its own grievances but seems to want its own education? You know, we want an Afro-centric curriculum and a Korean-centric curriculum, and a Greek-centered curriculum. What is it about all this globalization of communication that is making people return to smaller units of identity? It’s a puzzlement.

Well, what do you think the people, society should be doing to try and anticipate these negatives and be able to do something about them?

I think they should–everyone should be sensitive to certain questions. For example, when a new–confronted with a new technology, whether it’s a cellular phone or high definition television or cyberspace or Internet, the question–one question should be: What is the problem to which this technology is a solution? And the second question would be: Whose problem is it actually? And the third question would be: If there is a legitimate problem here that is solved by the technology, what other problems will be created by my using this technology? About six months ago, I bought a new Honda Accord, and the salesman told me that it had cruise control. And I asked him, “What is the problem to which cruise control is the solution?” By the way, there’s an extra charge for cruise control. And he said no one had ever asked him that before but then he said, “Well, it’s the problem of keeping your foot on the gas.” And I said, “Well, I’ve been driving for 35 years. I’ve never found that to be a problem.” I mean, am I using this technology, or is it using me, because in a technological culture, it is very easy to be swept up in the enthusiasm for technology, and of course, all the technophiles around, all the people who adore technology and are promoting it everywhere you turn.

Well, Neil Postman, thank you for all of your cautions.

Link: The Disconnectionists

“Unplugging” from the Internet isn’t about restoring the self so much as it is about stifling the desire for autonomy that technology can inspire.

Once upon a pre-digital era, there existed a golden age of personal authenticity, a time before social-media profiles when we were more true to ourselves, when the sense of who we are was held firmly together by geographic space, physical reality, the visceral actuality of flesh. Without Klout-like metrics quantifying our worth, identity did not have to be oriented toward seeming successful or scheming for attention.

According to this popular fairytale, the Internet arrived and real conversation, interaction, identity slowly came to be displaced by the allure of the virtual — the simulated second life that uproots and disembodies the authentic self in favor of digital status-posturing, empty interaction, and addictive connection. This is supposedly the world we live in now, as a recent spate of popular books, essays, wellness guides, and viral content suggests. Yet they have hope: By casting off the virtual and re-embracing the tangible through disconnecting and undertaking a purifying “digital detox,” one can reconnect with the real, the meaningful — one’s true self that rejects social media’s seductive velvet cage.

That retelling may be a bit hyperbolic, but the cultural preoccupation is inescapable. How and when one looks at a glowing screen has generated its own pervasive popular discourse, with buzzwords like digital detox, disconnection, and unplugging to address profound concerns over who is still human, who is having true experiences, what is even “real” at all. A few examples: In 2013, Paul Miller of tech-news website The Verge and Baratunde Thurston, a Fast Company columnist, undertook highly publicized breaks from the Web that they described in intimate detail (and ultimately posted on the Web). Videos like “I Forgot My Phone” that depict smartphone users as mindless zombies missing out on reality have gone viral, and countless editorial writers feel compelled to moralize broadly about the minutiae of when one checks their phone. But what they are saying may matter less than the fact that they feel required to say it. As Diane Lewis states in an essay for Flow, an online journal about new media,

The question of who adjudicates the distinction between fantasy and reality, and how, is perhaps at the crux of moral panics over immoderate media consumption.

It is worth asking why these self-appointed judges have emerged, why this moral preoccupation with immoderate digital connection is so popular, and how this mode of connection came to demand such assessment and confession, at such great length and detail. This concern-and-confess genre frames digital connection as something personally debasing and socially unnatural, despite the rapidity with which it has been adopted. It’s depicted as a dangerous desire, an unhealthy pleasure, an addictive toxin to be regulated and medicated. That we’d be concerned with how to best use (or not use) a phone or a social service or any new technological development is of course to be expected, but the way the concern with digital connection has manifested itself in such profoundly heavy-handed ways suggests that, in the aggregate, something more significant is happening to make so many of us feel as though our integrity as humans has suddenly been placed at risk.

+++

The conflict between the self as social performance and the self as authentic expression of one’s inner truth has roots much deeper than social media. It has been a concern of much theorizing about modernity and, if you agree with these theories, a mostly unspoken preoccupation throughout modern culture.

Whether it’s Max Weber on rationalization, Walter Benjamin on aura, Jacques Ellul on technique, Jean Baudrillard on simulations, or Zygmunt Bauman and the Frankfurt School on modernity and the Enlightenment, there has been a long tradition of social theory linking the consequences of altering the “natural” world in the name of convenience, efficiency, comfort, and safety to draining reality of its truth or essence. We are increasingly asked to make various “bargains with modernity” (to use Anthony Giddens’s phrase) when encountering and depending on technologies we can’t fully comprehend. The globalization of countless cultural dispositions has replaced the pre-modern experience of cultural order with an anomic, driftless lack of understanding, as described by such classical sociologists as Émile Durkheim and Georg Simmel and in more contemporary accounts by David Riesman (The Lonely Crowd), Robert Putnam (Bowling Alone), and Sherry Turkle (Alone Together).

I drop all these names merely to suggest the depth of modern concern over technology replacing the real with something unnatural, the death of absolute truth, of God. This is especially the case in identity theory, much of which is founded on the tension between seeing the self as having some essential soul-like essence versus its being a product of social construction and scripted performance. From Martin Heidegger’s “they-self,” Charles Horton Cooley’s “looking glass self,” George Herbert Mead’s discussion of the “I” and the “me,”  Erving Goffman’s dramaturgical framework of self-presentation on the “front stage,” Michel Foucault’s “arts of existence,” to Judith Butler’s discussion of identity “performativity,” theories of the self and identity have long recognized the tension between the real and the pose. While so often attributed to social media, such status-posturing performance — “success theater” — is fundamental to the existence of identity.

These theories also share an understanding that people in Western society are generally uncomfortable admitting that who they are might be partly, or perhaps deeply, structured and performed. To be a “poser” is an insult; instead common wisdom is “be true to yourself,” which assumes there is a truth of your self. Digital-austerity discourse has tapped into this deep, subconscious modern tension, and brings to it the false hope that unplugging can bring catharsis.

The disconnectionists see the Internet as having normalized, perhaps even enforced, an unprecedented repression of the authentic self in favor of calculated avatar performance. If we could only pull ourselves away from screens and stop trading the real for the simulated, we would reconnect with our deeper truth. In describing his year away from the Internet, Paul Miller writes,

‘Real life,’ perhaps, was waiting for me on the other side of the web browser … It seemed then, in those first few months, that my hypothesis was right. The internet had held me back from my true self, the better Paul. I had pulled the plug and found the light.

Baratunde Thurston writes,

my first week sans social media was deeply, happily, and personally social […] I bought a new pair of glasses and shared my new face with the real people I spent time with.

Such rhetoric is common. Op-eds, magazine articles, news programs, and everyday discussion frame logging off as reclaiming real social interaction with your real self and other real people. The R in IRL. When the digital is misunderstood as exclusively “virtual,” then pushing back against the ubiquity of connection feels like a courageous re-embarking into the wilderness of reality. When identity performance can be regarded as a by-product of social media, then we have a new solution to the old problem of authenticity: just quit. Unplug — your humanity is at stake! Click-bait and self-congratulation in one logical flaw.

The degree to which inauthenticity seems a new, technological problem is the degree to which I can sell you an easy solution. Reducing the complexity of authenticity to something as simple as one’s degree of digital connection affords a solution the self-help industry can sell. Researcher Laura Portwood-Stacer describes this as that old “neoliberal responsibilization we’ve seen in so many other areas of ‘ethical consumption,’ ” turning social problems into personal ones with market solutions and fancy packaging.

Social media surely change identity performance. For one, they make the process more explicit. The fate of having to live “onstage,” aware of being an object in others’ eyes rather than a special snowflake of spontaneous, uncalculated bursts of essential essence is more obvious than ever — even perhaps for those already highly conscious of such objectification. But that shouldn’t blind us to the fact that identity theater is older than Zuckerberg and doesn’t end when you log off. The most obvious problem with grasping at authenticity is that you’ll never catch it, which makes the social media confessional both inevitable and its own kind of predictable performance.

To his credit, Miller came to recognize by the end of his year away from the Internet that digital abstinence made him no more real than he always had been. Despite his great ascetic effort, he could not reach escape velocity from the Internet. Instead he found an “inextricable link” between life online and off, between flesh and data, imploding these digital dualisms into a new starting point that recognizes one is never entirely connected or disconnected but deeply both. Calling the digital performed and virtual to shore up the perceived reality of what is “offline” is one more strategy to renew the reification of old social categories like the self, gender, sexuality, race and other fictions made concrete. The more we argue that digital connection threatens the self, the more durable the concept of the self becomes.

+++

The obsession with authenticity has at its root a desire to delineate the “normal” and enforce a form of “healthy” founded in supposed truth. As such, it should be no surprise that digital-austerity discourse grows a thin layer of medical pathologization. That is, digital connection has become an illness. Not only has the American Psychiatric Association looked into making “Internet-use disorder” a DSM-official condition, but more influentially, the disconnectionists have framed unplugging as a health issue, touting the so-called digital detox. For example, so far in 2013, The Huffington Post has run 25 articles tagged with “digital detox,” including “The Amazing Discovery I Made When My Phone Died,” “How a Weekly Digital Detox Changed My Life,” “Why We’re So Hooked on Technology (And How to Unplug).” A Los Angeles Times article explored whether the presence of digital devices “contaminates the purity” of Burning Man. Digital detox has even been added to the Oxford Dictionary Online. Most famous, due to significant press coverage, is Camp Grounded, which bills itself as a “digital detox tech-free personal wellness retreat.” Atlantic senior editor Alexis Madrigal has called it “a pure distillation of post-modern technoanxiety.” On its grounds the camp bans not just electronic devices but also real names, real ages, and any talk about one’s work. Instead, the camp has laughing contests.

The wellness framework inherently pathologizes digital connection as contamination, something one must confess, carefully manage, or purify away entirely. Remembering Michel Foucault’s point that diagnosing what is ill is always equally about enforcing what is healthy, we might ask what new flavor of normal is being constructed by designating certain kinds of digital connection as a sickness. Similar to madness, delinquency, sexuality, or any of the other areas whose pathologizing toward normalization Foucault traced, digitality — what is “online,” and how one should appropriately engage that distinction — has become a productive concept around which to organize the control and management of new desires and pleasures. The desire to be heard, seen, informed via digital connection in all its pleasurable and distressing, dangerous and exciting ways comes to be framed as unhealthy, requiring internal and external policing. Both the real/virtual and toxic/healthy dichotomies of digital-austerity discourse point toward a new type of organization and regulation of pleasure, a new imposition of personal techno-responsibility, especially on those who lack autonomy over how and when to use technology. It’s no accident that the focus in the viral “I Forgot My Phone” video wasn’t on the many people distracted by seductive digital information but on the woman who forgets her phone, who is “free” to experience life — the healthy one is the object of control, not the zombies bitten by digitality.

The smartphone is a machine, but it is still deeply part of a network of blood; an embodied, intimate, fleshy portal that penetrates into one’s mind, into endless information, into other people. These stimulation machines produce a dense nexus of desires that is inherently threatening. Desire and pleasure always contain some possibility (a possibility — it’s by no means automatic or even likely) of disrupting the status quo. So there is always much at stake in their control, in attempts to funnel this desire away from progressive ends and toward reinforcing the values that support what already exists. Silicon Valley has made the term “disruption” a joke, but there is little disagreement that the eruption of digitality does create new possibilities, for better or worse. Touting the virtue of austerity puts digital desire to work strictly in maintaining traditional understandings of what is natural, human, real, healthy, normal. The disconnectionists establish a new set of taboos as a way to garner distinction at the expense of others, setting their authentic resistance against others’ unhealthy and inauthentic being.

This explains the abundance of confessions about social media compulsion that intimately detail when and how one connects. Desire can only be regulated if it is spoken about. To neutralize a desire, it must be made into a moral problem we are constantly aware of: Is it okay to look at a screen here? For how long? How bright can it be? How often can I look? Our orientation to digital connection needs to become a minor personal obsession. The true narcissism of social media isn’t self-love but instead our collective preoccupation with regulating these rituals of connectivity. Digital austerity is a police officer downloaded into our heads, making us always self-aware of our personal relationship to digital desire.

Of course, digital devices shouldn’t be excused from the moral order — nothing should or could be. But too often discussions about technology use are conducted in bad faith, particularly when the detoxers and disconnectionists and digital-etiquette-police seem more interested in discussing the trivial differences of when and how one looks at the screen rather than the larger moral quandaries of what one is doing with the screen. But the disconnectionists’ selfie-help has little to do with technology and more to do with enforcing a traditional vision of the natural, healthy, and normal. Disconnect. Take breaks. Unplug all you want. You’ll have different experiences and enjoy them, but you won’t be any more healthy or real.

Link: Why Women Aren't Welcome on the Internet

“Ignore the barrage of violent threats and harassing messages that confront you online every day.” That’s what women are told. But these relentless messages are an assault on women’s careers, their psychological bandwidth, and their freedom to live online. We have been thinking about Internet harassment all wrong.

[…] The examples are too numerous to recount, but like any good journalist, I keep a running file documenting the most deranged cases. There was the local cable viewer who hunted down my email address after a television appearance to tell me I was “the ugliest woman he had ever seen.” And the group of visitors to a “men’s rights” site who pored over photographs of me and a prominent feminist activist, then discussed how they’d “spend the night with” us. (“Put em both in a gimp mask and tied to each other 69 so the bitches can’t talk or move and go round the world, any old port in a storm, any old hole,” one decided.) And the anonymous commenter who weighed in on one of my articles: “Amanda, I’ll fucking rape you. How does that feel?”

None of this makes me exceptional. It just makes me a woman with an Internet connection. Here’s just a sampling of the noxious online commentary directed at other women in recent years. To Alyssa Royse, a sex and relationships blogger, for saying that she hated The Dark Knight: “you are clearly retarded, i hope someone shoots then rapes you.” To Kathy Sierra, a technology writer, for blogging about software, coding, and design: “i hope someone slits your throat and cums down your gob.” To Lindy West, a writer at the women’s website Jezebel, for critiquing a comedian’s rape joke: “I just want to rape her with a traffic cone.” To Rebecca Watson, an atheist commentator, for blogging about sexism in the skeptic community: “If I lived in Boston I’d put a bullet in your brain.” To Catherine Mayer, a journalist at Time magazine, for no particular reason: “A BOMB HAS BEEN PLACED OUTSIDE YOUR HOME. IT WILL GO OFF AT EXACTLY 10:47 PM ON A TIMER AND TRIGGER DESTROYING EVERYTHING.”

A woman doesn’t even need to occupy a professional writing perch at a prominent platform to become a target. According to a 2005 report by the Pew Research Center, which has been tracking the online lives of Americans for more than a decade, women and men have been logging on in equal numbers since 2000, but the vilest communications are still disproportionately lobbed at women. We are more likely to report being stalked and harassed on the Internet—of the 3,787 people who reported harassing incidents from 2000 to 2012 to the volunteer organization Working to Halt Online Abuse, 72.5 percent were female. Sometimes, the abuse can get physical: A Pew survey reported that five percent of women who used the Internet said “something happened online” that led them into “physical danger.” And it starts young: Teenage girls are significantly more likely to be cyberbullied than boys. Just appearing as a woman online, it seems, can be enough to inspire abuse. In 2006, researchers from the University of Maryland set up a bunch of fake online accounts and then dispatched them into chat rooms. Accounts with feminine usernames incurred an average of 100 sexually explicit or threatening messages a day. Masculine names received 3.7.

There are three federal laws that apply to cyberstalking cases; the first was passed in 1934 to address harassment through the mail, via telegram, and over the telephone, six decades after Alexander Graham Bell’s invention. Since the initial passage of the Violence Against Women Act, in 1994, amendments to the law have gradually updated it to apply to new technologies and to stiffen penalties against those who use them to abuse. Thirty-four states have cyberstalking laws on the books; most have expanded long-standing laws against stalking and criminal threats to prosecute crimes carried out online.

But making quick and sick threats has become so easy that many say the abuse has proliferated to the point of meaninglessness, and that expressing alarm is foolish. Reporters who take death threats seriously “often give the impression that this is some kind of shocking event for which we should pity the ‘victims,’” my colleague Jim Pagels wrote in Slate this fall, “but anyone who’s spent 10 minutes online knows that these assertions are entirely toothless.” On Twitter, he added, “When there’s no precedent for physical harm, it’s only baseless fear mongering.” My friend Jen Doll wrote, at The Atlantic Wire, “It seems like that old ‘ignoring’ tactic your mom taught you could work out to everyone’s benefit…. These people are bullying, or hope to bully. Which means we shouldn’t take the bait.” In the epilogue to her book The End of Men, Hanna Rosin—an editor at Slate—argued that harassment of women online could be seen as a cause for celebration. It shows just how far we’ve come. Many women on the Internet “are in positions of influence, widely published and widely read; if they sniff out misogyny, I have no doubt they will gleefully skewer the responsible sexist in one of many available online outlets, and get results.”

So women who are harassed online are expected to either get over ourselves or feel flattered in response to the threats made against us. We have the choice to keep quiet or respond “gleefully.”

But no matter how hard we attempt to ignore it, this type of gendered harassment—and the sheer volume of it—has severe implications for women’s status on the Internet. Threats of rape, death, and stalking can overpower our emotional bandwidth, take up our time, and cost us money through legal fees, online protection services, and missed wages. I’ve spent countless hours over the past four years logging the online activity of one particularly committed cyberstalker, just in case. And as the Internet becomes increasingly central to the human experience, the ability of women to live and work freely online will be shaped, and too often limited, by the technology companies that host these threats, the constellation of local and federal law enforcement officers who investigate them, and the popular commentators who dismiss them—all arenas that remain dominated by men, many of whom have little personal understanding of what women face online every day.

+++

This summer, Caroline Criado-Perez became the English-speaking Internet’s most famous recipient of online threats after she petitioned the British government to put more female faces on its bank notes. (When the Bank of England announced its intentions to replace social reformer Elizabeth Fry with Winston Churchill on the £5 note, Criado-Perez made the modest suggestion that the bank make an effort to feature at least one woman who is not the Queen on any of its currency.) Rape and death threats amassed on her Twitter feed too quickly to count, bearing messages like “I will rape you tomorrow at 9 p.m … Shall we meet near your house?”

Then, something interesting happened. Instead of logging off, Criado-Perez retweeted the threats, blasting them out to her Twitter followers. She called up police and hounded Twitter for a response. Journalists around the world started writing about the threats. As more and more people heard the story, Criado-Perez’s follower count skyrocketed to near 25,000. Her supporters joined in urging British police and Twitter executives to respond.

Under the glare of international criticism, the police and the company spent the next few weeks passing the buck back and forth. Andy Trotter, a communications adviser for the British police, announced that it was Twitter’s responsibility to crack down on the messages. Though Britain criminalizes a broader category of offensive speech than the U.S. does, the sheer volume of threats would be too difficult for “a hard-pressed police service” to investigate, Trotter said. Police “don’t want to be in this arena.” It diverts their attention from “dealing with something else.”

Meanwhile, Twitter issued a blanket statement saying that victims like Criado-Perez could fill out an online form for each abusive tweet; when Criado-Perez supporters hounded Mark Luckie, the company’s manager of journalism and news, for a response, he briefly shielded his account, saying that the attention had become “abusive.” Twitter’s official recommendation to victims of abuse puts the ball squarely in law enforcement’s court: “If an interaction has gone beyond the point of name calling and you feel as though you may be in danger,” it says, “contact your local authorities so they can accurately assess the validity of the threat and help you resolve the issue offline.”

In the weeks after the flare-up, Scotland Yard confirmed the arrest of three men. Twitter—in response to several online petitions calling for action—hastened the rollout of a “report abuse” button that allows users to flag offensive material. And Criado-Perez went on receiving threats. Some real person out there—or rather, hundreds of them—still liked the idea of seeing her raped and killed.

+++

The Internet is a global network, but when you pick up the phone to report an online threat, whether you are in London or Palm Springs, you end up face-to-face with a cop who patrols a comparatively puny jurisdiction. And your cop will probably be a man: According to the U.S. Bureau of Justice Statistics, in 2008, only 6.5 percent of state police officers and 19 percent of FBI agents were women. The numbers get smaller in smaller agencies. And in many locales, police work is still a largely analog affair: 911 calls are immediately routed to the local police force; the closest officer is dispatched to respond; he takes notes with pen and paper.

After Criado-Perez received her hundreds of threats, she says she got conflicting instructions from police on how to report the crimes, and was forced to repeatedly “trawl” through the vile messages to preserve the evidence. “I can just about cope with threats,” she wrote on Twitter. “What I can’t cope with after that is the victim-blaming, the patronising, and the police record-keeping.” Last year, the American atheist blogger Rebecca Watson wrote about her experience calling a series of local and national law enforcement agencies after a man launched a website threatening to kill her. “Because I knew what town [he] lived in, I called his local police department. They told me there was nothing they could do and that I’d have to make a report with my local police department,” Watson wrote later. “[I] finally got through to someone who told me that there was nothing they could do but take a report in case one day [he] followed through on his threats, at which point they’d have a pretty good lead.”

The first time I reported an online rape threat to police, in 2009, the officer dispatched to my home asked, “Why would anyone bother to do something like that?” and declined to file a report. In Palm Springs, the officer who came to my room said, “This guy could be sitting in a basement in Nebraska for all we know.” That my stalker had said that he lived in my state, and had plans to seek me out at home, was dismissed as just another online ruse.

Link: Evgeny Morozov: Texting Toward Utopia

Does the Internet spread democracy?

In 1989 Ronald Reagan proclaimed that “The Goliath of totalitarianism will be brought down by the David of the microchip”; later, Bill Clinton compared Internet censorship to “trying to nail Jell-O to the wall”; and in 1999 George W. Bush (not John Lennon) asked us to “imagine if the Internet took hold in China. Imagine how freedom would spread.”

Such starry-eyed cyber-optimism suggested a new form of technological determinism according to which the Internet would be the hammer to nail all global problems, from economic development in Africa to threats of transnational terrorism in the Middle East. Even so shrewd an operator as Rupert Murdoch yielded to the digital temptation: “Advances in the technology of telecommunications have proved an unambiguous threat to totalitarian regimes everywhere,” he claimed. Soon after, Murdoch bowed down to the Chinese authorities, who threatened his regional satellite TV business in response to this headline-grabbing statement.

Some analysts did not jump on the bandwagon. The restrained tone of one 2003 report stood in marked contrast to prevailing cyber-optimism. The Carnegie Endowment for International Peace’s “Open Networks, Closed Regimes: The Impact of the Internet on Authoritarian Rule” warned: “Rather than sounding the death knell for authoritarianism, the global diffusion of the Internet presents both opportunity and challenge for authoritarian regimes.” Surveying diverse regimes from Singapore to Cuba, the report concluded that the political impact of the Internet would vary with a country’s social and economic circumstances, its political culture, and the peculiarities of its national Internet infrastructure.

Carnegie’s report appeared in the pre-YouTube, -Facebook, -MySpace darkness, so it was easy to overlook the rapidly falling costs of self-publishing and coordination and the implications for online interaction and collaboration, from political networking to Wikipedia. It was still harder to predict the potential effect of the Internet and mobile technology on economic development in the world’s poorest regions, where they currently provide much-needed banking infrastructure (for example, by using unspent air credit on mobile phones as currency), create new markets, introduce educational opportunities, and help to spread information about prevention and treatment of diseases. And hopes remain that the fruits of faster economic development, born of new information technologies, might also be good for democracy.

It is thus tempting to embrace the earlier cyber-optimism, trace the success of many political and democratic initiatives around the globe to the coming of Web 2.0, and dismiss the misgivings of the Carnegie report. Could it be that changes in the Web over the past six years—especially the rise of social networking, blogging, and video and photo sharing—represent the flowering of the Internet’s democratizing potential? This thesis seems to explain the dynamics of current Internet censorship: sites that feature user-generated content—Facebook, YouTube, Blogger—are especially unpopular with authoritarian regimes. A number of academic and popular books on the subject point to nothing short of a revolution, both in politics and information (see, for example, Antony Loewenstein’s The Blogging Revolution or Elizabeth Hanson’s The Information Revolution and World Politics, both published last year). Were the cyber-optimists right after all? Does the Internet spread freedom?

The answer to this question substantially depends on how we measure “freedom.” It is safe to say that the Internet has significantly changed the flow of information in and out of authoritarian states. While Internet censorship remains a thorny issue and, unfortunately, more widespread than it was in 2003, it is hard to ignore the wealth of digital content that has suddenly become available to millions of Chinese, Iranians, or Egyptians. If anything, the speed and ease of Internet publishing have made many previous modes of samizdat obsolete; the emerging generation of dissidents may as well choose Facebook and YouTube as their headquarters and iTunes and Wikipedia as their classrooms.

Many such dissenters have, indeed, made great use of the Web. In Ukraine young activists relied on new-media technologies to mobilize supporters during the Orange Revolution. Colombian protesters used Facebook to organize massive rallies against FARC, the leftist guerrillas. The shocking and powerful pictures that surfaced from Burma during the 2007 anti-government protests—many of them shot by local bloggers with cell phones—quickly traveled around the globe. Democratic activists in Robert Mugabe’s Zimbabwe used the Web to track vote rigging in last year’s elections and used mobile phones to take photos of election results that were temporarily displayed outside the voting booths (later, a useful proof of the irregularities). Plenty of other examples—from Iran, Egypt, Russia, Belarus, and, above all, China—attest to the growing importance of technology in facilitating dissent.

But drawing conclusions about the democratizing nature of the Internet may still be premature. The major challenge in understanding the relationship between democracy and the Internet—aside from developing good measures of democratic improvement—has been to distinguish cause and effect. That is always hard, but it is especially difficult in this case because the grandiose promise of technological determinism—the idealistic belief in the Internet’s transformative power—has often blinded even the most sober analysts.

Consider the arguments that ascribe Barack Obama’s electoral success, in part, to his team’s mastery of databases, online fundraising, and social networking. Obama’s use of new media is bound to be the subject of many articles and books. But to claim the primacy of technology over politics would be to disregard Obama’s larger-than-life charisma, the legacy of the stunningly unpopular Bush administration, the ramifications of the global financial crisis, and John McCain’s choice of Sarah Palin as a running mate. Despite the campaign’s considerable Web savvy, one cannot grant much legitimacy to the argument that it earned Obama his victory.

Yet we are seemingly willing to resort to such technological determinism in the international context. For example, discussions of the Orange Revolution have assigned a particularly important role to text messaging. This is how a 2007 research paper, “The Role of Digital Networked Technologies in the Ukrainian Orange Revolution,” by Harvard’s Berkman Center for Internet and Society, described the impact of text messaging, or SMS:

By September 2004, Pora [the opposition’s youth movement] had created a series of stable political networks throughout the country, including 150 mobile groups responsible for spreading information and coordinating election monitoring, with 72 regional centers and over 30,000 registered participants. Mobile phones played an important role for this mobile fleet of activists. Pora’s post-election report states, ‘a system of immediate dissemination of information by SMS was put in place and proved to be important.’

Such mobilization may indeed have been important in the final effort. But it is misleading to imply, as some recent studies by Berkman staff have, that the Orange Revolution was the work of a “smart mob”—a term introduced by the critic Howard Rheingold to describe self-structuring and emergent social organization facilitated by technology. To focus so singularly on the technology is to gloss over the brutal attempts to falsify the results of the presidential elections that triggered the protests, the two weeks that protesters spent standing in the freezing November air, or the millions of dollars pumped into the Ukrainian democratic forces to make those protests happen in the first place. Regime change by text messaging may seem realistic in cyberspace, but no dictators have been toppled via Second Life, and no real elections have been won there either; otherwise, Ron Paul would be president.

To be sure, technology has a role in global causes. In addition to the tools of direct communication and collaboration now available, the proliferation of geospatial data and cheap and accessible satellite imagery, along with the arrival of user-friendly browsers like Google Earth, has fundamentally transformed the work of specialized NGOs; helped to start many new ones; and allowed, for example, real-life tracking of deforestation and illegal logging. Even indigenous populations previously shut off from technological innovations have taken advantage of online tools.

More importantly, the tectonic shifts in the economics of activism have allowed large numbers of unaffiliated individual activists (some of them toiling part-time or even freelancing) to contribute to numerous efforts. As Clay Shirky argues in Here Comes Everybody: The Power of Organizing Without Organizations, the new generation of protests is much more ad hoc, spontaneous, and instantaneous (another allusion to Rheingold’s “smart mobs”). Technology enables groups to capitalize on different levels of engagement among activists. Operating on Wikipedia’s every-comma-counts ethos, it has finally become possible to harvest the energy of both active and passive contributors. Now, even a forwarded email counts. Such “nano-activism” matters in the aggregate.

So the Internet is making group and individual action cheaper, faster, leaner. But logistics are not the only determinant of civic engagement. What is the impact of the Internet on our incentives to act? This question is particularly important in the context of authoritarian states, where elections and opportunities for spontaneous, collective action are rare. The answer depends, to a large extent, on whether the Internet fosters an eagerness to act on newly acquired information. Whether the Internet augments or dampens this eagerness is both critical and undetermined.

Link: Don’t Be a Stranger

"Online venues that encourage strangers to form lasting friendships are dying out."

[…] When someone asks me how I know someone and I say “the Internet,” there is often a subtle pause, as if I had revealed we’d met through a benign but vaguely kinky hobby, like glassblowing class, maybe. The first generation of digital natives is coming of age, but two strangers meeting online is still suspicious (with the exception of dating sites, whose bare utility has blunted most stigma). What’s more, online venues that encourage strangers to form lasting friendships are dying out. Forums and emailing are being replaced by Facebook, which was built on the premise that people would rather carefully populate their online life with just a handful of “real” friends and shut out all the trolls, stalkers, and scammers. Now that distrust of online strangers is embedded in the code of our most popular social network, it is becoming increasingly rare for people to interact with anyone online they don’t already know.

Some might be relieved. The online stranger is the great boogeyman of the information age; in the mid-2000s, media reports might have had you believe that MySpace was essentially an easily-searchable catalogue of fresh victims for serial killers, rapists, cyberstalkers, and Tila Tequila. These days, we’re warned of “catfish” con artists who create attractive fake online personae and begin relationships with strangers to satisfy some sociopathic emotional need. The term comes from the documentary Catfish and the new MTV reality show of the same name.

The technopanics over online strangers haunting the early social web were propelled by straight-up fear of unknown technology. Catfish shows that the fear hasn’t vanished with social media’s ubiquity; it’s just become as banal as the technology itself. Each episode follows squirrelly millennial filmmaker Nev Schulman as he introduces someone in real life to a close friend or lover they’ve only known online. Things usually don’t turn out as well as they did for me and Austin, to say the least. In the first episode, peppy Arkansas college student Sunny gushes to Schulman over her longtime Internet boyfriend, a male model and medical student named Jamison. They have never met or even video-chatted, but Sunny knows Jamison is The One.

“The chance of us meeting, and the connection we built is really something—once in a lifetime,” Sunny says. But when Schulman calls Jamison’s phone to get his side of the story it’s answered by someone who sounds like a middle-schooler pretending to be ten years older to buy beer at a gas station. Each detail of Jamison’s biography is more improbable than the last. The only surprise when Sunny and Schulman arrive at Jamison’s house in Alabama and learn that the chiseled male model she fell for is actually a sun-deprived young woman named Chelsea, is how completely remorseless Chelsea is about the whole thing.

But Catfish isn’t a cautionary tale about normal people being victimized by weirdos they meet on the Internet. By lowering the stakes from death or financial ruin to heartbreak, Catfish can blame the victim as well as the perpetrator. The hoaxes are so stupidly obvious from the beginning that it’s impossible to feel empathy for targets like Sunny. Who’s really “worse” in this situation: The lonely woman who pretends, poorly, to be a male model on the Internet, or the one who plows time and energy into such an obvious fraud? Catfish indicts the entire practice of online friendship as a depressing massively multiplayer online game in which the deranged entertain the deluded. Catfish is Jerry Springer for the social media age. Like the sad, bickering subjects of Springer’s show, Sunny and Jamison deserve each other.

Catfish has struck such a nerve because it combines old fears of Internet strangers with newer anxieties about the authenticity of online friendship. Recently, an army of op-ed writers and best-selling authors have argued that social media is degrading our real-life relationships. “Friendship is devolving from a relationship to a feeling,” wrote the cultural critic William Deresiewicz in 2009, “from something people share to something each of us hugs privately to ourselves in the loneliness of our electronic caves.” Catfish‘s excruciating climaxes dramatize this argument. We see what happens when people like Sunny treat online friendships as if they’re “real,” and the end result is not pretty, literally.

Today’s skepticism of online relationships would have dismayed the early theorists of the Internet. For them, the ability to communicate with anyone, anywhere, from the privacy of our “electronic caves” was a boon to human interaction. The computer scientist J.C.R. Licklider breathlessly foretold the Internet in a 1968 paper with Robert W. Taylor, “The Computer as a Communication Device”: He imagined that communication in the future would take place over a network of loosely-linked “online interactive communities.” But he also predicted that “life will be happier for the on-line individual, because those with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity.” The ability to associate online with those we find most stimulating would lead to truer bonds than real world relationships determined by arbitrary variables of proximity and social class.

Obviously, we do not today live in a wired utopia where, as Licklider predicted, “unemployment would disappear from the face of the earth forever,” since everyone would have a job maintaining the massive network. But if Licklider was too seduced by the transformative power of the Internet, today’s social media naysayers are as well. To the Death of Friendship crowd, the Internet is a poison goo that corrodes the bonds of true friendship through Facebook’s trivial status updates and boring pictures of pets and kids. While good at selling books and making compelling reality television, this argument misses the huge variety of experience available online. Keener critics understand that our discontent with Facebook can be traced back to the specific values that inform that site. “Everything in it is reduced to the size of its founder,” Zadie Smith writes of Facebook, “Poking, because that’s what shy boys do to girls they’re scared to talk to. Preoccupied with personal trivia, because Mark Zuckerberg thinks the exchange of personal trivia is what ‘friendship’ is.”

Instead of asking, “Is Facebook making us lonely?” and aimlessly pondering Big Issues of narcissism, social disintegration, and happiness metrics, as in a recent Atlantic cover story, we should ask: What exactly is it about Facebook that makes people ask if it’s making us lonely? The answer is in Mark Zuckerberg’s mind: not Mark Zuckerberg the awkward college student, where Zadie Smith finds it, but Mark Zuckerberg the programmer. Everything wrong with Facebook, from its ham-fisted approach to privacy to the underwhelming quality of Facebook friendship, stems from the fact that Facebook models human relations on what Mark Zuckerberg calls “the social graph.”

“The idea,” he’s said, “is that if you mapped out all the connections between people and the things they care about, it would form a graph that connects everyone together.”

Facebook kills Licklider’s dream of fluid “on-line interactive communities” by fixing us on the social graph as surely as our asses rest in our chairs in the real world. The social graph is human relationships modeled according to computer logic. There can be no unknowns on the social graph. In programming, an unknown value is also known as “garbage.” So Facebook requires real names and real identities. “I think anonymity on the Internet has to go away,” explained Randi Zuckerberg, Mark’s sister and Facebook’s former marketing director. No anonymity means no strangers. Catfish wouldn’t happen in Zuckerberg’s ideal Internet, but neither would mine and Austin’s serendipitous friendship. Friendship on Mark Zuckerberg’s Internet is reduced to trading pokes and likes with co-workers or old high school buddies.
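To make the constraint concrete, here is a minimal sketch of a social graph in the Zuckerbergian mold (a hypothetical illustration in Python; the names, fields, and checks are my own invention, not Facebook’s actual code). The structural point is that every node must carry a fixed, verified identity before any edge can exist, so the anonymous stranger is rejected as “garbage” at the door:

    # A toy social graph in the "no unknowns" spirit described above.
    # Hypothetical sketch; nothing here is Facebook's real data model.

    class Person:
        def __init__(self, real_name, verified=False):
            if not real_name or not verified:
                # An unknown value is "garbage": the model refuses it.
                raise ValueError("no anonymous nodes on the social graph")
            self.real_name = real_name
            self.friends = set()

    def connect(a, b):
        # Edges are symmetric "friendships" between two fixed identities.
        a.friends.add(b)
        b.friends.add(a)

    alice = Person("Alice Example", verified=True)
    bob = Person("Bob Example", verified=True)
    connect(alice, bob)

    # A stranger without a verified identity cannot even enter the graph:
    try:
        Person("", verified=False)
    except ValueError as err:
        print(err)  # -> no anonymous nodes on the social graph

A network like Makeoutclub, discussed below, ran on precisely the nodes this model throws away.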

“A computer is not really like us,” wrote Ellen Ullman, a decade before the age of social media. “It is a projection of a very small part of ourselves; that portion devoted to logic, order, rule and clarity.” These are not the values associated with a fulfilling friendship.

But what if a social network operated according to a logic as different from computer logic as an underground punk club is from a computer lab? Once upon a time this social network did exist, and it was called Makeoutclub.com. Nobody much talks about Makeoutclub.com these days, because in technology the only things that remain after the latest revolution changes everything all over again are the heroic myth of the champion’s victory (Facebook) and the loser’s cautionary tale (MySpace). Makeoutclub didn’t win or lose; it barely played the game.

Makeoutclub was founded in 2000, four years before Facebook, and is sometimes referred to as the world’s first social network. It sprang from a different sort of DIY culture than the feel-good Northwest indie vibes of Urban Honking. Makeoutclub was populated by lonely emo and punk kids and founded by a neck-tattooed entrepreneur named Gibby Miller out of his bedroom in Boston.

The warnings of social disintegration and virtual imprisonment sounded by today’s social media skeptics would have seemed absurd to the kids of Makeoutclub. They applied for their account and filled out the rudimentary profile in order to expand their identities beyond lonely real lives in disintegrating suburban sprawl and failing factory towns. Makeoutclub was electrified by the simultaneous realization of thousands of weirdos that they weren’t alone.

With Makeoutclub, journalist Andy Greenwald writes in his book Nothing Feels Good: Punk Rock, Teenagers, and Emo,

Kids in one-parking-lot towns had access not only to style (e.g., black, black glasses), but also what books, ideas, trends, and beliefs were worth buzzing about in the big cities. If, in the past, one wondered how the one-stoplight town in Kansas had somehow birthed a true-blue Smiths fan, now subculture was the same everywhere. Outcasts had a secret hideout. Makeoutclub.com was one-stop shopping for self-makers.

As the name would suggest, Makeoutclub was also an excellent place to hook up. But because it wasn’t explicitly a dating service, courtship on Makeoutclub was free of OKCupid’s mechanical numbness. Sex and love were natural fixations for a community of thousands of horny young people, not a programming challenge to be solved with sophisticated algorithms.

About three years before I met my funny friend Austin on Urban Honking in Portland, Austin met his wife on Makeoutclub.com. Austin told me he joined in 2001 when he was 21 years old, “because it was easy to do and increased my chance of meeting a cute girl I could date.” You could search users by location, which made it easy to find someone in your area. (On Facebook, it’s impossible to search for people without being guided to those you are most likely to already know; results are filtered according to the number of mutual friends you have.) Austin would randomly message interesting-seeming local women whenever he came back home from college and they’d go on dates that almost invariably ended in no making out. In the real world, Austin was awkward.

Makeoutclub brought people together with a Lickliderian common interest, but it didn’t produce a Lickliderian utopia. It was messy; crews with names like “Team Vegan” and “Team Elitist Fucks” battled on the message board, and creeps haunted profiles. But since anyone could try to be an intriguing stranger, the anonymity bred a productive recklessness. One night, around 2004, Austin was browsing Makeoutclub when he found his future wife. By this time, he’d graduated college and moved to Norway on a fellowship, where he fell into a period of intense loneliness. He’d taken again to messaging random women on Makeoutclub to talk to, and that night he messaged Dana, a Canadian who had caught his eye because she was wearing an eye patch in her profile picture.

“I had recently made a random decision that if I met a girl with a patch over her eye, I would marry her,” Austin told me. “I don’t know why I made this decision, but at the time I was making lots of strange decisions.” He explained this to Dana in his first message to her. They joked over instant messenger for a few days, but after a while their contact trailed off.

Months later, after Austin had moved from Norway to New York City, he received a surprising instant message from Dana. It turned out that Dana had meant to message another friend with a similar screenname to Austin’s. They got to chatting again, and Dana said she’d soon be taking a trip to New York City to see the alt-cabaret group Rasputina play. Dana and Austin met up the night before she was supposed to return to Canada. They got along. Dana slept over at Austin’s apartment that night and missed her flight. When Dana got back to Canada they kept in touch, and within a few weeks, Austin asked her to marry him. Today, they’ve been married for over eight years.

Dana and Austin’s relationship, and mine and Austin’s friendship, show that the Licklider dream was not as naïve as it now appears at first glance. If you look to online communities outside of Facebook, strangers are forging real and complex friendships, despite the complaints of op-ed writers. Even today, I’ve met some of my best friends on Twitter, which is infinitely better at connecting strangers than Facebook. Unlike the almost gothic obsession of Catfish’s online lovers, these friendships aren’t exclusively online—we meet up sometimes to talk about the Internet in real life. They are not carried out in a delusional swoon, or by trivial status updates.

These are not brilliant Wordsworth-and-Coleridge type soul-meldings, but they are not some shadow of a “real” friendship. Internet friendship yields a connection that is self-consciously pointless and pointed at the same time: Out of all of the millions of bullshitters on the World Wide Web, we somehow found each other, liked each other enough to bullshit together, and built our own Fortress of Bullshit. The majority of my interactions with online friends consists of perpetuating some in-joke so arcane that nobody remembers how it started or what it actually means. Perhaps that proves the op-ed writers’ point, but this has been the pattern of my friendships since long before I first logged onto AOL, and I wouldn’t have it any other way.

Makeoutclub isn’t dead either, but it seems mired in nostalgia for its early days. This past December, Gibby Miller posted a picture he’d taken in 2000 to Makeoutclub’s forums — it was the splash image for its first winter. It’s a snowy picture of his Boston neighborhood twelve years ago, unremarkable except for the moment of time it represents.

“This picture more than any other brings me back to those days,” Miller wrote in the forum. “All ages shows were off the hook, ‘IRL’ meetups were considered totally weird and meeting someone online was unheard of, almost everyone had white belts and dyed black Vulcan cuts.”

At least the Vulcan cuts have gone out of style.

Link: Cyberspace and the Lonely Crowd

In this essay I have tried to elucidate a number of crucial theses from Guy Debord’s The Society of the Spectacle by reexamining them in view of conditions within the growing digital economy. I have also considered what the spectacle is not in the hope of avoiding the kind of oversimplification of Debord’s theory which is all too common.

The whole life of those societies in which modern conditions of production prevail presents itself as an immense accumulation of spectacles. All that once was directly lived has become mere representation. (Guy Debord, The Society of the Spectacle, thesis 1)

Originally published in Paris in 1967 as La Société du spectacle, Debord’s text, a collection of 221 brief theses organized into nine chapters, is a Marxian aphoristic analysis of the conditions of life in the modern, industrialized world. Here “spectacular society” is arraigned in terms that are simultaneously poetic and precise: deceit, false consciousness, separation, unreality. Debord’s influence today is beyond dispute.

Upon revisiting this book I have been impressed by the immediacy of the theory. For Debord seemed to be describing the most intensively promoted phenomenon of this decade, the planet-wide network of existing and promised digital commodities, services and environments: cyberspace.

Cyberspace is supposed to be about interactivity, connectivity and community. Yet if cyberspace exemplifies the spectacle through the relationships which we will investigate here, it is not about connection at all — paradoxically, it is about separation.

The spectacle appears at once as society itself, as a part of society and as a means of unification. As a part of society, it is that sector where all attention, all consciousness, converges.

That this along with numerous other passages from The Society of the Spectacle seems to describe the imploding virtual world of digital communications is not a coincidence. But note well: the nature of the “unification” in question here is at the heart of Debord’s theory. He continues:

Being isolated — and precisely for that reason — this sector is the locus of illusion and false consciousness; the unity it imposes is merely the official language of generalized separation. (Thesis 3)

As we will see, within the spectacle, as within the regime of technology, cultural differences are made invisible and qualitative distinctions between data, information, knowledge and experience are lost or blurred beyond recognition. Our minds are separated from our bodies; in turn we are separated from each other, and from the non-technological world.

The Transformation of Knowledge

…the spectacle is by no means the outcome of a technical development perceived as natural; on the contrary, the society of the spectacle is a form that chooses its own technical content. (Thesis 24)

What can it mean to say the spectacle chooses its own content? The words of Jean-François Lyotard offer some explanation. Lyotard is concerned with the transformation of knowledge through the changing operations of language, including the rise of computer languages. In his book The Postmodern Condition, he discusses ways in which the proliferation of information-processing machines will profoundly affect the circulation of learning. He writes:

The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language.

Editing, in any medium, has always been a valorizing process with aesthetic as well as practical costs and benefits. But this is something different. There is an inevitable and incalculable loss of context and connotation involved in getting objects “into the computer,” not to mention the purely technical thresholds of information density (resolution, throughput, bandwidth, etc.).

Contrary to our technocratic wishful thinking, there are much deeper problems here than technical ones. For when all information is to be digitized, that which is not digitized will cease to have value, and that which is “on-line” will acquire a significance out of all proportion to its real meaning.

The spectacle manifests itself as an enormous positivity, out of reach and beyond dispute. All it says is: “Everything that appears is good; whatever is good will appear.” (Thesis 12)

Of course a transformation of this magnitude is not unprecedented, as we know from the examination of typographic and printing technology in Marshall McLuhan’s The Gutenberg Galaxy. Neither is it going unnoticed. Lest we forget that we are in the midst of a “revolution,” we are reminded of it daily by a thousand advertisers. But who can say what kind of distortion is taking place when all qualitative relationships are miraculously transformed into quantitative ones?

The Transformation of Ourselves

Though separated from his product, man is more and more, and ever more powerfully, the producer of every detail of his world. The closer life comes to being his own creation, the more drastically he is cut off from that life. (Thesis 33)

What is our role in this epistemological shift? Why are we allowing it, and how are we changed by it? For Jerry Mander, the extended use of technology involves an inevitable adaptation:

Humans who use cars sit in fixed positions for long hours following a strip of gray pavement, with eyes fixed forward, engaged in the task of driving. As long as they are driving, they are living within what we might call “roadform.” McLuhan told us that cars “extended” the human feet, but he put it the wrong way. Cars replaced human feet.

Following this logic, Allucquere Rosanne Stone has written a number of enthusiastic but cautionary investigations of “prosthetic” communications technology and its positive potential to decouple the gendered subject from the physical body. To find the most incisive answers, however, we must return to McLuhan and the myth of Narcissus. The word Narcissus, McLuhan tells us, comes from the Greek word narcosis, or numbness:

The youth Narcissus mistook his own reflection in the water for another person. This extension of himself by mirror numbed his perceptions until he became a servomechanism of his own extended or repeated image. The nymph Echo tried to win his love, but in vain. He was numb. He had adapted to his extension of himself and had become a closed system.

For the solipsist, there is no problem here: in this view, one cannot know anything other than the contents of one’s own mind or consciousness — the mind is always a closed system.

When we are enthralled by any immersive virtual environment, the body seems to become mere baggage (or “meat”). Any synthetic illusion sufficiently well resolved to convince or even confuse the senses can capture our undivided attention. So why should we not try to pack up and move in? If perception is constructed, then there is no reason to privilege the “real” — there is no “real” at all.

Suppose we allow that reality is not “an inherent property of the external world” but instead is “largely an internally generated construct of the nervous system.” All the more reason, then, to recognize the principal operative condition of every synthetic environment: sensory deprivation. The relative poverty of any artificially generated experience seems quite evident when compared to a day spent in the country, our attention cast toward the infinity of events surrounding us.

It is the desire for immortality and for control, the kind of control and self-empowerment we are denied in everyday life, that drives us. Virtual reality is not an antidote to the anaesthetizing built environment. It is simply a different formulation of the same drug.

The Promise of Total Connection

The spectacle … makes no secret of what it is, namely, hierarchical power evolving on its own, in its separateness, thanks to an increasing productivity based on an ever more refined division of labor, an ever greater comminution of machine-governed gestures, and an ever widening market. In the course of this development all community and critical awareness have ceased to be…. (Thesis 25)

Cybernetics, the transdisciplinary subject which gives its name to cyberspace, originated in the 1940s as the science of control and communication in the animal and the machine. It thus concerns itself with the flow of messages, and the problem of controlling this flow to ensure the proper functioning of some complex system, be it organic or artificial. So what happens when the system in question is a social system?

"Virtual community" is the latest in a series of oxymoronic expressions used to articulate the indispensibility of computers, which will allegedly unleash the forces to reconstitute mass society as the "public" once again. Of course the promise of a fully wired planet is not new, and we are all familiar with the basic connotations of McLuhan’s "global village." What is new is the feverish pitch of these claims that computers will return us to an ideal form of participatory democracy, a new "Athens without slaves."

Not everyone shares this New Age optimism. There are some dissenting voices even among the digerati (as the digital intelligentsia are known). According to Larry Keeley, a number of attendees at a recent TED (Technology, Entertainment and Design) conference:

… disagreed that the Internet is, or ever could be, a true community. [Author Daniel] Boorstin observed that seeking brings us together and finding separates us. The Internet, which makes finding very easy, substitutes commonality of interests for shared long term goals.

Clearly the race to become wired is fueled by some anxiety. Just how far will it take us?

The Growth of the System

There can be no freedom apart from activity, and within the spectacle all activity is banned — a corollary of the fact that all real activity has been forcibly channeled into the global construction of the spectacle. So what is referred to as “liberation from work,” that is, increased leisure time, is a liberation neither within labor itself nor from the world labor has brought into being. (Thesis 27)

Why is the Internet, currently said to incorporate millions of computers and tens of millions of users, growing at a rate of 20% per month?

Knowledge is power, as the saying goes, and the concept of a 500-channel infobahn has triggered the gold rush of the information age. There is a lot of liberal rhetoric about the need to avert a system of information haves and have-nots. Yet the Western economies are charging ahead surrounded by chronic workaholism and chronic unemployment — two sides of the same postindustrial coin.

The Society of the Spectacle is Not About Images

The spectacle cannot be understood either as a deliberate distortion of the visual world or as a product of the technology of the mass dissemination of images. It is far better viewed as a weltanschauung that has been actualized, translated into the material realm — a world view transformed into an objective force. (Thesis 5)

Until recently the Internet was largely a world of text; one writer called it the place where people “do the low ASCII dance.” (Low ASCII refers to the basic character set on American keyboards: upper- and lowercase letters, numbers, basic punctuation.) Yet there is no doubt that even now, in ever-increasing proportions, the Internet and virtually all other manifestations of cyberspace are carrying more than raw text. Images, sounds, compressed animations, entire radio shows and video sequences are already available over the net as digitized files. By definition cyberspace will come to represent data through spatial forms rather than purely alphanumeric ones.

However, this evolution is not pertinent to my argument. Neither am I claiming that the increasing commercialization of the net is the real threat, though this is as inevitable as it is regrettable. I suggest that the central issue is the problem of representation — in particular, computer-mediated communication — not the presence or absence of visual images.

More precisely, it has to do with reification.

… the spectacle’s job is to cause a world that is no longer directly perceptible to be seen via different specialized mediations…. It is the opposite of dialogue. Wherever representation takes on independent existence, the spectacle reestablishes its rule. (Thesis 18)

It’s About Capital, Stupid

The spectacle is not a collection of images; rather it is a social relationship between people that is mediated by images. (Thesis 4)

The Society of the Spectacle is not about images. It’s about the manufacture of lack and the manipulation of desire. It’s about separation and isolation.

The telephone is a piece of technology that almost no one chooses to do without. It facilitates “communication.” Consider the phone sex advertisements in any major metropolitan centre. What are all these buyers and sellers looking for? In whose interest is this circulation of desire, labour and credit being orchestrated?

Isolation underpins technology, and technology isolates in its turn; all goods proposed by the spectacular system, from cars to televisions, also serve as the weapons for that system as it strives to reinforce the isolation of “the lonely crowd.” (Thesis 28)