Sunshine Recorder

Link: "We Need to Talk About TED"

This is my rant against TED, placebo politics, “innovation,” middlebrow megachurch infotainment, etc., given at TEDx San Diego at their invitation (thank you to Jack Abbott and Felena Hanson). It’s very difficult to do anything interesting within the format, and even this seems like far too much of a ‘TED talk’, especially to me. In California R&D World, TED (and TED-ism) is unfortunately a key forum for how people communicate with one another. It’s weird, inadequate and symptomatic, to be sure, but it is one of ‘our’ key public squares, however degraded and captured. Obviously any sane intellectual wouldn’t go near it. Perhaps that’s why I was (am) curious about what (if any) reverberation my very minor heresy might have: probably nothing, and at worst an alibi and vaccine for TED to ward off the malaise that stalks them? We’ll have to see. The text of the talk is below, and was also published as an Op-Ed by The Guardian.

In our culture, talking about the future is sometimes a polite way of saying things about the present that would otherwise be rude or risky.

But have you ever wondered why so little of the future promised in TED talks actually happens? So much potential and enthusiasm, and so little actual change. Are the ideas wrong? Or is the idea about what ideas can do all by themselves wrong?

I write about entanglements of technology and culture, how technologies enable the making of certain worlds, and at the same time how culture structures how those technologies will evolve, this way or that. It’s where philosophy and design intersect.

So the conceptualization of possibilities is something that I take very seriously. That’s why I, and many people, think it’s way past time to take a step back and ask some serious questions about the intellectual viability of things like TED.

So my TED talk is not about my work or my new book—the usual spiel—but about TED itself, what it is and why it doesn’t work.

The first reason is over-simplification.

To be clear, I think that having smart people who do very smart things explain what they are doing in a way that everyone can understand is a good thing. But TED goes way beyond that.

Let me tell you a story. I was at a presentation that a friend, an Astrophysicist, gave to a potential donor. I thought the presentation was lucid and compelling (and I’m a Professor of Visual Arts here at UC San Diego so at the end of the day, I know really nothing about Astrophysics). After the talk the sponsor said to him, “you know what, I’m gonna pass because I just don’t feel inspired… you should be more like Malcolm Gladwell.”

At this point I kind of lost it. Can you imagine?

Think about it: an actual scientist who produces actual knowledge should be more like a journalist who recycles fake insights! This is beyond popularization. This is taking something with value and substance and coring it out so that it can be swallowed without chewing. This is not the solution to our most frightening problems—rather this is one of our most frightening problems.

So I ask the question: does TED epitomize a situation in which a scientist (or an artist or philosopher or activist or whoever) is told that their work is not worthy of support, because the public doesn’t feel good listening to them?

I submit that Astrophysics run on the model of American Idol is a recipe for civilizational disaster.

What is TED?

So what is TED exactly?

Perhaps it’s the proposition that if we talk about world-changing ideas enough, then the world will change. But this is not true, and that’s the second problem.

TED of course stands for Technology, Entertainment, Design, and I’ll talk a bit about all three. I think TED actually stands for: middlebrow megachurch infotainment.

The key rhetorical device for TED talks is a combination of epiphany and personal testimony (an “epiphimony” if you like) through which the speaker shares a personal journey of insight and realization, its triumphs and tribulations.

What is it that the TED audience hopes to get from this? A vicarious insight, a fleeting moment of wonder, an inkling that maybe it’s all going to work out after all? A spiritual buzz?

I’m sorry but this fails to meet the challenges that we are supposedly here to confront. These are complicated and difficult and are not given to tidy just-so solutions. They don’t care about anyone’s experience of optimism. Given the stakes, making our best and brightest waste their time (and the audience’s time) dancing like infomercial hosts is too high a price. It is cynical.

Also, it just doesn’t work.

Recently there was a bit of a dust-up when TED Global sent out a note to TEDx organizers asking them not to book speakers whose work spans the paranormal, the conspiratorial, New Age “quantum neuroenergy,” etc.: what is called Woo. Instead of these placebos, TEDx should curate talks that are imaginative but grounded in reality. In fairness, they took some heat, so their gesture should be acknowledged. A lot of people take TED very seriously, and might lend credence to specious ideas if stamped with TED credentials. “No” to placebo science and medicine.

But… the corollaries of placebo science and placebo medicine are placebo politics and placebo innovation. On this point, TED has a long way to go.

Perhaps the pinnacle of placebo politics and innovation was featured at TEDx San Diego in 2011. You’re familiar, I assume, with Kony 2012, the social media campaign to stop war crimes in Central Africa? So what happened here? Evangelical surfer bro goes to help kids in Africa. He makes a campy video explaining genocide to the cast of Glee. The world finds his public epiphany to be shallow to the point of self-delusion. The complex geopolitics of Central Africa are left undisturbed. Kony’s still there. The end.

You see, when inspiration becomes manipulation, inspiration becomes obfuscation. If you are not cynical you should be skeptical. You should be as skeptical of placebo politics as you are of placebo medicine.

T and Technology

T - E - D. I’ll go through them each quickly.

So first Technology…

We hear that not only is change accelerating but that the pace of change is accelerating as well.

While this is true of computational carrying-capacity at a planetary level, at the same time—and in fact the two are connected—we are also in a moment of cultural de-acceleration.

We invest our energy in futuristic information technologies, including our cars, but drive them home to kitsch architecture copied from the 18th century. The future on offer is one in which everything changes, so long as everything stays the same. We’ll have Google Glass, but still also business casual.

This timidity is our path to the future? No, this is incredibly conservative, and there is no reason to think that more Gigaflops will inoculate us.

Because, if a problem is in fact endemic to a system, then the exponential effects of Moore’s Law also serve to amplify what’s broken. It is more computation along the wrong curve, and I don’t think this is necessarily a triumph of reason.

Part of my work explores deep technocultural shifts, from post-humanism to the post-anthropocene, but TED’s version has too much faith in technology, and not nearly enough commitment to technology. It is placebo technoradicalism, toying with risk so as to re-affirm the comfortable.

So our machines get smarter and we get stupider. But it doesn’t have to be like that. Both can be much more intelligent. Another futurism is possible.

E and Economics

A better ‘E’ in TED would stand for Economics, and the need for, yes, imagining and designing different systems of valuation, exchange, accounting of transaction externalities, financing of coordinated planning, etc. Because States plus Markets, States versus Markets, these are insufficient models, and our conversation is stuck in Cold War gear.

Worse is when economics is debated like metaphysics, as if the reality of a system is merely a bad example of the ideal.

Communism in theory is an egalitarian utopia.

Actually existing Communism meant ecological devastation, government spying, crappy cars and gulags.

Capitalism in theory is rocket ships, nanomedicine, and Bono saving Africa.

Actually existing Capitalism means Walmart jobs, McMansions, people living in the sewers under Las Vegas, Ryan Seacrest… plus ecological devastation, government spying, crappy public transportation and for-profit prisons.

Our options for change range from basically what we have plus a little more Hayek, to what we have plus a little more Keynes. Why?

The most recent centuries have seen extraordinary accomplishments in improving quality of life. The paradox is that the system we have now—whatever you want to call it—is in the short term what makes the amazing new technologies possible, but in the long run it is also what suppresses their full flowering. Another economic architecture is prerequisite.

D and Design

Instead of our designers prototyping the same “change agent for good” projects over and over again, and then wondering why they don’t get implemented at scale, perhaps we should resolve that design is not some magic answer. Design matters a lot, but for very different reasons. It’s easy to get enthusiastic about design because, like talking about the future, it is more polite than referring to white elephants in the room.

Such as…

Phones, drones and genomes, that’s what we do here in San Diego and La Jolla. In addition to the other insanely great things these technologies do, they are the basis of NSA spying, flying robots killing people, and the wholesale privatization of biological life itself. That’s also what we do.

The potential of these technologies is both wonderful and horrifying at the same time, and to make them serve good futures, design as “innovation” just isn’t a strong enough idea by itself. We need to talk more about design as “immunization,” actively preventing certain potential “innovations” that we do not want from happening.

And so…

As for one simple takeaway… I don’t have one simple takeaway, one magic idea. That’s kind of the point. I will say that if and when the key problems facing our species were to be solved, then perhaps many of us in this room would be out of work (and perhaps in jail).

But it’s not as though there is a shortage of topics for serious discussion. We need a deeper conversation about the difference between digital cosmopolitanism and Cloud Feudalism (and toward that, a queer history of computer science and Alan Turing’s birthday as a holiday!)

I would like new maps of the world, ones not based on settler colonialism, legacy genomes and bronze age myths, but instead on something more… scalable.

TED today is not that.

Problems are not “puzzles” to be solved. That metaphor assumes that all the necessary pieces are already on the table, they just need to be re-arranged and re-programmed. It’s not true.

“Innovation” defined as moving the pieces around and adding more processing power is not some Big Idea that will disrupt a broken status quo: that precisely is the broken status quo.

One TED speaker said recently, “If you remove this boundary, …the only boundary left is our imagination.” Wrong.

If we really want transformation, we have to slog through the hard stuff (history, economics, philosophy, art, ambiguities, contradictions). Bracketing it off to the side to focus just on technology, or just on innovation, actually prevents transformation.

Instead of dumbing-down the future, we need to raise the level of general understanding to the level of complexity of the systems in which we are embedded and which are embedded in us. This is not about “personal stories of inspiration,” it’s about the difficult and uncertain work of de-mystification and re-conceptualization: the hard stuff that really changes how we think. More Copernicus, less Tony Robbins.

At a societal level, the bottom line is if we invest in things that make us feel good but which don’t work, and don’t invest in things that don’t make us feel good but which may solve problems, then our fate is that it will just get harder to feel good about not solving problems.

In this case the placebo is worse than ineffective, it’s harmful. It diverts your interest, enthusiasm and outrage until it’s absorbed into this black hole of affectation.

Keep calm and carry on “innovating”… is that the real message of TED? To me that’s not inspirational, it’s cynical.

In the U.S. the right-wing has certain media channels that allow it to bracket reality… other constituencies have TED.  

Link: What's Wrong with the Modern World

While we are busy tweeting, texting and spending, the world is drifting towards disaster, believes Jonathan Franzen, whose despair at our insatiable technoconsumerism echoes the apocalyptic essays of the satirist Karl Kraus – ‘the Great Hater.’

Karl Kraus was an Austrian satirist and a central figure in fin-de-siecle Vienna’s famously rich life of the mind. From 1899 until his death in 1936, he edited and published the influential magazine Die Fackel (The Torch); from 1911 onward, he was also the magazine’s sole author. Although Kraus would probably have hated blogs, Die Fackel was like a blog that everybody who mattered in the German-speaking world, from Freud to Kafka to Walter Benjamin, found it necessary to read and have an attitude toward. Kraus was especially well known for his aphorisms – for example, “Psychoanalysis is that disease of the mind for which it believes itself to be the cure” – and at the height of his popularity he drew thousands to his public readings.

The thing about Kraus is that he’s very hard to follow on a first reading – deliberately hard. He was the scourge of throwaway journalism, and to his cult-like followers his dense and intricately coded style formed an agreeable barrier to entry; it kept the uninitiated out. Kraus himself remarked of the playwright Hermann Bahr, before attacking him: “If he understands one sentence of the essay, I’ll retract the entire thing.” If you read Kraus’s sentences more than once, you’ll find that they have a lot to say to us in our own media-saturated, technology-crazed, apocalypse-haunted historical moment.

Here, for example, is the first paragraph of his essay “Heine and the Consequences”.

"Two strains of intellectual vulgarity: defenselessness against content and defenselessness against form. The one experiences only the material side of art. It is of German origin. The other experiences even the rawest of materials artistically. It is of Romance origin. [Romance meaning Romance-language — French or Italian.] To the one, art is an instrument; to the other, life is an ornament. In which hell would the artist prefer to fry? He’d surely still rather live among the Germans. For although they’ve strapped art into the Procrustean Folding Bed of their commerce, they’ve also made life sober, and this is a blessing: fantasy thrives, and every man can put his own light in the barren windowframes. Just spare me the pretty ribbons! Spare me this good taste that over there and down there delights the eye and irritates the imagination. Spare me this melody of life that disturbs my own music, which comes into its own only in the roaring of the German workday. Spare me this universal higher level of refinement from which it’s so easy to observe that the newspaper seller in Paris has more charm than the Prussian publisher."

First footnote: Kraus’s suspicion of the “melody of life” in France and Italy still has merit. His contention here – that walking down a street in Paris or Rome is an aesthetic experience in itself – is confirmed by the ongoing popularity of France and Italy as vacation destinations and by the “envy me” tone of American Francophiles and Italophiles announcing their travel plans. If you say you’re taking a trip to Germany, you’d better be able to explain what specifically you’re planning to do there, or else people will wonder why you’re not going someplace where life is beautiful. Even now, Germany insists on content over form. If the concept of coolness had existed in Kraus’s time, he might have said that Germany is uncool.

This suggests a more contemporary version of Kraus’s dichotomy: Mac versus PC. Isn’t the essence of the Apple product that you achieve coolness simply by virtue of owning it? It doesn’t even matter what you’re creating on your Mac Air. Simply using a Mac Air, experiencing the elegant design of its hardware and software, is a pleasure in itself, like walking down a street in Paris. Whereas, when you’re working on some clunky, utilitarian PC, the only thing to enjoy is the quality of your work itself. As Kraus says of Germanic life, the PC “sobers” what you’re doing; it allows you to see it unadorned. This was especially true in the years of DOS operating systems and early Windows.

One of the developments that Kraus will decry in this essay – the Viennese dolling-up of German language and culture with decorative elements imported from Romance language and culture – has a correlative in more recent editions of Windows, which borrow ever more features from Apple but still can’t conceal their essential uncool Windowsness. Worse yet, in chasing after Apple elegance, they betray the old austere beauty of PC functionality. They still don’t work as well as Macs do, and they’re ugly by both cool and utilitarian standards.

And yet, to echo Kraus, I’d still rather live among PCs. Any chance that I might have switched to Apple was negated by the famous and long-running series of Apple ads aimed at persuading people like me to switch. The argument was eminently reasonable, but it was delivered by a personified Mac (played by the actor Justin Long) of such insufferable smugness that he made the miseries of Windows attractive by comparison. You wouldn’t want to read a novel about the Mac: what would there be to say except that everything is groovy? Characters in novels need to have actual desires; and the character in the Apple ads who had desires was the PC, played by John Hodgman. His attempts to defend himself and to pass himself off as cool were funny, and he suffered, like a human being. (There were local versions of the ad around the world, with comedians David Mitchell and Robert Webb as the PC and Mac in the UK).

I’d be remiss if I didn’t add that the concept of “cool” has been so fully co-opted by the tech industries that some adjacent word such as “hip” is needed to describe those online voices who proceeded to hate on Long and deem Hodgman to be the cool one. The restlessness of who or what is considered hip nowadays may be an artifact of what Marx famously identified as the “restless” nature of capitalism. One of the worst things about the internet is that it tempts everyone to be a sophisticate – to take positions on what is hip and to consider, under pain of being considered unhip, the positions that everyone else is taking. Kraus may not have cared about hipness per se, but he certainly revelled in taking positions and was keenly attuned to the positions of others. He was a sophisticate, and this is one reason Die Fackel has a bloglike feel. Kraus spent a lot of time reading stuff he hated, so as to be able to hate it with authority.

"Believe me, you color-happy people, in cultures where every blockhead has individuality, individuality becomes a thing for blockheads."

Second footnote: You’re not allowed to say things like this in America nowadays, no matter how much the billion (or is it 2 billion now?) “individualised” Facebook pages may make you want to say them. Kraus was known, in his day, to his many enemies, as the Great Hater. By most accounts, he was a tender and generous man in his private life, with many loyal friends. But once he starts winding the stem of his polemical rhetoric, it carries him into extremely harsh registers.

The individualised “blockheads” that Kraus has in mind here aren’t hoi polloi. Although Kraus could sound like an elitist, he wasn’t in the business of denigrating the masses or lowbrow culture; the calculated difficulty of his writing wasn’t a barricade against the barbarians. It was aimed, instead, at bright and well-educated cultural authorities who embraced a phony kind of individuality – people Kraus believed ought to have known better.

It’s not clear that Kraus’s shrill, ex cathedra denunciations were the most effective way to change hearts and minds. But I confess to feeling some version of his disappointment when a novelist who I believe ought to have known better, Salman Rushdie, succumbs to Twitter. Or when a politically committed print magazine that I respect, N+1, denigrates print magazines as terminally “male,” celebrates the internet as “female,” and somehow neglects to consider the internet’s accelerating pauperisation of freelance writers. Or when good lefty professors who once resisted alienation – who criticised capitalism for its restless assault on every tradition and every community that gets in its way – start calling the corporatised internet “revolutionary.”

"Spare me the picturesque moil on the rind of an old gorgonzola in place of the dependable white monotony of cream cheese! Life is hard to digest both here and there. But the Romance diet beautifies the spoilage; you swallow the bait and go belly up. The German regimen spoils beauty and puts us to the test: how do we recreate it? Romance culture makes everyman a poet. Art’s a piece of cake there. And Heaven a hell."

Submerged in this paragraph is the implication that Kraus’s Vienna was an in-between case – like Windows Vista. Its language and orientation were German, but it was the co-capital of a Roman Catholic empire reaching far into southern Europe, and it was in love with its own notion of its special, charming Viennese spirit and lifestyle. (“The streets of Vienna are paved with culture,” goes one of Kraus’s aphorisms. “The streets of other cities with asphalt.”) To Kraus, the supposed cultural charm of Vienna amounted to a tissue of hypocrisies stretched over soon-to-be-catastrophic contradictions, which he was bent on unmasking with his satire. The paragraph may come down harder on Latin culture than on German, but Kraus was actually fond of vacationing in Italy and had some of his most romantic experiences there. For him, the place with the really dangerous disconnect between content and form was Austria, which was rapidly modernising while retaining early-19th-century political and social models. Kraus was obsessed with the role of modern newspapers in papering over the contradictions. Like the Hearst papers in America, the bourgeois Viennese press had immense political and financial influence, and was demonstrably corrupt. It profited greatly from the first world war and was instrumental in sustaining charming Viennese myths like the “hero’s death” through years of mechanised slaughter. The Great War was precisely the Austrian apocalypse that Kraus had been prophesying, and he relentlessly satirised the press’s complicity in it.

Vienna in 1910 was, thus, a special case. And yet you could argue that America in 2013 is a similarly special case: another weakened empire telling itself stories of its exceptionalism while it drifts towards apocalypse of some sort, fiscal or epidemiological, climatic-environmental or thermonuclear. Our far left may hate religion and think we coddle Israel, our far right may hate illegal immigrants and think we coddle black people, and nobody may know how the economy is supposed to work now that markets have gone global, but the actual substance of our daily lives is total distraction. We can’t face the real problems; we spent a trillion dollars not really solving a problem in Iraq that wasn’t really a problem; we can’t even agree on how to keep healthcare costs from devouring the GNP. What we can all agree to do instead is to deliver ourselves to the cool new media and technologies, to Steve Jobs and Mark Zuckerberg and Jeff Bezos, and to let them profit at our expense. Our situation looks quite a bit like Vienna’s in 1910, except that newspaper technology has been replaced by digital technology and Viennese charm by American coolness.

Consider the first paragraph of a second Kraus essay, “Nestroy and Posterity”. The essay is ostensibly a celebration of Johann Nestroy, a leading figure in the Golden Age of Viennese theatre, in the first half of the 19th century. By the time Kraus published it, in 1912, Nestroy was underrated, misread and substantially forgotten, and Kraus takes this to be a symptom of what’s wrong with modernity. In his essay “Apocalypse”, a few years earlier, he’d written: “Culture can’t catch its breath, and in the end a dead humanity lies next to its works, whose invention cost us so much of our intellect that we had none left to put them to use. We were complicated enough to build machines and too primitive to make them serve us.” To me the most impressive thing about Kraus as a thinker may be how early and clearly he recognised the divergence of technological progress from moral and spiritual progress. A succeeding century of the former, involving scientific advances that would have seemed miraculous not long ago, has resulted in high-resolution smartphone videos of dudes dropping Mentos into litre bottles of Diet Pepsi and shouting “Whoa!” Technovisionaries of the 1990s promised that the internet would usher in a new world of peace, love, and understanding, and Twitter executives are still banging the utopianist drum, claiming foundational credit for the Arab spring. To listen to them, you’d think it was inconceivable that eastern Europe could liberate itself from the Soviets without the benefit of cellphones, or that a bunch of Americans revolted against the British and produced the US constitution without 4G capability.

"Nestroy and Posterity" begins:

"We cannot celebrate his memory the way a posterity ought to, by acknowledging a debt we’re called upon to honor, and so we want to celebrate his memory by confessing to a bankruptcy that dishonors us, we inhabitants of a time that has lost the capacity to be a posterity… How could the eternal Builder fail to learn from the experiences of this century? For as long as there have been geniuses, they’ve been placed into a time like temporary tenants, while the plaster was still drying; they moved out and left things cozier for humanity. For as long as there have been engineers, however, the house has been getting less habitable. God have mercy on the development! Better that He not allow artists to be born than with the consolation that this future of ours will be better for their having lived before us. This world! Let it just try to feel like a posterity, and, at the insinuation that it owes its progress to a detour of the Mind, it will give out a laugh that seems to say: More Dentists Prefer Pepsodent. A laugh based on an idea of Roosevelt’s and orchestrated by Bernard Shaw. It’s the laugh that’s done with everything and can do whatever. For the technicians have burned the bridges, and the future is: whatever follows automatically."

Nowadays, the refrain is that “there’s no stopping our powerful new technologies”. Grassroots resistance to these technologies is almost entirely confined to health and safety issues, and meanwhile various logics – of war theory, of technology, of the marketplace – keep unfolding automatically. We find ourselves living in a world with hydrogen bombs because uranium bombs just weren’t going to get the job done; we find ourselves spending most of our waking hours texting and emailing and Tweeting and posting on colour-screen gadgets because Moore’s law said we could. We’re told that, to remain competitive economically, we need to forget about the humanities and teach our children “passion” for digital technology and prepare them to spend their entire lives incessantly re-educating themselves to keep up with it. The logic says that if we want things like Zappos.com or home DVR capability – and who wouldn’t want them? – we need to say goodbye to job stability and hello to a lifetime of anxiety. We need to become as restless as capitalism itself.

Link: Understanding Organizational Stupidity

Is it morning in America again, or is the bubble that is the American economy about to pop (again), this time perhaps tipping it into full-blown collapse in five stages with symphonic accompaniment and fireworks? A country blowing itself up is quite a sight to behold, and it makes us wonder about lots of things. For instance, it makes us wonder whether the people who are doing the blowing up happen to be criminals. (Sure, they may be in a manner of speaking—as a moral judgment passed on the powerful by the powerless—but since none of them are likely to see the inside of a jail cell or even a courtroom any time soon, the point is moot. Let’s be sure to hunt them down once they try to run and hide, though.) But at a much more basic and fundamental level, a better question to ask is this one:

“Why are we being so fucking stupid?”

What do I mean when I use the term “fucking stupid”? I do not mean it as a term of abuse but as a precise, if unflattering, diagnosis. Here is as good a definition as any, excerpted from American Eulogy by Jim Quinn:

If you had told someone on September 10, 2001 that ten years later America would be running $1.5 trillion annual deficits, fighting two wars of choice in countries that despise our presence, and had not only not addressed the $100 [trillion] of unfunded welfare liabilities but added billions more with Medicare D and Obamacare, they would have thought you were a crazy doomster predicting the end of the world. They would have put you away in a padded cell if you had further predicted that politicians would cut taxes three separate times, that the Wall Street banks that leveraged themselves 40 to 1 and destroyed the financial system [would be] handed $2 trillion of taxpayer funds so they could pay themselves multi-million dollar bonuses, and that the Federal Reserve would triple its balance sheet to $2.45 trillion by running its printing presses at hyper-speed and handing the money to those same Wall Street Mega-Banks.

Well, the evidence is in, and that crazy doomster in his padded cell has turned out to be amazingly prescient, so perhaps we should listen to him. And what would that crazy doomster have to say now? I would venture to guess that it would be something along these lines:

There is no reason to think that those who failed to take corrective action up until now, but remain in control, will ever do so. But it should be perfectly obvious that this situation cannot continue ad infinitum. And, as a matter of general principle, things that can’t go on forever—don’t.

Back to the question of stupidity: Why are we (as a country) being so fucking stupid? This question has puzzled me for some time. It appears that the problem of stupidity is quite pervasive: look at any large human organization, and you will find that it is ruled by stupidity. I was not the first to stumble across the conjecture that the intelligence of a hierarchically organized group of people is inversely proportional to its size, but so far the mechanism that makes it so has eluded me. Clearly, there is something amiss with hierarchically organized groups, something that causes all of them to eventually collapse, but what exactly is it? To try to get at this question, last year I spent quite a while researching anarchy, and wrote a series of articles on it (Part I, Part II, Part III). I discovered that vast hierarchies do not occur in nature, which is anarchic and self-organizing, with no chains of command and no entities in supreme command. I discovered that anarchic organizations can go on forever while hierarchical ones inevitably end in collapse. I examined some of the recent breakthroughs in complexity theory, which uncovered the laws governing the different scaling factors in natural (anarchically organized, efficient, stable) systems and unnatural (hierarchically organized, inefficient, collapse-prone) ones.

But nowhere did I find a principled, rigorous explanation for the fatal flaw embedded in the very nature of hierarchical systems. I did have a very strong hunch, though, backed by much anecdotal evidence, that it comes down to stupidity. In anarchic societies whose members cooperate freely, intelligence is additive; in hierarchical organizations structured around a chain of command, intelligence is subtractive. The lowest grunts or peons are expected to carry out orders unquestioningly. Their critical faculties are 100% impaired; if not, they are subjected to disciplinary action. The supreme chief executive officer may be of moderately impaired intelligence, since it is indicative of a significant character flaw to want such a job in the first place. (Kurt Vonnegut put it best: “Only nut cases want to be president.”) But beyond that, the supreme leader must act in such a way as to keep the grunts and peons in line, resulting in further intellectual impairment, which is compounded across all of the intervening ranks, with each link in the chain of command contributing a bit of its own stupidity to the organizational stupidity stack.

I never ascended the ranks of middle management, probably due to my tendency to speak out at meetings and throw around terms such as “nonsensical,” “idiotic,” “brainless,” “self-defeating” and “fucking stupid.” If shushed up by superiors, I would resort to cracking jokes, which were funny and even harder to ignore. Neither my critical faculties, nor my sense of humor, are easily repressed. I was thrown at a lot of special projects where the upside of being able to think independently was not negated by the downside of being unwilling to follow (stupid) orders. To me hierarchy = stupidity in an apparent, palpable way. But in explaining to others why this must be so, I had so far been unable to go beyond speaking in generalities and telling stories.

And so I was happy when I recently came across an article which goes beyond such “hand-waving analysis” and answers this question with some precision. Mats Alvesson and André Spicer, writing in the Journal of Management Studies (49:7, November 2012; for reprints please contact andre.spicer.1@city.ac.uk), present “A Stupidity-Based Theory of Organizations,” in which they define a key term: functional stupidity. It is functional in that it is required in order for hierarchically structured organizations to avoid disintegration or, at the very least, to function without a great deal of internal friction. It is stupid in that it is a form of intellectual impairment: “Functional stupidity refers to an absence of reflexivity, a refusal to use intellectual capacities in other than myopic ways, and avoidance of justifications.” Alvesson and Spicer go on to define the various “…forms of stupidity management that repress or marginalize doubt and block communicative action” and to diagram the information flows which are instrumental to generating and maintaining sufficient levels of stupidity within organizations. What follows is my summary of their theory. Before I start, I would like to mention that although the authors’ analysis is limited in scope to corporate entities, I believe that it extends quite naturally to other hierarchically organized bureaucratic systems, such as governments.

Alvesson and Spicer use as their jumping-off point the major leitmotif of contemporary management theory, which is that “smartness,” variously defined as “knowledge, information, competence, wisdom, resources, capabilities, talent, and learning” has emerged as the main business asset and the key to competitiveness—a shift seen as inevitable as industrial economies go from being resource-based to being knowledge-based. By the way, this is a questionable assumption; do you know how many millions of tons of hydrocarbons went into making the smartphone? But this leitmotif is pervasive, and exemplified by management guru quips such as “creativity creates its own prerogative.” The authors point out that there is also a vast body of research on the irrationality of organizations and the limits to organizational intelligence stemming from “unconscious elements, group-think, and rigid adherence to wishful thinking.” There is also no shortage of research into organizational ignorance which explores the mechanisms behind “bounded-rationality, skilled incompetence, garbage-can decision making, foolishness, mindlessness, and (denied) ignorance.” But what they are getting at is qualitatively different from such run-of-the-mill stupidity. Functional stupidity is neither delusional nor irrational nor ignorant: organizations restrict smartness in rational and informed ways which serve explicit organizational interests. It is, if you will, a sort of “enlightened stupidity”:

Functional stupidity is organizationally-supported lack of reflexivity, substantive reasoning, and justification (my italics). It entails a refusal to use intellectual resources outside a narrow and “safe” terrain. It can provide a sense of certainty that allows organizations to function smoothly. This can save the organization and its members from the frictions provoked by doubt and reflection. Functional stupidity contributes to maintaining and strengthening organizational order. It can also motivate people, help them to cultivate their careers, and subordinate them to socially acceptable forms of management and leadership. Such positive outcomes can further reinforce functional stupidity.

The terms I italicized are important, so let’s define each one.

Link: Chris Hedges: War is Betrayal

War is always about betrayal—betrayal of the young by the old, of idealists by cynics, and of soldiers by politicians.

We condition the poor and the working class to go to war. We promise them honor, status, glory, and adventure. We promise boys they will become men. We hold these promises up against the dead-end jobs of small-town life, the financial dislocations, credit card debt, bad marriages, lack of health insurance, and dread of unemployment. The military is the call of the Sirens, the enticement that has for generations seduced young Americans working in fast food restaurants or behind the counters of Walmarts to fight and die for war profiteers and elites.

The poor embrace the military because every other cul-de-sac in their lives breaks their spirit and their dignity. Pick up Erich Maria Remarque’s All Quiet on the Western Front or James Jones’s From Here to Eternity. Read Henry IV. Turn to the Iliad. The allure of combat is a trap, a ploy, an old, dirty game of deception in which the powerful, who do not go to war, promise a mirage to those who do.

I saw this in my own family. At the age of ten I was given a scholarship to a top New England boarding school. I spent my adolescence in the schizophrenic embrace of the wealthy, on the playing fields and in the dorms and classrooms that condition boys and girls for privilege, and came back to my working-class relations in the depressed former mill towns in Maine. I traveled between two universes: one where everyone got chance after chance after chance, where connections and money and influence almost guaranteed that you would not fail; the other where no one ever got a second try. I learned at an early age that when the poor fall no one picks them up, while the rich stumble and trip their way to the top.

Those I knew in prep school did not seek out the military and were not sought by it. But in the impoverished enclaves of central Maine, where I had relatives living in trailers, nearly everyone was a veteran. My grandfather. My uncles. My cousins. My second cousins. They were all in the military. Some of them—including my Uncle Morris, who fought in the infantry in the South Pacific during World War II—were destroyed by the war. Uncle Morris drank himself to death in his trailer. He sold the hunting rifle my grandfather had given to me to buy booze.


Link: Rough Justice

Locking up offenders does little to prevent crime or make us safer. The history behind our impulse to punish.

… In the ’70s, Ted Bundy raped and murdered at least thirty women, sometimes defiling their corpses. A decade later, Jeffrey Dahmer raped, murdered, and, in some instances, cannibalized seventeen young men and boys, many of whom he lured back to his Milwaukee, Wisconsin, apartment from gay bars. Around the same time, Paul Bernardo committed countless rapes and sexual assaults in southern Ontario, and then, with his wife, Karla Homolka, kidnapped, raped, tortured, and murdered three women, including Homolka’s younger sister. In 2009 and 2010, Russell Williams, the commander of the Canadian Forces Base in Trenton, Ontario, sexually assaulted and murdered two women. And in the summer of 2011, Anders Breivik bombed a government building in Oslo, killing eight people, and later gunned down sixty-nine others at a Labour Party youth camp.

These extreme cases constitute a vanishing fraction of even the worst violent crimes, but they form a model of absolute, uncomplicated evil against which lesser infractions are measured: this is crime in its pure state. We naturally view these stories through the lens of the victims, identifying with their suffering and the grief of their families and friends; we look upon the perpetrators as incomprehensible aliens. The only satisfying outcome is swift, decisive punishment. In such instances, punishment is an attempt to erase the blight of evil, to heal a grievous social wound, and for that to happen the punishment must fit the crime.

At its most basic, punishment is hard to distinguish from revenge, and the impulse for retaliation is no doubt hard-wired. Studies have shown that card players will give up benefits to themselves, such as a winning hand, to expose and penalize cheaters: punishment, in the short run, trumps even self-interest. Psychologists who use game theory to study the evolution of co-operation have found that the threat of swift retaliation against those who engage in uncooperative behaviour is crucial to establishing stable, productive communities; we have a collective stake in the assurance that those who profit at our expense will pay a steep price. The idea of community, it seems—one of Homo sapiens’ chief advantages in the competition for scarce resources—emerged under the dark shadow of punishment.

Nonetheless, retaliation has no intrinsic moral legitimacy, and since it involves hurting another person, the victim might well perceive him- or herself as harmed and retaliate in kind, creating an open-ended cycle of tit-for-tat blood feuds, waged throughout history and still common in such places as rural Albania. For punishment to be something greater than mere retaliation, it must have a deeper grounding, and that is just what early Babylonian law and the Hebrew Bible endeavoured to teach.

A well-known passage in Leviticus reads, “And he that killeth any man shall surely be put to death. And he that killeth a beast shall make it good; beast for beast. And if a man cause a blemish in his neighbour; as he hath done, so shall it be done to him. Breach for breach, eye for eye, tooth for tooth.” It ends with the refrain “Ye shall have one manner of law, as well for the stranger, as for one of your own country; for I am the Lord your God.” This passage (and others like it) is remarkable in a number of ways. First, it implies the existence of a specific punishment that mirrors the crime; and, more important, it insists that everyone, everywhere, be subject to the same standard. The Hebrew Bible rejects personal or tribal justice, instead asserting retributive justice and the supremacy of the rule of law in its strictest form.

The trouble with retributive justice is that a literal reading of the “eye for an eye” passage leads to morbidly comical conclusions and boundless forms of cruelty. In many situations, it is not even clear what an appropriate equivalent means: one rabbi noted that if a blind man puts out someone’s eyes, it is impossible to blind him in return. In the case of extreme crimes, such as Bernardo’s or Breivik’s, or horrors as immense as the Holocaust, no punishment could compensate for the victims’ suffering. Jesus’ direction “Ye have heard that it hath been said, an eye for an eye, and a tooth for a tooth: But I say unto you, that ye resist not evil: but whosoever shall smite thee on thy right cheek, turn to him the other also” is not so much a criticism of the rabbinical courts of the Second Temple period, which were notably humane (crucifixion was a Roman practice, and capital punishment was used sparingly under the Pharisees). Rather, it is a way of pointing out that retributive justice can devolve into vengefulness as destructive as the crime itself.

The story of punishment from the Middle Ages through the eighteenth century is one of shocking brutality and bewildering arbitrariness. Regicides, parricides, ordinary murderers, homosexuals, heretics, and witches were broken on the wheel, disembowelled, ripped open with red-hot pincers, burned, and drawn and quartered; even teenage pickpockets were put to death. By the late eighteenth century, however, what French historian and philosopher Michel Foucault describes in his seminal 1975 book, Discipline and Punish: The Birth of the Prison, as “the gloomy festival of punishment” was ceding to a model oriented toward determining the impact of crime on society and the need for deterrence. This shift was due in large measure to a little book by an otherwise obscure Italian jurist and philosopher named Cesare Beccaria.

“Observe that by justice I understand nothing more than that bond which is necessary to keep the interest of individuals united, without which men would return to their original state of barbarity,” Beccaria wrote in 1764, in On Crimes and Punishments. “All punishments which exceed the necessity of preserving this bond are in their nature unjust.” He shifts the question from the criminal act to its effect on the community where it was carried out. Therefore, the purpose and justification of punishment is not to satisfy the victim or eradicate evil, but to repair the damage caused to society and prevent future crimes—and to do so without causing further harm. “There should be a proportion of punishment to crimes based on the degree to which the crimes affect society,” he writes. “Crimes are only to be measured by the injury done to society.”

Alas, it is difficult to quantify this, and even if one could, the punishment might well fall shy of the weight assigned to the crime by the victim. Think of the virulent outcry that accompanies the release from prison of convicted rapists and pedophiles. Beccaria’s notion of punishment ends up facing the same conundrum that both Jesus and the rabbis of the Talmud identified: how does one assign a value to a crime? A rape, for instance, may well affect the victim for the rest of her life, and such traumas end up being transmitted across generations. Furthermore, it is near-impossible to assess the real deterrent value of a punishment; criminals are not in the habit of maximizing the marginal utility of their actions. In any case, the sources of crime are complicated and myriad: childhood trauma, chronic poverty, addiction, and psychiatric conditions among them. The two major theories of punishment—retributive, and the one proposed by Beccaria and refined over the past 250 years—suffer from similar faults: vagueness and arbitrariness. And that is before we attempt to translate a theory of punishment into a criminal justice system for a twenty-first-century Western democracy.

The concept of punishment that operates in the criminal justice systems in Canada and the US is a hodgepodge of the retributive and the deterrent. The death penalty and multiple life sentences without the possibility of parole are clearly retributive; custodial sentences for the possession of drugs are seemingly meant to be deterrent. This is why the punishments meted out can seem random: people who are no risk to anyone end up in prison, where the deterrent effect is negligible, and where the punishment appears out of proportion with the crime. Why should anyone do jail time for growing marijuana? Odder still, in Canada judges issue longer sentences to people who grow pot in rented apartments than to those who do so in their own homes—an apparent incentive for home ownership.

The mandatory sentences popular in the US and their increasing prevalence in Canada further widen the gulf between crime and punishment. Meanwhile, more families are broken up and more neighbourhoods ravaged, and more inmates are released after substantial sentences without the necessary skills or resources to reintegrate into society. This undermines whatever deterrent value the punishment was intended to have in the first place. The accepted wisdom is that criminals deserve punishment, but what does that mean, and does it solve anything? Indeed, at this point, it is not even obvious what punishment is supposed to be.

Link: Society is handcuffed in the Prisoner’s Dilemma

How fear, distrust and a lack of organization continually do us in.

The prisoner’s dilemma is a canonical example of a game analyzed in game theory that shows why two individuals might not cooperate, even if it appears that it is in their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and gave it the name “prisoner’s dilemma” (Poundstone, 1992), presenting it as follows:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don’t have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch … If both prisoners testify against each other, both will be sentenced to two years in jail.

In this classic version of the game, collaboration is dominated by betrayal; if the other prisoner chooses to stay silent, then betraying them gives a better reward (no sentence instead of one year), and if the other prisoner chooses to betray then betraying them also gives a better reward (two years instead of three). Because betrayal always rewards more than cooperation, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them both to betray each other. The interesting part of this result is that pursuing individual reward logically leads the prisoners to both betray, but they would get a better reward if they both cooperated.

The very next sentence in Wikipedia’s Prisoner’s Dilemma entry is “In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by a theory based only on rational self-interested action.”
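
The payoff logic described above is concrete enough to check mechanically. Below is a minimal sketch in Python (added here for illustration; the names and structure are my own, not from the source article) that encodes the sentences as a payoff table and verifies that testifying is each prisoner's dominant strategy, even though both would be better off staying silent.

```python
# Payoffs from the scenario above, in years of prison (lower is better).
# "C" = cooperate with your partner (stay silent), "D" = defect (testify).
# Each entry maps (my_move, partner_move) -> (my_years, partner_years).
YEARS = {
    ("C", "C"): (1, 1),  # both stay silent: one year each on the lesser charge
    ("C", "D"): (3, 0),  # I stay silent, partner testifies: three years for me
    ("D", "C"): (0, 3),  # I testify, partner stays silent: I go free
    ("D", "D"): (2, 2),  # both testify: two years each
}

def best_response(partner_move):
    """Return the move that minimizes my own sentence against a fixed partner move."""
    return min("CD", key=lambda my_move: YEARS[(my_move, partner_move)][0])

# Defection dominates: it is the best response whatever the partner does.
assert best_response("C") == "D"   # 0 years beats 1 year
assert best_response("D") == "D"   # 2 years beats 3 years

# Yet the equilibrium of mutual defection is worse for both than mutual cooperation.
print("both defect:", YEARS[("D", "D")], "vs both cooperate:", YEARS[("C", "C")])
```

Running it prints (2, 2) against (1, 1): individually rational betrayal lands both prisoners in the outcome that mutual cooperation would have beaten.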

Link: Excerpt from “They Thought They Were Free: The Germans, 1933-45” by Milton Mayer

“What happened here was the gradual habituation of the people, little by little, to being governed by surprise; to receiving decisions deliberated in secret; to believing that the situation was so complicated that the government had to act on information which the people could not understand, or so dangerous that, even if the people could not understand it, it could not be released because of national security. And their sense of identification with Hitler, their trust in him, made it easier to widen this gap and reassured those who would otherwise have worried about it. This separation of government from people, this widening of the gap, took place so gradually and so insensibly, each step disguised (perhaps not even intentionally) as a temporary emergency measure or associated with true patriotic allegiance or with real social purposes. And all the crises and reforms (real reforms, too) so occupied the people that they did not see the slow motion underneath, of the whole process of government growing remoter and remoter. […] To live in this process is absolutely not to be able to notice it—please try to believe me—unless one has a much greater degree of political awareness, acuity, than most of us had ever had occasion to develop. Each step was so small, so inconsequential, so well explained or, on occasion, ‘regretted,’ that, unless one were detached from the whole process from the beginning, unless one understood what the whole thing was in principle, what all these ‘little measures’ that no ‘patriotic German’ could resent must some day lead to, one no more saw it developing from day to day than a farmer in his field sees the corn growing. One day it is over his head.”


Link: Your Lifestyle Has Already Been Designed

… Here in the West, a lifestyle of unnecessary spending has been deliberately cultivated and nurtured in the public by big business. Companies in all kinds of industries have a huge stake in the public’s penchant to be careless with their money. They will seek to encourage the public’s habit of casual or non-essential spending whenever they can.

In the documentary The Corporation, a marketing psychologist discussed one of the methods she used to increase sales. Her staff carried out a study on what effect the nagging of children had on their parents’ likelihood of buying a toy for them. They found that 20% to 40% of toy purchases would not have occurred if the child hadn’t nagged its parents. One in four visits to theme parks would not have taken place. They used these studies to market their products directly to children, encouraging them to nag their parents to buy.

This marketing campaign alone represents many millions of dollars that were spent because of demand that was completely manufactured.

“You can manipulate consumers into wanting, and therefore buying, your products. It’s a game.” ~ Lucy Hughes, co-creator of “The Nag Factor”

This is only one small example of something that has been going on for a very long time. Big companies didn’t make their millions by earnestly promoting the virtues of their products; they made them by creating a culture of hundreds of millions of people who buy way more than they need and try to chase away dissatisfaction with money.

We buy stuff to cheer ourselves up, to keep up with the Joneses, to fulfill our childhood vision of what our adulthood would be like, to broadcast our status to the world, and for a lot of other psychological reasons that have very little to do with how useful the product really is. How much stuff is in your basement or garage that you haven’t used in the past year?

The ultimate tool for corporations to sustain a culture of this sort is to develop the 40-hour workweek as the normal lifestyle. Under these working conditions people have to build a life in the evenings and on weekends. This arrangement makes us naturally more inclined to spend heavily on entertainment and conveniences because our free time is so scarce.

I’ve only been back at work for a few days, but already I’m noticing that the more wholesome activities are quickly dropping out of my life: walking, exercising, reading, meditating, and extra writing.

The one conspicuous similarity between these activities is that they cost little or no money, but they take time.

Suddenly I have a lot more money and a lot less time, which means I have a lot more in common with the typical working North American than I did a few months ago. While I was abroad I wouldn’t have thought twice about spending the day wandering through a national park or reading my book on the beach for a few hours. Now that kind of stuff feels like it’s out of the question. Doing either one would take most of one of my precious weekend days!

The last thing I want to do when I get home from work is exercise. It’s also the last thing I want to do after dinner or before bed or as soon as I wake, and that’s really all the time I have on a weekday.

This seems like a problem with a simple answer: work less so I’d have more free time. I’ve already proven to myself that I can live a fulfilling lifestyle with less than I make right now. Unfortunately, this is close to impossible in my industry, and most others. You work 40-plus hours or you work zero. My clients and contractors are all firmly entrenched in the standard-workday culture, so it isn’t practical to ask them not to expect anything of me after 1pm, even if I could convince my employer not to.

The eight-hour workday developed during the industrial revolution in Britain in the 19th century, as a respite for factory workers who were being exploited with 14- or 16-hour workdays.

As technologies and methods advanced, workers in all industries became able to produce much more value in a shorter amount of time. You’d think this would lead to shorter workdays.

But the eight-hour workday is too profitable for big business, not because of the amount of work people get done in eight hours (the average office worker gets less than three hours of actual work done in eight) but because it makes for such a purchase-happy public. Keeping free time scarce means people pay a lot more for convenience, gratification, and any other relief they can buy. It keeps them watching television, and its commercials. It keeps them unambitious outside of work.

We’ve been led into a culture that has been engineered to leave us tired, hungry for indulgence, willing to pay a lot for convenience and entertainment, and most importantly, vaguely dissatisfied with our lives so that we continue wanting things we don’t have. We buy so much because it always seems like something is still missing.

Western economies, particularly that of the United States, have been built in a very calculated manner on gratification, addiction, and unnecessary spending. We spend to cheer ourselves up, to reward ourselves, to celebrate, to fix problems, to elevate our status, and to alleviate boredom.

Can you imagine what would happen if all of America stopped buying so much unnecessary fluff that doesn’t add a lot of lasting value to our lives?

Link: The Downside of Diversity

A Harvard political scientist finds that diversity hurts civic life. What happens when a liberal scholar unearths an inconvenient truth? 

It has become increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.

But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings.

"The extent of the effect is shocking," says Scott Page, a University of Michigan political scientist.

The study comes at a time when the future of the American melting pot is the focus of intense political debate, from immigration to race-based admissions to schools, and it poses challenges to advocates on all sides of the issues. The study is already being cited by some conservatives as proof of the harm large-scale immigration causes to the nation’s social fabric. But with demographic trends already pushing the nation inexorably toward greater diversity, the real question may yet lie ahead: how to handle the unsettling social changes that Putnam’s research predicts.

"We can’t ignore the findings," says Ali Noorani, executive director of the Massachusetts Immigrant and Refugee Advocacy Coalition. "The big question we have to ask ourselves is, what do we do about it; what are the next steps?"

The study is part of a fascinating new portrait of diversity emerging from recent scholarship. Diversity, it shows, makes us uncomfortable — but discomfort, it turns out, isn’t always a bad thing. Unease with differences helps explain why teams of engineers from different cultures may be ideally suited to solve a vexing problem. Culture clashes can produce a dynamic give-and-take, generating a solution that may have eluded a group of people with more similar backgrounds and approaches. At the same time, though, Putnam’s work adds to a growing body of research indicating that more diverse populations seem to extend themselves less on behalf of collective needs and goals.

His findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis in June in the journal Scandinavian Political Studies, he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

"Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk," wrote conservative commentator Ilana Mercer, in a recent Orange County Register op-ed titled "Greater diversity equals more misery."

Putnam has long staked out ground as both a researcher and a civic player, someone willing to describe social problems and then have a hand in addressing them. He says social science should be “simultaneously rigorous and relevant,” meeting high research standards while also “speaking to concerns of our fellow citizens.” But on a topic as charged as ethnicity and race, Putnam worries that many people hear only what they want to.

"It would be unfortunate if a politically correct progressivism were to deny the reality of the challenge to social solidarity posed by diversity," he writes in the new report. "It would be equally unfortunate if an ahistorical and ethnocentric conservatism were to deny that addressing that challenge is both feasible and desirable."

Link: Marilyn Manson on the 1999 Columbine shooting

“The devil we blame our atrocities on is really just each one of us”

It is sad to think that the first few people on earth needed no books, movies, games or music to inspire cold-blooded murder. The day that Cain bashed his brother Abel’s brains in, the only motivation he needed was his own human disposition to violence. Whether you interpret the Bible as literature or as the final word of whatever God may be, Christianity has given us an image of death and sexuality that we have based our culture around. A half-naked dead man hangs in most homes and around our necks, and we have just taken that for granted all our lives. Is it a symbol of hope or hopelessness? The world’s most famous murder-suicide was also the birth of the death icon – the blueprint for celebrity. Unfortunately, for all of their inspiring morality, nowhere in the Gospels is intelligence praised as a virtue.

A lot of people forget or never realize that I started my band as a criticism of these very issues of despair and hypocrisy. The name Marilyn Manson has never celebrated the sad fact that America puts killers on the cover of Time magazine, giving them as much notoriety as our favorite movie stars. From Jesse James to Charles Manson, the media, since their inception, have turned criminals into folk heroes. They just created two new ones when they plastered those dip-shits Dylan Klebold and Eric Harris’ pictures on the front of every newspaper. Don’t be surprised if every kid who gets pushed around has two new idols.

We applaud the creation of a bomb whose sole purpose is to destroy all of mankind, and we grow up watching our president’s brains splattered all over Texas. Times have not become more violent. They have just become more televised. Does anyone think the Civil War was the least bit civil? If television had existed, you could be sure they would have been there to cover it, or maybe even participate in it, like their violent car chase of Princess Di. Disgusting vultures looking for corpses, exploiting, fucking, filming and serving it up for our hungry appetites in a gluttonous display of endless human stupidity.

When it comes down to who’s to blame for the high school murders in Littleton, Colorado, throw a rock and you’ll hit someone who’s guilty. We’re the people who sit back and tolerate children owning guns, and we’re the ones who tune in and watch the up-to-the-minute details of what they do with them. I think it’s terrible when anyone dies, especially if it is someone you know and love. But what is more offensive is that when these tragedies happen, most people don’t really care any more than they would about the season finale of Friends or The Real World. I was dumbfounded as I watched the media snake right in, not missing a teardrop, interviewing the parents of dead children, televising the funerals. Then came the witch hunt.

(Source: sunrec)

Link: Riding the Subway as Therapy

Public transportation, as Simmel pondered in the early 20th century, equalizes the human experience. It is the fundamental, almost ritualistic glue of urban life: traders or artists, rich or poor, we all inevitably occupy the same confined space for hours each week. Often the experience of riding the subway is terrible, but it’s a shared terror, one where its inconveniences (delays, unruly homeless men and women, drunks, and the like) elicit furtive glances and shared exchanges of rolled eyes before riders transition back into their shared state of civil inattention.

This is why riding the subway has become something of a therapeutic experience for me. An orchestrated meeting with friends has an undertone of concern and necessity: we are here to fill me with booze and keep me focused on other things. It is an abnormal exercise, a break from routine with a very specific goal. Going to the gym for relief is similar: post-trauma exercising binges are designed to simultaneously exhaust and distract, cutting the day’s frustrations with endorphins. A ride on the subway is an exercise in solidarity by shared banality. The paradox of the subway allows us to work things out in solitude but to do so in the comforting presence of other people, under the shared solidarity of subway introspection.

People-watching itself, the ultimate exercise in passive-aggressive stimulation, becomes an accidental act of introspection. On the Red Line, I start making up stories about my fellow commuters. Frumpled Suit blew a major deal today; anxious, he occasionally fingers his wedding ring. He gets off the train at Van Ness, and I imagine his climb up the station’s escalator as one last chance to formulate his thoughts before he goes home. Cute Young Professional is reading ‘Eat, Pray, Love’ and sighs a lot. I anticipate she’ll go home and look at plane tickets to exotic locations in India and Western Europe before returning to the work she brought home from the office. Too Many Grocery Bags looks like she’s about to cry. I don’t even want to imagine a story for her, because so do I. Every person who entered and exited that car has had the worst day of their life, or maybe their best. Some are beginning new lives, reinventing themselves in the arms of a new city and a new job. Some are dying, or wasting away in the throes of their own private crises. But we share this structured retreat into our interior worlds. The subway becomes a shared celebration of the victories of the day and an exercise in collective mourning, a place where people’s lives intersect for a split second of collective frustration. Seemingly isolated by frustration, or anger, or the sadness of personal or professional stress, the subway offers therapy through collective anxiety. Alone in a crowd, I exist in the negative space between a multitude of interior worlds. Somehow, I gradually regain my sense of regularity, of focus.

(Source: inlikewiththecity, via whosecityisthis)

Link: The Behavioral Sink

How do you design a utopia? In 1972, John B. Calhoun detailed the specifications of his Mortality-Inhibiting Environment for Mice: a practical utopia built in the laboratory. Every aspect of Universe 25—as this particular model was called—was pitched to cater for the well-being of its rodent residents and increase their lifespan. The Universe took the form of a tank, 101 inches square, enclosed by walls 54 inches high. The first 37 inches of wall was structured so the mice could climb up, but they were prevented from escaping by 17 inches of bare wall above. Each wall had sixteen vertical mesh tunnels—call them stairwells—soldered to it. Four horizontal corridors opened off each stairwell, each leading to four nesting boxes. That means 256 boxes in total, each capable of housing fifteen mice. There was abundant clean food, water, and nesting material. The Universe was cleaned every four to eight weeks. There were no predators, the temperature was kept at a steady 68°F, and the mice were a disease-free elite selected from the National Institutes of Health’s breeding colony. Heaven.

Four breeding pairs of mice were moved in on day one. After 104 days of upheaval as they familiarized themselves with their new world, they started to reproduce. In their fully catered paradise, the population increased exponentially, doubling every fifty-five days. Those were the good times, as the mice feasted on the fruited plain. To its members, the mouse civilization of Universe 25 must have seemed prosperous indeed. But its downfall was already certain—not just stagnation, but total and inevitable destruction.
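As an aside, the figures quoted in this excerpt invite a quick back-of-the-envelope check. The sketch below is illustrative only and uses the numbers exactly as stated above (256 boxes of fifteen mice each, four founding pairs, a 55-day doubling time); it is arithmetic on the article’s figures, not a reconstruction of Calhoun’s data.

```python
import math

# Figures as stated in the excerpt (illustrative only):
boxes = 256                      # nesting boxes in Universe 25
mice_per_box = 15                # stated housing capacity per box
capacity = boxes * mice_per_box  # nominal capacity: 3,840 mice

initial_population = 8           # four breeding pairs
doubling_time_days = 55          # early exponential growth phase

# How long unchecked doubling could continue before hitting nominal capacity.
doublings = math.log2(capacity / initial_population)
days = doublings * doubling_time_days

print(f"Nominal capacity: {capacity} mice")
print(f"About {doublings:.1f} doublings, or roughly {days:.0f} days, to fill it")
# -> Nominal capacity: 3840 mice
# -> About 8.9 doublings, or roughly 490 days, to fill it
```

In other words, even on paper the good times could not last two years; in practice the population reportedly stalled and collapsed well short of that nominal ceiling, which is rather the point of the piece.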

Calhoun’s concern was the problem of abundance: overpopulation. As the name Universe 25 suggests, it was not the first time Calhoun had built a world for rodents. He had been building utopian environments for rats and mice since the 1940s, with thoroughly consistent results: heaven always turned into hell. These experiments were a warning, made in a postwar society already rife with alarm over the soaring population of the United States and the world. Pioneering ecologists such as William Vogt and Fairfield Osborn were cautioning as early as 1948 that the growing population was putting pressure on food and other natural resources, and both published bestsellers on the subject. The issue made the cover of Time magazine in January 1960. In 1968, Paul Ehrlich published The Population Bomb, an alarmist work suggesting that the overcrowded world was about to be swept by famine and resource wars. After Ehrlich appeared on The Tonight Show with Johnny Carson in 1970, his book became a phenomenal success. By 1972, the issue reached its mainstream peak with the report of the Rockefeller Commission on US Population, which recommended that population growth be slowed or even reversed.

But Calhoun’s work was different. Vogt, Ehrlich, and the others were neo-Malthusians, arguing that population growth would cause our demise by exhausting our natural resources, leading to starvation and conflict. But there was no scarcity of food and water in Calhoun’s universe. The only thing that was in short supply was space. This was, after all, “heaven”—a title Calhoun deliberately used with pitch-black irony. The point was that crowding itself could destroy a society before famine even got a chance. In Calhoun’s heaven, hell was other mice.

(Source: sunrec)

I admit that I am unable to define, nor for even stronger reasons to propose, an ideal social model for the functioning of our scientific or technological society. On the other hand, one of the most urgent tasks, before everything else, is this: we are accustomed to consider, at least in our European society, that power is in the hands of the government and is exercised through a certain number of particular institutions, such as local government, the police, and the army. These institutions transmit orders, apply them, and punish people who don’t obey. But I think that political power is also exercised by a number of other institutions which seem to have nothing in common with political power, which seem to be independent of it, but which actually are not. We all know that the university, and in a general way the whole educational system that is supposed to distribute knowledge, maintains power in the hands of a certain social class and excludes the other social classes from this power. … It seems to me that the real political task in a society such as ours is to criticize the workings of institutions that appear to be both neutral and independent; to criticize and attack them in such a manner that the political violence that has always exercised itself obscurely through them will be unmasked, so that one can fight against them.
Michel Foucault, “On the Topic of Future Society” from Conversations with Noam Chomsky, c. 1971

(Source: nickkahler, via melancholic-despondency-deactiv)

Link: Why Mass Incarceration Defines Us As a Society

It is late in the afternoon in Montgomery. The banks of the Alabama River are largely deserted. Bryan Stevenson and I walk slowly up the cobblestones from the expanse of the river into the city. We pass through a small, gloomy tunnel beneath some railway tracks, climb a slight incline and stand at the head of Commerce Street, which runs into the heart of Alabama’s capital. The walk was one of the most notorious in the antebellum South.

“This street was the most active slave-trading space in America for almost a decade,” Stevenson says. Four slave depots stood nearby. “They would bring people off the boat. They would parade them up the street in chains. White plantation owners and local slave traders would get on the sidewalks. They’d watch them as they went up the street. Then they would follow behind up to the circle. And that is when they would have their slave auctions.

“Anybody they didn’t sell that day they would keep in these slave depots,” he continues.

We walk past a monument to the Confederate flag as we retrace the steps taken by tens of thousands of slaves who were chained together in coffles. The coffles could include 100 or more men, women and children, all herded by traders who carried guns and whips. Once they reached Court Square, the slaves were sold. We stand in the square. A bronze fountain with a statue of the Goddess of Liberty spews jets of water in the plaza.

“Montgomery was notorious for not having rules that required slave traders to prove that the person had been formally enslaved,” Stevenson says. “You could kidnap free black people, bring them to Montgomery and sell them. They also did not have rules that restricted the purchasing of partial families.”

We fall silent. It was here in this square—a square adorned with a historical marker celebrating the presence in Montgomery of Jefferson Davis, the president of the Confederacy—that men and women fell to their knees weeping and beseeched slave-holders not to separate them from their husbands, wives or children. It was here that girls and boys screamed as their fathers or mothers were taken from them.

“This whole street is rich with this history,” he says. “But nobody wants to talk about this slavery stuff. Nobody.” He wants to start a campaign to erect monuments to that history, on the sites of lynchings, slave auctions and slave depots. “When we start talking about it, people will be outraged. They will be provoked. They will be angry.”

Stevenson expects anger because he wants to discuss the explosive rise in inmate populations, the disproportionate use of the death penalty against people of color and the use of life sentences against minors as part of a continuum running through the South’s ugly history of racial inequality, from slavery to Jim Crow to lynching.

Equating the enslavement of innocents with the imprisonment of convicted criminals is apt to be widely resisted, but he sees it as a natural progression of his work. Over the past quarter-century, Stevenson has become perhaps the most important advocate for death-row inmates in the United States. But this year, his work on behalf of incarcerated minors thrust him into the spotlight. Marshaling scientific and criminological data, he has argued for a new understanding of adolescents and culpability. His efforts culminated this past June in a Supreme Court ruling effectively barring mandatory life sentences without parole for minors. As a result, approximately 2,000 such cases in the United States may be reviewed.

Stevenson’s effort began with detailed research: Among more than 2,000 juveniles (age 17 or younger) who had been sentenced to life in prison without parole, he and staff members at the Equal Justice Initiative (EJI), the nonprofit law firm he established in 1989, documented 73 involving defendants as young as 13 and 14. Children of color, he found, tended to be sentenced more harshly.

“The data made clear that the criminal justice system was not protecting children, as is done in every other area of the law,” he says. So he began developing legal arguments “that these condemned children were still children.”

Stevenson first made those arguments before the Supreme Court in 2009, in a case involving a 13-year-old who had been convicted in Florida of sexual battery and sentenced to life in prison without parole. The court declined to rule in that case—but upheld Stevenson’s reasoning in a similar case it had heard the same day, Graham v. Florida, ruling that sentencing a juvenile to life without parole for crimes other than murder violated the Eighth Amendment’s ban on cruel and unusual punishment.

Last June, in two cases brought by Stevenson, the court erased the exception for murder. Miller v. Alabama and Jackson v. Hobbs centered on defendants who were 14 when they were arrested. Evan Miller, from Alabama, used drugs and alcohol late into the night with his 52-year-old neighbor before beating him with a baseball bat in 2003 and setting his residence on fire. Kuntrell Jackson, from Arkansas, took part in a 1999 video-store robbery with two older boys, one of whom shot the clerk to death.

The states argued that children and adults are not so different that a mandatory sentence of life imprisonment without parole is inappropriate.

Stevenson’s approach was to argue that other areas of the law already recognized significant differences, noting that children’s brains and adults’ are physiologically distinct. This, he said, is why children are barred from buying alcohol, serving on juries or voting. He argued that the horrific abuse and neglect that drove many of these children to commit crimes were beyond their control. He said science, precedent and consensus among the majority of states confirmed that condemning a child to die in prison, without ever having a chance to prove that he or she had been rehabilitated, constituted cruel and unusual punishment. “It could be argued that every person is more than the worst thing they’ve ever done,” he told the court. “But what this court has said is that children are uniquely more than their worst act.”

The court agreed, 5 to 4, in a landmark decision.

“If ever a pathological background might have contributed to a 14-year-old’s commission of a crime, it is here,” wrote Justice Elena Kagan, author of the court’s opinion in Miller. “Miller’s stepfather abused him; his alcoholic and drug-addicted mother neglected him; he had been in and out of foster care as a result; and he had tried to kill himself four times, the first when he should have been in kindergarten.” Children “are constitutionally different from adults for purposes of sentencing,” she added, because “juveniles have diminished culpability and greater prospects for reform.”

Link: The Psychopath Makeover

Over a 28-year-old single-malt scotch at the Society for the Scientific Study of Psychopathy’s biennial bash in Montreal in 2011, I asked Bob Hare, “When you look around you at modern-day society, do you think, in general, that we’re becoming more psychopathic?”

The eminent criminal psychologist and creator of the widely used Psychopathy Checklist paused before answering. “I think, in general, yes, society is becoming more psychopathic,” he said. “I mean, there’s stuff going on nowadays that we wouldn’t have seen 20, even 10 years ago. Kids are becoming anesthetized to normal sexual behavior by early exposure to pornography on the Internet. Rent-a-friend sites are getting more popular on the Web, because folks are either too busy or too techy to make real ones. … The recent hike in female criminality is particularly revealing. And don’t even get me started on Wall Street.”

He’s got a point. In Japan in 2011, a 17-year-old boy parted with one of his own kidneys so he could go out and buy an iPad. In China, following an incident in which a 2-year-old baby was left stranded in the middle of a marketplace and run over, not once but twice, as passersby went casually about their business, an appalled electorate has petitioned the government to pass a good-Samaritan law to prevent such a thing from happening again.

And the new millennium has seemingly ushered in a wave of corporate criminality like no other. Investment scams, conflicts of interest, lapses of judgment, and those evergreen entrepreneurial party tricks of good old fraud and embezzlement are now utterly unprecedented in magnitude. Who’s to blame? In an issue of the Journal of Business Ethics, Clive R. Boddy, a former professor at the Nottingham Business School, contends that it’s psychopaths, pure and simple, who are at the root of all the trouble.

The law itself has gotten in on the act. At the Elizabeth Smart kidnapping trial, in Salt Lake City, the attorney representing Brian David Mitchell—the homeless street preacher and self-proclaimed prophet who abducted, raped, and kept the 14-year-old Elizabeth captive for nine months (according to Smart’s testimony, he raped her pretty much every day over that period)—urged the sentencing judge to go easy on his client, on the grounds that “Ms. Smart overcame it. Survived it. Triumphed over it.” When the lawyers start whipping up that kind of tune, the dance could wind up anywhere.

Of course, it’s not just the lawyers. In a recent study by the Centre for Crime and Justice Studies, in London, 120 convicted street robbers were asked why they did it. The answers were revealing. Kicks. Spur-of-the-moment impulses. Status. And financial gain. In that order. Exactly the kind of casual, callous behavior patterns one often sees in psychopaths.

In fact, in a survey that has so far tested 14,000 volunteers, Sara Konrath and her team at the University of Michigan’s Institute for Social Research have found that college students’ self-reported empathy levels (as measured by the Interpersonal Reactivity Index, a standardized questionnaire containing such items as “I often have tender, concerned feelings for people less fortunate than me” and “I try to look at everybody’s side of a disagreement before I make a decision”) have been in steady decline over the past three decades—since the inauguration of the scale, in fact, back in 1979. A particularly pronounced slump has been observed over the past 10 years. “College kids today are about 40 percent lower in empathy than their counterparts of 20 or 30 years ago,” Konrath reports.

More worrisome still, according to Jean Twenge, a professor of psychology at San Diego State University, is that, during this same period, students’ self-reported narcissism levels have shot through the roof. “Many people see the current group of college students, sometimes called ‘Generation Me,’ ” Konrath continues, “as one of the most self-centered, narcissistic, competitive, confident, and individualistic in recent history.”

Precisely why this downturn in social values has come about is not entirely clear. A complex concatenation of environment, role models, and education is, as usual, under suspicion. But the beginnings of an even more fundamental answer may lie in a study conducted by Jeffrey Zacks and his team at the Dynamic Cognition Laboratory, at Washington University in St. Louis. With the aid of fMRI, Zacks and his co-authors peered deep inside the brains of volunteers as they read stories. What they found provided an intriguing insight into the way our brain constructs our sense of self. Changes in characters’ locations (e.g., “went out of the house into the street”) were associated with increased activity in regions of the temporal lobes involved in spatial orientation and perception, while changes in the objects that a character interacted with (e.g., “picked up a pencil”) produced a similar increase in a region of the frontal lobes known to be important for controlling grasping motions. Most important, however, changes in a character’s goal elicited increased activation in areas of the prefrontal cortex, damage to which results in impaired knowledge of the order and structure of planned, intentional action.