Sunshine Recorder

Link: The Implosion of Meaning in the Media: Excerpt from "Simulacra and Simulations" by Jean Baudrillard

We live in a world where there is more and more information, and less and less meaning. Consider three hypotheses.

Either information produces meaning (a negentropic factor), but cannot make up for the brutal loss of signification in every domain. Despite efforts to reinject message and content, meaning is lost and devoured faster than it can be reinjected. In this case, one must appeal to a base productivity to replace failing media. This is the whole ideology of free speech, of media broken down into innumerable individual cells of transmission, that is, into “antimedia” (pirate radio, etc.).

Or information has nothing to do with signification. It is something else, an operational model of another order, outside meaning and of the circulation of meaning strictly speaking. This is Shannon’s hypothesis: a sphere of information that is purely functional, a technical medium that does not imply any finality of meaning, and thus should also not be implicated in a value judgment. A kind of code, like the genetic code: it is what it is, it functions as it does, meaning is something else that in a sense comes after the fact, as it does for Monod in Chance and Necessity. In this case, there would simply be no significant relation between the inflation of information and the deflation of meaning. Or, very much on the contrary, there is a rigorous and necessary correlation between the two, to the extent that information is directly destructive of meaning and signification, or that it neutralizes them. The loss of meaning is directly linked to the dissolving, dissuasive action of information, the media, and the mass media.

The third hypothesis is the most interesting but flies in the face of every commonly held opinion. Everywhere socialization is measured by the exposure to media messages. Whoever is underexposed to the media is desocialized or virtually asocial. Everywhere information is thought to produce an accelerated circulation of meaning, a plus value of meaning homologous to the economic one that results from the accelerated rotation of capital. Information is thought to create communication, and even if the waste is enormous, a general consensus would have it that nevertheless, as a whole, there be an excess of meaning, which is redistributed in all the interstices of the social - just as consensus would have it that material production, despite its dysfunctions and irrationalities, opens onto an excess of wealth and social purpose. We are all complicitous in this myth. It is the alpha and omega of our modernity, without which the credibility of our social organization would collapse. Well, the fact is that it is collapsing, and for this very reason: because where we think that information produces meaning, the opposite occurs.

Information devours its own content. It devours communication and the social. And for two reasons.

1. Rather than creating communication, it exhausts itself in the act of staging communication. Rather than producing meaning, it exhausts itself in the staging of meaning. A gigantic process of simulation that is very familiar. The nondirective interview, speech, listeners who call in, participation at every level, blackmail through speech: “You are concerned, you are the event, etc.” More and more information is invaded by this kind of phantom content, this homeopathic grafting, this awakening dream of communication. A circular arrangement through which one stages the desire of the audience, the antitheater of communication, which, as one knows, is never anything but the recycling in the negative of the traditional institution, the integrated circuit of the negative. Immense energies are deployed to hold this simulacrum at bay, to avoid the brutal desimulation that would confront us in the face of the obvious reality of a radical loss of meaning.

It is useless to ask if it is the loss of communication that produces this escalation in the simulacrum, or whether it is the simulacrum that is there first for dissuasive ends, to short-circuit in advance any possibility of communication (precession of the model that calls an end to the real). Useless to ask which is the first term, there is none, it is a circular process - that of simulation, that of the hyperreal. The hyperreality of communication and of meaning. More real than the real, that is how the real is abolished. Thus not only communication but the social functions in a closed circuit, as a lure - to which the force of myth is attached. Belief, faith in information attach themselves to this tautological proof that the system gives of itself by doubling the signs of an unlocatable reality.

But one can believe that this belief is as ambiguous as that which was attached to myths in ancient societies. One both believes and doesn’t. One does not ask oneself, “I know very well, but still.” A sort of inverse simulation in the masses, in each one of us, corresponds to this simulation of meaning and of communication in which this system encloses us. To this tautology of the system the masses respond with ambivalence, to deterrence they respond with disaffection, or with an always enigmatic belief. Myth exists, but one must guard against thinking that people believe in it: this is the trap of critical thinking that can only be exercised if it presupposes the naivete and stupidity of the masses.

2. Behind this exacerbated mise-en-scène of communication, the mass media, the pressure of information pursues an irresistible destructuration of the social. Thus information dissolves meaning and dissolves the social, in a sort of nebulous state dedicated not to a surplus of innovation, but, on the contrary, to total entropy. Thus the media are producers not of socialization, but of exactly the opposite, of the implosion of the social in the masses. And this is only the macroscopic extension of the implosion of meaning at the microscopic level of the sign. This implosion should be analyzed according to McLuhan’s formula, the medium is the message, the consequences of which have yet to be exhausted.

That means that all contents of meaning are absorbed in the only dominant form of the medium. Only the medium can make an event - whatever the contents, whether they are conformist or subversive. A serious problem for all counterinformation, pirate radios, antimedia, etc. But there is something even more serious, which McLuhan himself did not see. Because beyond this neutralization of all content, one could still expect to manipulate the medium in its form and to transform the real by using the impact of the medium as form. If all the content is wiped out, there is perhaps still a subversive, revolutionary use value of the medium as such. That is - and this is where McLuhan’s formula leads, pushed to its limit - there is not only an implosion of the message in the medium, there is, in the same movement, the implosion of the medium itself in the real, the implosion of the medium and of the real in a sort of hyperreal nebula, in which even the definition and distinct action of the medium can no longer be determined.

Even the “traditional” status of the media themselves, characteristic of modernity, is put in question. McLuhan’s formula, the medium is the message, which is the key formula of the era of simulation (the medium is the message - the sender is the receiver - the circularity of all poles - the end of panoptic and perspectival space - such is the alpha and omega of our modernity), this very formula must be imagined at its limit where, after all the contents and messages have been volatilized in the medium, it is the medium itself that is volatilized as such. Fundamentally, it is still the message that lends credibility to the medium, that gives the medium its determined, distinct status as the intermediary of communication. Without a message, the medium also falls into the indefinite state characteristic of all our great systems of judgment and value. A single model, whose efficacy is immediate, simultaneously generates the message, the medium, and the “real.” Finally, the medium is the message not only signifies the end of the message, but also the end of the medium. There are no more media in the literal sense of the word (I’m speaking particularly of electronic mass media) - that is, of a mediating power between one reality and another, between one state of the real and another. Neither in content, nor in form. Strictly, this is what implosion signifies. The absorption of one pole into another, the short-circuiting between poles of every differential system of meaning, the erasure of distinct terms and oppositions, including that of the medium and of the real - thus the impossibility of any mediation, of any dialectical intervention between the two or from one to the other. Circularity of all media effects. Hence the impossibility of meaning in the literal sense of a unilateral vector that goes from one pole to another. One must envisage this critical but original situation at its very limit: it is the only one left us. 
It is useless to dream of revolution through content, useless to dream of a revelation through form, because the medium and the real are now in a single nebula whose truth is indecipherable.

The fact of this implosion of contents, of the absorption of meaning, of the evanescence of the medium itself, of the reabsorption of every dialectic of communication in a total circularity of the model, of the implosion of the social in the masses, may seem catastrophic and desperate. But this is only the case in light of the idealism that dominates our whole view of information. We all live by a passionate idealism of meaning and of communication, by an idealism of communication through meaning, and, from this perspective, it is truly the catastrophe of meaning that lies in wait for us. But one must realize that “catastrophe” has this “catastrophic” meaning of end and annihilation only in relation to a linear vision of accumulation, of productive finality, imposed on us by the system. Etymologically, the term itself only signifies the curvature, the winding down to the bottom of a cycle that leads to what one could call the “horizon of the event,” to an impassable horizon of meaning: beyond that nothing takes place that has meaning for us - but it suffices to get out of this ultimatum of meaning in order for the catastrophe itself to no longer seem like a final and nihilistic day of reckoning, such as it functions in our contemporary imaginary.

Beyond meaning, there is the fascination that results from the neutralization and the implosion of meaning. Beyond the horizon of the social, there are the masses, which result from the neutralization and the implosion of the social.

What is essential today is to evaluate this double challenge: the challenge of the masses to meaning and their silence (which is not at all a passive resistance), and the challenge to meaning that comes from the media and its fascination. All the marginal, alternative efforts to revive meaning are secondary in relation to that challenge.

Evidently, there is a paradox in this inextricable conjunction of the masses and the media: do the media neutralize meaning and produce unformed [informe] or informed [informée] masses, or is it the masses who victoriously resist the media by directing or absorbing all the messages that the media produce without responding to them? Some time ago, in “Requiem for the Media,” I analyzed and condemned the media as the institution of an irreversible model of communication without a response. But today? This absence of a response can no longer be understood at all as a strategy of power, but as a counterstrategy of the masses themselves when they encounter power. What then? Are the mass media on the side of power in the manipulation of the masses, or are they on the side of the masses in the liquidation of meaning, in the violence perpetrated on meaning, and in fascination? Is it the media that induce fascination in the masses, or is it the masses who direct the media into the spectacle? Mogadishu-Stammheim: the media make themselves into the vehicle of the moral condemnation of terrorism and of the exploitation of fear for political ends, but simultaneously, in the most complete ambiguity, they propagate the brutal charm of the terrorist act, they are themselves terrorists, insofar as they themselves march to the tune of seduction (cf. Umberto Eco on this eternal moral dilemma: how can one not speak of terrorism, how can one find a good use of the media - there is none). The media carry meaning and countermeaning, they manipulate in all directions at once, nothing can control this process, they are the vehicle for the simulation internal to the system and the simulation that destroys the system, according to an absolutely Moebian and circular logic - and it is exactly like this. There is no alternative to this, no logical resolution. Only a logical exacerbation and a catastrophic resolution.

Link: Europeana releases 20 million cultural objects into the public domain

Europe's digital library Europeana has been described as the ‘jewel in the crown’ of the sprawling web estate of EU institutions.

It aggregates digitised books, paintings, photographs, recordings and films from over 2,200 contributing cultural heritage organisations across Europe - including major national bodies such as the British Library, the Louvre and the Rijksmuseum.

Today Europeana is opening up data about all 20 million of the items it holds under the CC0 rights waiver. This means that anyone can reuse the data for any purpose - whether using it to build applications to bring cultural content to new audiences in new ways, or analysing it to improve our understanding of Europe’s cultural and intellectual history.

This is a coup for advocates of open cultural data. The data is being released after a grueling and unenviable internal negotiation process that has lasted over a year - involving countless meetings, workshops, and white papers presenting arguments and evidence for the benefits of openness.

Why does this matter? For one thing it will open the door for better discovery mechanisms for cultural content.

Currently information about digital images of, for example, Max Ernst’s etchings, Kafka’s manuscripts, Henry Fox Talbot’s calotypes, or Etruscan sarcophagi is scattered across numerous institutions, organisations and companies. Getting an accurate overview of where to find (digitised) cultural artefacts by a given artist or on a given topic is often a non-trivial process.

To complicate things even further, many public institutions actively prohibit the redistribution of information in their catalogues (as they sell it to - or are locked into restrictive agreements with - third party companies). This means it is not easy to join the dots to see which items live where across multiple online and offline collections.

Opening up data about these items will enable more collaboration and innovation around the discovery process.

Link: The Slow Web Movement

Timely not real-time. Rhythm not random. Moderation not excess. Knowledge not information. These are a few of the many characteristics of the Slow Web. It’s not so much a checklist as a feeling, one of being at greater ease with the web-enabled products and services in our lives.

The Slow Web Movement is a lot like the Slow Food Movement, in that they’re both blanket terms that mean a lot of different things. Slow Food began in part as a reaction to the opening of a McDonald’s in Piazza di Spagna in Rome, so from its very origin, it was defined by what it’s not. It’s not Fast Food, and we all know what Fast Food is… right?

Yet, if you ask a bunch of people to describe to you the qualities of Fast Food, you’re likely to get a bunch of different answers: it’s made from low-grade ingredients, it’s high in sugar, salt and fat, it’s sold by multinational corporations, it’s devoured quickly and in overlarge portions, it’s McDonaldsTacoBellSubway - and even though Subway’s spent a lot of money marketing fresh bread and ingredients, it’s still Fast Food, albeit “healthy” Fast Food.

Fast Food has an “I’ll know it when I see it” quality, and it has this quality because it’s describing something greater than all of its individual traits. Fast Food, and consequently, Slow Food, describe a feeling that we get from food.

Slow Web works the same way. Slow Web describes a feeling we get when we consume certain web-enabled things, be it products or content. It is the sum of its parts, but let’s start by describing what it’s not: the Fast Web.

Link: Nicholas Carr on Information and Contemplative Thought

The European: Is that because of the technology’s omnipresence or rather the way we engage with it? You have described how the immersion of browsing the web can’t be compared to that of reading a book.
Carr: If you watch a person using the net, you see a kind of immersion: Often they are very oblivious to what is going on around them. But it is a very different kind of attentiveness than reading a book. In the case of a book, the technology of the printed page focuses our attention and encourages a linear type of thinking. In contrast, the internet seizes our attention only to scatter it. We are immersed because there’s a constant barrage of stimuli coming at us and we seem to be very much seduced by those constantly changing patterns of visual and auditory stimuli. When we become immersed in our gadgets, we are immersed in a series of distractions rather than a sustained, focused type of thinking.

The European: And yet one can fall down the rabbit hole of Wikipedia; spending hours going from one article to the other, clicking each link that seems interesting.
Carr: It is important to realize that it is no longer just hyperlinks: You have to think of all aspects of using the internet. There are messages coming at us through email, instant messenger, SMS, tweets etc. We are distracted by everything on the page, the various windows, the many applications running. You have to see the entire picture of how we are being stimulated. If you compare that to the placidity of a printed page, it doesn’t take long to notice that the experience of taking information from a printed page is not only different but almost the opposite from taking in information from a network-connected screen. With a page, you are shielded from distraction. We underestimate how the page encourages focused thinking – which I don’t think is normal for human beings – whereas the screen indulges our desire to be constantly distracted.

The European: Recently, there’s been a rise in the popularity of software tools which simplify the online experience – such as Instapaper or fullscreen apps – all of which leverage the effect you described by emulating the printed page or the typewriter. They block out distractions and rather let the user stare at the plain text or the blinking cursor.
Carr: I am encouraged by services such as Instapaper, Readability or Freedom – applications that are designed to make us more attentive when using the internet. It is a good sign because it shows that some people are concerned about this and sense that they are no longer in control of their attention. Of course there’s an irony in looking for solutions in the same technology that keeps us distracted. The question is: How broadly are these applications being used? I don’t yet see them moving into the mainstream of people’s online experience. There’s a tension between tools that encourage attentive thought and the reading of longer articles, and the cultural trend that everything becomes a constant stream of little bits of information through which we make sense of the world. So far, the stream metaphor is winning, but I hope that the tools for attentiveness become more broadly used. So far, we don’t really know how many people use them or in which ways they do.

(Source: sunrec)

Link: The Library of Utopia

Google’s ambitious book-scanning program is foundering in the courts. Now a Harvard-led group is launching its own sweeping effort to put our literary heritage online. Will the Ivy League succeed where Silicon Valley failed?

In his 1938 book World Brain, H.G. Wells imagined a time—not very distant, he believed—when every person on the planet would have easy access to “all that is thought or known.”

The 1930s were a decade of rapid advances in microphotography, and Wells assumed that microfilm would be the technology to make the corpus of human knowledge universally available. “The time is close at hand,” he wrote, “when any student, in any part of the world, will be able to sit with his projector in his own study at his or her convenience to examine any book, any document, in an exact replica.”

Wells’s optimism was misplaced. The Second World War put idealistic ventures on hold, and after peace was restored, technical constraints made his plan unworkable. Though microfilm would remain an important medium for storing and preserving documents, it proved too unwieldy, too fragile, and too expensive to serve as the basis for a broad system of knowledge transmission. But Wells’s idea is still alive. Today, 75 years later, the prospect of creating a public repository of every book ever published—what the Princeton philosopher Peter Singer calls “the library of utopia”—seems well within our grasp. With the Internet, we have an information system that can store and transmit documents efficiently and cheaply, delivering them on demand to anyone with a computer or a smart phone. All that remains to be done is to digitize the more than 100 million books that have appeared since Gutenberg invented movable type, index their contents, add some descriptive metadata, and put them online with tools for viewing and searching.

It sounds straightforward. And if it were just a matter of moving bits and bytes around, a universal online library might already exist. Google, after all, has been working on the challenge for 10 years. But the search giant’s book program has foundered; it is mired in a legal swamp. Now another momentous project to build a universal library is taking shape. It springs not from Silicon Valley but from Harvard University. The Digital Public Library of America—the DPLA—has big goals, big names, and big contributors. And yet for all the project’s strengths, its success is far from assured.


Link: Are we asking the right questions?

Questions have surprising power to improve our lives, say a group of thinkers, if only we take the trouble to figure out how they work.

Rothstein is the cofounder of the Right Question Institute, a Cambridge-based nonprofit that exists to promote an idea he’s been nursing for more than a decade—that asking good questions is a life skill far more important than we realize. Rothstein, who has a doctorate in education and social policy from Harvard, believes that learning how to ask questions should be considered as critical as learning how to read, write, and do basic math. He thinks the ability to use questions strategically can make people smarter and better at their jobs, and give them more control when dealing with powerful bureaucracies, doctors, and elected officials.

There is, as yet, no field of “question studies,” but Rothstein and his codirector at the Right Question Institute, Luz Santana, are among a handful of thinkers making a career of taking a close look at how questions work, what our brains are doing when they put a question together, and how questions could drive learning, child development, innovation, business strategy, and creativity.

All of them are driven by the belief that a question is more than the simple thing we might think it is—that, in fact, it’s a unique instrument that we can get better at using if we try. Wielded with purpose and care, a question can become a sophisticated and potent tool to expand minds, inspire new ideas, and give us surprising power at moments when we might not believe we have any.

For Watts, a good question is one that is both “interesting” and “answerable.” “It’s relatively easy to come up with an answerable question that is not interesting,” he said, “and it’s relatively easy to come up with an interesting question that is unanswerable.” McKinney describes something similar in his book, writing that good questions are ones that can only be answered through investigation, such as, “What is surprisingly inconvenient about my product?” and “Who is using my product in a way I never intended—and how?”

Of course, for most people, asking questions is usually not just about coming up with innovative ideas—it’s about extracting information from others. But even seemingly factual questions can be deployed tactically: In their new book from Harvard Education Press, “Make Just One Change,” Rothstein and Santana from the Right Question Institute outline a basic classification system, dividing questions into ones that can be answered with a single word (like “yes” or “no”) and ones that require a more discursive response. Choosing the right question is in part a matter of making the right trade-off between clarity and depth: “Does the president support gay marriage?” versus “How have the president’s views on gay marriage evolved?” As part of their “Question Formulation Technique,” which is what the kids at Cambridge Rindge and Latin were engaged in that Friday morning, they ask people to transform one type of question into the other, in order to demonstrate that the way a question is structured can determine the range of possible answers it can inspire.

Link: Who is Social Media Really Working For?

As a lifelong political activist I would like to believe that “digital activism” had tremendous impact and leverage for change. However, as someone who has built his career upon communicating the “magic” of technology to the public on behalf of the leading companies in Silicon Valley, I remain skeptical concerning the democratizing impact of the Net through its newest expression, the social network. It’s my opinion that social networking, as an activist tool, is being vastly oversold. However, this is not without precedent or purpose: Great IPO fortunes depend on this popular misconception.

Given my background, I consider myself inoculated from charges of Luddism or “cyber-pessimism” – a pejorative that I also reject for Mr. Morozov and others who have liberated themselves from what I call “the cult of tech.” Simply defined, the cult of tech is the nexus of technology companies, telecom service providers, tech think-tankers and assorted digerati that derive their livelihood from promoting a digital panacea. These combined interests exert undue influence over an often befuddled popular media struggling to keep up with the “magic” of new tech offerings. For example, the cult of tech jumped at the marketing opportunity to brand an indigenous anti-authoritarian uprising in Iran as the “Twitter Revolution” with scant evidence of the application’s actual impact, negative or positive.

Technology always cuts two ways. Although the personal computer provided empowerment and creative liberation for individuals, and the Internet gave us access to information, they came at a cost. Experiences over the Net require a service provider to mediate connections amongst us. The early 90s freewheeling Internet with hundreds of independent ISPs has devolved into less than a handful of huge players. This new concentration of power, whether as a public or private entity, is cause for concern. Since centralized power is inherently non-democratic, these monolithic network entities are not inclined to liberate humanity. Therefore utopians better think twice if they are depending on the Net to promulgate democracy and freedom. For example:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

John Perry Barlow’s quote is Utopian indeed, a poetic touch, but unfortunately the last sentence is magical thinking. In reality, cyberspace, and the au courant “Cloud,” are not ethereal things – they are physical assets that depend upon tangible presence and resources. Just as we don’t have minds without brains in which consciousness can reside, cyberspace exists in data centers and network switching racks in real locations, owned by huge corporations and governments. Those in control of these physical assets rule over the network and the flow of information. Although they allow for variable amounts of chaos depending upon the cultural context, they are fundamentally authoritarian in structure.

Does social media make any kind of impact in molding opinion? Yes. As with all media types it serves both for good and evil, truth and lies. However, Mr. Morozov and I are on the same page in the belief that cultural and physical realities are the determining factors far more than “friending” a cause. Whether we like it or not, bullets and batons are more potent than bytes. Reality generally trumps virtuality.

In the opening paragraph of Mr. Szoka’s essay, he headlines three “successes” attributable to social network activism – the Obama election, the North African uprisings, and SOPA’s defeat. Below I will argue that all three are actually perfect examples of the medium’s failure to deliver change.


Lost (or gained) in translation

Chinese is ideal for micro-blogs, which typically restrict messages to 140 symbols: most messages do not even reach that limit. Arabic requires a little more space, but written Arabic routinely omits vowels anyway. Arabic tweets mushroomed last year, though thanks to the uprisings across the Middle East rather than to its linguistic properties. It is now the eighth most-used language on Twitter with over 2m public tweets every day. Romance tongues, among others, generally tend to be more verbose, as the chart below shows. So Spanish and Portuguese, the two most frequent European languages in the Twitterverse after English, have tricks to reduce the number of characters. Brazilians use “abs” for abraços (hugs) and “bjs” for beijos (kisses); Spanish speakers need never use personal pronouns (“I go” is denoted by the verb alone: voy). Some people use English to avoid censorship. Micro-bloggers on Sina Weibo (where messages containing some characters are automatically blocked) wrote “Bo” in the Roman alphabet in order to comment freely about Bo Xilai, a purged party chief.
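The character economics described above are easy to check directly. The sketch below counts the same rough sentence in three languages; the example sentences and translations are my own illustrative choices, not from the article:

```python
# Why a 140-character limit is roomier in some languages than others:
# the same (approximate) sentence, counted in Unicode characters.
# The sentences below are illustrative, not taken from the article.

sentences = {
    "English": "I am going to the library tomorrow morning.",
    "Spanish": "Voy a la biblioteca mañana por la mañana.",
    "Chinese": "我明天早上去图书馆。",
}

for language, text in sentences.items():
    # len() on a Python str counts Unicode code points, the unit
    # Twitter historically used for its 140-character limit.
    print(f"{language}: {len(text)} characters")
```

Chinese packs a whole clause into a handful of characters, while the Romance-language version runs longest, matching the pattern the article describes.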

Link: The Encyclopedists

An author assesses the importance of the landmark Encyclopédie and the three men who created it.

The eighteenth century has bequeathed to us one work which embodies in itself the spirit of the century,—that is the Encyclopédie. There are, of course, other works of that epoch more perfect, or nearer that perfection which was always the aim of its great authors. These are, however, the works of individual authors, and they give us only the labors of each author separately, while the Encyclopédie gives us the picture of an age which was one of the most important in the history of the world. With the subsidence of the bitter quarrels that characterized the publication of the Encyclopédie, not a little of the popular interest in the work has ceased. It remains, however, the intellectual fortress of its epoch, and although its defenders and its besiegers have lost much of their heat and ardor, it is not because the world of letters is grown more just or more peaceful, but because there are new fields of battle on which the warlike intellects of our own day find plenty to try their mettle. The Encyclopédie may well be read to-day, not for the interest of novelty which it once possessed, but for its importance in the history of literature and philosophy.

There is no recent summary of the lives of the Encyclopédistes; for the most part they are obscure men; and without going as far as Lebas (Dictionnaire Encyclopédique de la France, forming part of Didot’s Univers Pittoresque) in depreciating them, or as far as Lord Brougham in extolling them, it is doubtful whether there is anywhere an exact account of all of them. Indeed, many of the numerous contributors wrote only an article or two; the Biographie Universelle probably contains all that is worth knowing of the principal writers, and Grimm’s correspondence tells a thousand stories and anecdotes about these. D’Alembert and Voltaire are treated especially by Brougham in his “Men of Letters and Science,” but it is in a merely popular way. It is not easy anywhere to obtain satisfactory and direct reference to authorities on the subject; but perhaps the lives of the three great chiefs, Voltaire, D’Alembert, and Diderot, cover all the necessary grounds of knowledge with regard to their followers.

Two new works of interest, if not of authority, have appeared within this year, and each in its way is worth attention, and is sure to command it, as showing the hold which the Encyclopédistes still have on art and letters. Fichel, the cleverest painter in the newest school of French genre, has lately given us a capital picture, Les Encyclopédistes,—a group of the famous men of that large family,—in a library with the furniture, dress, and appointments of the period. Some of the faces are familiar to us even here,—Voltaire, Diderot, D’Alembert, Rousseau, Buffon,—and the others have also the sharp lines and speaking features of truthful portraits. A desire to find out the unnamed persons in the painting first caused the inquiry into the subject, which now takes this shape. Almost at the same time that Fichel’s picture was given to the world, the Librairie Internationale in Paris published Les Encyclopédistes, leurs Travaux, leurs Doctrines, et leur Influence, par Pascal Duprat,—a readable and attractive volume of nearly two hundred pages. It tells the story of the Encyclopédie, the political and moral state of France when it began, the incidents of its publication, and, sketching the authors who took part in its composition, explains its object and plan, its general spirit, its philosophical doctrine, its politics, its political economy, its influence on the eighteenth century, and the French Revolution, its opponents then and its value now. All this is done briefly, clearly, and well by one of the lesser lights of French letters, who, however, reflects fairly enough the influence, good and bad, which the Encyclopedists continue to exert.

Link: Nicholas Carr on Impact of the Information Age

FiveBooks asks writers, academics, and experts to list recommended books on a given topic.

Is the Internet dividing our attention? Are we so buried in technology that we ignore one another? The technology writer discusses the history and implications of the information age, from the mechanical clock to the iPhone.

Why do you think “consumed” is an ugly term?

I think it’s an ugly term when applied to information. When you talk about consuming information you are talking about information as a commodity, rather than information as the substance of our thoughts and our communications with other people. To talk about consuming it, I think you lose a deeper sense of information as a carrier of meaning and emotion – the matter of intimate intellectual and social exchange between human beings. It becomes more of a product, a good, a commodity.

You discuss other ramifications – ill effects, even – of the information age and the net specifically in your book The Shallows: What the Internet is Doing to Our Brains.

In The Shallows I argue that the Internet fundamentally encourages very rapid gathering of small bits of information – the skimming and scanning of information to quickly get the basic gist of it. What it discourages are therefore the ways of thinking that require greater attentiveness and concentration, everything from contemplation to reflection to deep reading.

The Internet is a hypertext system, which means that it puts lots of links in a text. These links are valuable to us because they allow us to go very quickly between one bit of information and another. But there are studies that compare what happens when a person reads a printed page of text versus when you put links into that text. Even though we may not be conscious of it, a link represents a little distraction, a little division of attention. You can see in the evidence that reading comprehension goes down with hypertext versus plaintext.

The Internet also is the most powerful multimedia technology ever invented. We get information not in one form but in many forms at once – text, sound, animation, images, moving pictures – whereas in the past you had to use different tools to get information in different forms. And it’s an interactive technology, incredibly good at sending messages and alerts. So as we read or take in information in other forms, we also tend to be bombarded by messages that are of interest to us – emails, texts, tweets, Facebook updates and so forth.

So you believe the “link economy” and suchlike lead to attention deficit?

I think that all of those qualities of the net encourage the division of attention, and an almost compulsive gathering of information very quickly. We’ve always skimmed and scanned in some areas of our intellectual lives, and that’s an important capability. But as we begin to carry the Internet with us every day – with the proliferation of first laptops and now smartphones and tablets – I think it is influencing the very way in which we think.

We are losing the balance of our thinking in this constant bombardment of information – those times when we can screen out distractions and spend time concentrating on one thing, or engaging in open-ended contemplation, reflection or introspection. Those qualities of thought, up until recently, were considered the highest and most characteristically human forms of thought. But we seem to be quite happy to throw them overboard in return for the many benefits of our online lives.

What are you basing these cognitive points on?

There is evidence from studies that indicates that we behave in a very mentally scattered way when we’re online. If you look at studies about the way people browse web pages, for instance, most people look at a page for 10 seconds or less, then click off to the next page. There are eye-tracking tests of how people read online, and it tends to be very cursory reading. Studies of email use show that we glance at our inbox something like 30 or 40 times an hour. Or you can see it in the explosion of text messages. The average American teenager today sends or receives well over 3,000 texts a month, which is about one text every six minutes during waking hours. And then there’s the streams of information on Facebook and Twitter.

There are cognitive costs to having this constant stream of interruptions and distractions, and a constant division of attention in perpetual multitasking. When you don’t pay attention you lose the cognitive benefits that come with that, namely the [increased] ability to form long-term memories or to weave information into high-level conceptual thoughts – rather than to simply Google everything and get it in discrete chunks. There is a loss here that until recently we’ve ignored because we have been so enamoured of the benefits of these technologies. The type of thinking that they encourage is, I think, a scattered, superficial and shallow way of thinking.

Link: The Church of Internet Piracy

Sweden’s newest religion may be the only faith that was born out of an insult. The idea to form a church promoting Internet piracy first came to an activist named Peter Sunde “four or five years ago” when he saw a comment by one of the lawyers seeking his prosecution for facilitating copyright infringement. Asked in an interview for her opinion of enthusiasts such as Sunde, at the time the spokesman for the popular file-sharing site Pirate Bay, the lawyer replied: “They’re a cult.” The slur provided a new direction for Sweden’s vibrant anti-copyright community to explore. “We have this history that every time somebody calls us something negative, we just take the name and make it ours,” Sunde says. “We were called pirates, so we said, ‘Let’s make pirates cool.’ O.K., so now, we’re a cult. Let’s make that fun as well.”

Thus was born the Missionary Church of Kopimism. Sunde never took action on his idea, but he mentioned it to his fellow activists and mused openly about it on the Internet. Soon enough a group had gathered that held sacred the act of copying information. The group adopted the keyboard shortcuts for copy and paste, ctrl-C and ctrl-V, as holy symbols—the church has no formal doctrine regarding ctrl-X—and began to develop a theology.

In stark opposition to “Thou shalt not steal,” the church’s central commandment, “Copy and seed,” is a call to download files and make them available for sharing. Life itself, the newly declared believers observed, depends on the replication of cells and the endless duplication of DNA. The church even survived a mini-schism when some believers questioned why their religion had adopted holy symbols popularized by —a company that has increasingly based its business model around the tight control of information—rather than, say, escape-W and ctrl-Y, the copy and paste commands for the open-source text editor Emacs. “For me it’s a fun prank, and it’s even better because I didn’t have to do it myself,” says Sunde.

More than 5,000 people have signed on to Kopimism’s website, according to its directors. While many of the church’s members likely share Sunde’s whimsical attitude, it would be a mistake to say no one takes the religion seriously. Kopimism is far from the first religion to get its start out of political expediency—think Henry VIII—and many of its adherents share deeply held political and philosophical beliefs about freedom of information. “In the beginning, it was a joke,” says Gustav Nipe, the church’s chairman. “But maybe we’ve stepped on something greater than we thought.” Sunde supports the cause, although he hasn’t signed up. “Like most Swedes, I’m an atheist,” he says.

To the extent that Kopimism has a spiritual home outside Internet forums, it is Uppsala, a university town some 40 miles north of Stockholm, where long, dark winters, monotonous weather, and a large student population provide fertile ground for the dissemination of the church’s precepts. For Nicholas Miles, a 21-year-old student of social work at the University of Uppsala, the copying held sacred by Kopimism isn’t just about file sharing (though it’s that, too). “By having this conversation, you and I right now are copying information,” he told me. “Sharing can involve music or a video game or a movie or, indeed, a philosophical text.”


The Narrative Eros of the Infographic

We’ve given today’s visual storytellers considerable power: for better or worse, they are the new meaning-makers, the priests of shorthand synthesis. We’re dependent on these priests to scrutinize, bundle, and produce beautiful information for us so that we can have our little infogasm and then retweet the information to our friends.

Perhaps you, like me, came across a delightfully elegant, delightfully lucid interactive chart of the European financial crisis in the online edition of The New York Times last fall. Clicking through its various cataclysmic scenarios, watching the arrows shift and the pastel circles grow pregnant with debt, I was able to comprehend, for the first time, the convoluted and potentially toxic lending relationships between Greece, Italy, and the rest of Schengen Europe as well as the implications of this toxicity for the wider world. The reduction of such messiness into such neatness filled me with a familiar, slightly nauseating feeling of delight, a feeling I have since dubbed the infogasm. This fleeting sense of the erotic occurs only when a graphic perfectly clarifies complex phenomena through the careful arrangement of its visual data sets. The infogasm is instantaneous, overwhelming, and usually transitory in nature, leaving you oddly exhausted. Plain old text does not function with quite the same epiphanic climax; by comparison, the written word’s magic is elusive and lingering, often revealing its fruits much later, after the article has been finished and put away.

In 1976, neuroscientist Douglas Nelson definitively described the cognitive potency of the image as the pictorial superiority effect. He and others have shown that our brains are essentially hard-wired for visuals—the very architecture of our visual cortex allows graphics a unique mainline into our consciousness. According to Allan Paivio’s somewhat controversial dual-coding theory, imagery stimulates both verbal and visual representations, whereas language is primarily processed through only the verbal channel. While there has been considerable pushback to Paivio’s theory since its introduction in the 1970s, numerous experiments have shown that imagery activates multiple, powerful neural pathways of memory recall.

Despite the great pleasures of the infogasm, it is evident that now, more than ever, we must be cautious with our information design. Visuals are easy to make, but they are also easy to fake, and their allure can turn them into potentially dangerous pieces of evidence. Despite Giner’s manifesto for clear standards in visual journalism, infographics—guided by designer, journalist, statistician, and artist alike—will probably continue to operate in that grey area between fact and fiction, egged on by our insatiable hunger for their graphical eros.

Link: Nicholas Carr on Information and Contemplative Thought

The European: Is that because of the technology’s omnipresence or rather the way we engage with it? You have described how the immersion of browsing the web can’t be compared to that of reading a book.
Carr: If you watch a person using the net, you see a kind of immersion: Often they are very oblivious to what is going on around them. But it is a very different kind of attentiveness than reading a book. In the case of a book, the technology of the printed page focuses our attention and encourages a linear type of thinking. In contrast, the internet seizes our attention only to scatter it. We are immersed because there’s a constant barrage of stimuli coming at us and we seem to be very much seduced by those constantly changing patterns of visual and auditory stimuli. When we become immersed in our gadgets, we are immersed in a series of distractions rather than a sustained, focused type of thinking.

The European: And yet one can fall down the rabbit hole of Wikipedia; spending hours going from one article to the other, clicking each link that seems interesting.
Carr: It is important to realize that it is no longer just hyperlinks: You have to think of all aspects of using the internet. There are messages coming at us through email, instant messenger, SMS, tweets etc. We are distracted by everything on the page, the various windows, the many applications running. You have to see the entire picture of how we are being stimulated. If you compare that to the placidity of a printed page, it doesn’t take long to notice that the experience of taking information from a printed page is not only different but almost the opposite from taking in information from a network-connected screen. With a page, you are shielded from distraction. We underestimate how the page encourages focussed thinking – which I don’t think is normal for human beings – whereas the screen indulges our desire to be constantly distracted.

The European: Recently, there’s been a rise in the popularity of software tools which simplify the online experience – such as Instapaper or fullscreen apps – all of which leverage the effect you described by emulating the printed page or the typewriter. They block out distractions and rather let the user stare at the plain text or the blinking cursor.
Carr: I am encouraged by services such as Instapaper, Readability or Freedom – applications that are designed to make us more attentive when using the internet. It is a good sign because it shows that some people are concerned about this and sense that they are no longer in control of their attention. Of course there’s an irony in looking for solutions in the same technology that keeps us distracted. The question is: How broadly are these applications being used? I don’t yet see them moving into the mainstream of peoples’ online experience. There’s a tension between tools that encourage attentive thought and the reading of longer articles, and the cultural trend that everything becomes a constant stream of little bits of information through which we make sense of the world. So far, the stream metaphor is winning, but I hope that the tools for attentiveness become more broadly used. So far, we don’t really know how many people use them or in what way.


Link: Why Privacy Matters Even if You Have "Nothing to Hide"

When the government gathers or analyzes personal information, many people say they’re not worried. “I’ve got nothing to hide,” they declare. “Only if you’re doing something wrong should you worry, and then you don’t deserve to keep it private.”

The nothing-to-hide argument pervades discussions about privacy. The data-security expert Bruce Schneier calls it the “most common retort against privacy advocates.” The legal scholar Geoffrey Stone refers to it as an “all-too-common refrain.” In its most compelling form, it is an argument that the privacy interest is generally minimal, thus making the contest with security concerns a foreordained victory for security.

The nothing-to-hide argument is everywhere. In Britain, for example, the government has installed millions of public-surveillance cameras in cities and towns, which are watched by officials via closed-circuit television. In a campaign slogan for the program, the government declares: “If you’ve got nothing to hide, you’ve got nothing to fear.” Variations of nothing-to-hide arguments frequently appear in blogs, letters to the editor, television news interviews, and other forums. One blogger in the United States, in reference to profiling people for national-security purposes, declares: “I don’t mind people wanting to find out things about me, I’ve got nothing to hide! Which is why I support [the government’s] efforts to find terrorists by monitoring our phone calls!”

On the surface, it seems easy to dismiss the nothing-to-hide argument. Everybody probably has something to hide from somebody. As Aleksandr Solzhenitsyn declared, “Everyone is guilty of something or has something to conceal. All one has to do is look hard enough to find what it is.” Likewise, in Friedrich Dürrenmatt’s novella “Traps,” which involves a seemingly innocent man put on trial by a group of retired lawyers in a mock-trial game, the man inquires what his crime shall be. “An altogether minor matter,” replies the prosecutor. “A crime can always be found.”

To evaluate the nothing-to-hide argument, we should begin by looking at how its adherents understand privacy. Nearly every law or policy involving privacy depends upon a particular understanding of what privacy is. The way problems are conceived has a tremendous impact on the legal and policy solutions used to solve them. As the philosopher John Dewey observed, “A problem well put is half-solved.”

Link: Starting Over

In his famous Lectures on Physics, Richard Feynman presented this interesting speculation:

“If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis (or the atomic fact, or whatever you wish to call it) that all things are made of atoms—little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied.”
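Feynman’s picture of atoms that attract “when they are a little distance apart” but repel “upon being squeezed into one another” corresponds to what physics textbooks commonly model with the Lennard-Jones potential. The sketch below illustrates that standard form; it is my own illustration of the idea, not anything from the lecture itself:

```python
# The Lennard-Jones potential, a standard textbook model of Feynman's
# "attract at a little distance, repel when squeezed" behavior:
#   V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
# The radial force F = -dV/dr is positive (repulsive) at small r
# and negative (attractive) at larger r.

def lj_force(r, eps=1.0, sigma=1.0):
    """Force between two atoms separated by distance r (> 0 pushes apart)."""
    sr6 = (sigma / r) ** 6
    return 24 * eps * (2 * sr6 ** 2 - sr6) / r

# Squeezed together: strong repulsion.
assert lj_force(0.9) > 0
# A little distance apart: attraction.
assert lj_force(1.5) < 0
```

The crossover sits at r = 2^(1/6) * sigma, the equilibrium spacing where attraction and repulsion balance, which is exactly the “perpetual motion around an equilibrium” picture the one-sentence summary compresses.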

Fascinated by Feynman’s question, Seed put a similar one to eleven leading thinkers: “Imagine—much as Feynman asked his audience—that in a mission to change everyone’s thinking about the world, you can take only one lesson from your field as a guide. In a single statement, what would it be?” Here are their answers.

#5 “The dazzling diversity of species and biological adaptations over 3.5 billion years of life on Earth owes its existence to ‘adaptation by natural selection,’ which requires just three simple conditions to operate: variation, differential selection (the best-performing traits survive and reproduce more effectively than others), and replication of successful traits by subsequent generations, via a double helix of molecules that code for proteins as biological building blocks, or, among more complex animals, via imitation or cultural transmission of methods and knowledge.” —Dominic Johnson is a reader in politics and international relations at Edinburgh University.
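Johnson’s three conditions are concrete enough to run as a toy simulation. In the sketch below, the “trait” is a single number and fitness is closeness to an arbitrary target; those are purely illustrative assumptions of mine, not part of the original statement:

```python
import random

# A minimal sketch of the three conditions: variation (random mutation),
# differential selection (fitter traits reproduce), and replication
# (offspring copy their parent's trait). Trait = one number; fitness =
# closeness to an arbitrary target. Both choices are illustrative only.

random.seed(42)
TARGET = 10.0

def fitness(trait):
    return -abs(trait - TARGET)  # closer to the target is fitter

population = [random.uniform(0, 1) for _ in range(50)]

for generation in range(200):
    # Differential selection: only the fitter half survives.
    survivors = sorted(population, key=fitness, reverse=True)[:25]
    # Replication with variation: each survivor leaves two mutated copies.
    population = [t + random.gauss(0, 0.1) for t in survivors for _ in range(2)]

best = max(population, key=fitness)
print(f"best trait after selection: {best:.2f}")
```

Starting from traits near 0, the population climbs toward the target through nothing but the three conditions Johnson names; no individual step "knows" where it is going.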