Sunshine Recorder

Link: Animal Consciousness and the Expansion of the Human Imagination

“The imagination is not a source of deception and delusion, but a capacity to sense what you do not know, to intuit what you cannot understand, to be more than you can know.” —William Irwin Thompson

Thirty years ago, if you mentioned animal consciousness at a psychology conference, you’d risk getting jabbed with a Skinnerian cattle prod by some beady-eyed behaviorist. Animals were largely regarded as stimulus-response machines, devoid of inner life. Dissenters were few and far between.

Fortunately, times have changed. Animal cognition is all the rage, and, following closely behind it is the study of comparative neurobiology. Last July, a consortium of well-known neuroscientists issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” as public an acknowledgment as you are likely to find on behalf of science that, yes, it seems that animals do in fact possess consciousness, or at least they possess the “neurobiological substrates” necessary to “generate” consciousness.

Even if it sounds like common sense, the Declaration is important, for it will pave the way for further study, and, one hopes, increasing respect for the impressive mental capacities of the non-human world.

But in other respects, when it comes to our relationship to animals, we continue to be held back by an even deeper intellectual taboo, and that is the taboo of imagining we can relate to animals in the first place.

“Anthropomorphism!” the scientific censors shout – a terrible thing to be accused of. Many cite the American philosopher Thomas Nagel, who in a famous 1974 essay, “What Is It Like to Be a Bat?” argued that, yes, it is like something to be a bat, but as humans we are largely cut off from that experience. We can simply never know what it is like to be another animal; our mental resources, he writes, “are inadequate to the task.”

Nagel’s paper is an elegant defense of the integrity and irreducibility of inner experience. It was a position that needed to be staked out, for then – as now – many zealous materialists believed that everything important about consciousness could be described by looking at brain-based external measures (Nagel recently expanded his argument in a new book, to the outrage of scientific fundamentalists).

However, in making this important point about consciousness, Nagel inadvertently erected an ideological wall between ourselves and the rest of nature. For it is not true that we cannot say anything at all about what it’s like to be a bat, or, indeed, any animal. As the philosopher Ralph Acampora says about our scaled and furry cousins, “It doesn’t follow to say that since we don’t know each other fully, we can’t know each other at all.”

Inspired by the Cambridge Declaration, I would like to propose my own declaration on behalf of human imagination and empathy: “The Psychology Tomorrow Declaration of Animal Relatability.” It hinges on a model I will call “The One and the Many.”

The idea is that, yes, obviously there are private aspects of another organism’s experience that, due to the constraints of our biology, we cannot fully know. We have to respect that; indeed, it should be celebrated. This whole bestiary of mind is a showcase for nature’s fantastic creativity. We rarely think of minds this way, but Darwin’s “endless forms most beautiful” is as true of the mental as it is the material.

But there are also elements we can know and understand. Since we are all descended from a common ancestor, there is always a measure of shared experience conserved in body and mind. This makes perfect sense regarding our closest relations. From an evolutionary standpoint, if a big-brained mammal acts similarly to us under similar conditions – if they squeal in pain, or nuzzle affectionately – then it makes sense that some part of the psychology behind their action is similar to ours. The primatologist Frans de Waal says those who reject this obvious insight are in “anthropodenial.” In their groundbreaking work, How Monkeys See the World, the biologists Dorothy Cheney and Robert Seyfarth point out that for field naturalists, a certain amount of anthropomorphizing actually works. “Attributing motives and strategies to animals,” they write, “is often the best way for an observer to predict what an individual is likely to do next.”

Our ability to relate to an animal’s experience shifts in proportion to the species in question. So the pith we share with a bacterium is far narrower than that we may share with a whale, which in turn is perhaps narrower than that we share with the chimp. In a sense, the human-to-animal mind question may simply be an exaggerated version of the human-to-human mind question: we can never entirely know another person’s experience – all the more so if they’re raised in a different culture – but there are deep points of overlap that can, crucially, be expanded.

This last point is important. The ratio of one to many, of the shared to the private – it isn’t static. There’s a malleable area between the two that can be expanded. Like anything else, relating to another being’s perspective is a practice. Spend enough time with dogs and you start to get a feel for what makes them sad or excited, for how they move in space, little hints and flavors of their experience. For most of us this understanding is vague and refracted, but for pet owners and dog trainers, it can be intimate and profound.

“A feeling for the organism” is how the famous geneticist Barbara McClintock described her own intuitions about life. Empathy as a capacity needn’t end at the human genus. It seems to be more a question of how much energy and intelligence and openness you bring to the inquiry. Obviously, the further away you get from the human, the more room for fantasy – this is a genuine risk – but this doesn’t mean there isn’t also a real sensitivity that can be cultivated.

And indeed, when you pan out to the big picture of human knowledge, what you see are multiple lines of inquiry converging on this exact point. From the scientific world, we have the study of animal cognition and communication, as well as more cutting-edge domains like the study of animal sense worlds (or “umwelts”) and embodied cognition. From the philosophical world, investigators are beginning to elaborate a whole series of intriguing approaches, from “affordances” to the phenomenology of “interbeing,” to name just two ideas. All of these lay the groundwork for a kind of radical perspective-taking; they are different ways of illuminating sensibilities we once dismissed as opaque.

My thought is me: that’s why I can’t stop. I exist because I think … and I can’t stop myself from thinking. At this very moment—it’s frightful—if I exist, it is because I am horrified at existing. I am the one who pulls myself from the nothingness to which I aspire: the hatred, the disgust of existing, there are as many ways to make myself exist, to thrust myself into existence. Thoughts are born at the back of me, like sudden giddiness, I feel them being born behind my head … if I yield, they’re going to come round in front of me, between my eyes— and I always yield, the thought grows and grows and there it is, immense, filling me completely and renewing my existence.
— Jean-Paul Sartre, Nausea


Link: "Where Am I?" by Daniel C. Dennett

Excerpt from Brainstorms: Philosophical Essays on Mind and Psychology by Daniel C. Dennett. 

Now that I’ve won my suit under the Freedom of Information Act, I am at liberty to reveal for the first time a curious episode in my life that may be of interest not only to those engaged in research in the philosophy of mind, artificial intelligence, and neuroscience but also to the general public.

Several years ago I was approached by Pentagon officials who asked me to volunteer for a highly dangerous and secret mission. In collaboration with NASA and Howard Hughes, the Department of Defense was spending billions to develop a Supersonic Tunneling Underground Device, or STUD. It was supposed to tunnel through the earth’s core at great speed and deliver a specially designed atomic warhead “right up the Red’s missile silos,” as one of the Pentagon brass put it.

The problem was that in an early test they had succeeded in lodging a warhead about a mile deep under Tulsa, Oklahoma, and they wanted me to retrieve it for them. “Why me?” I asked. Well, the mission involved some pioneering applications of current brain research, and they had heard of my interest in brains and of course my Faustian curiosity and great courage and so forth…. Well, how could I refuse? The difficulty that brought the Pentagon to my door was that the device I’d been asked to recover was fiercely radioactive, in a new way. According to monitoring instruments, something about the nature of the device and its complex interactions with pockets of material deep in the earth had produced radiation that could cause severe abnormalities in certain tissues of the brain. No way had been found to shield the brain from these deadly rays, which were apparently harmless to other tissues and organs of the body. So it had been decided that the person sent to recover the device should leave his brain behind. It would be kept in a safe place where it could execute its normal control functions by elaborate radio links. Would I submit to a surgical procedure that would completely remove my brain, which would then be placed in a life-support system at the Manned Spacecraft Center in Houston? Each input and output pathway, as it was severed, would be restored by a pair of microminiaturized radio transceivers, one attached precisely to the brain, the other to the nerve stumps in the empty cranium. No information would be lost, all the connectivity would be preserved. At first I was a bit reluctant. Would it really work? The Houston brain surgeons encouraged me. “Think of it,” they said, “as a mere stretching of the nerves. If your brain were just moved over an inch in your skull, that would not alter or impair your mind. We’re simply going to make the nerves indefinitely elastic by splicing radio links into them.”

I was shown around the life-support lab in Houston and saw the sparkling new vat in which my brain would be placed, were I to agree. I met the large and brilliant support team of neurologists, hematologists, biophysicists, and electrical engineers, and after several days of discussions and demonstrations I agreed to give it a try. I was subjected to an enormous array of blood tests, brain scans, experiments, interviews, and the like. They took down my autobiography at great length, recorded tedious lists of my beliefs, hopes, fears, and tastes. They even listed my favorite stereo recordings and gave me a crash session of psychoanalysis.

The day for surgery arrived at last and of course I was anesthetized and remember nothing of the operation itself. When I came out of anesthesia, I opened my eyes, looked around, and asked the inevitable, the traditional, the lamentably hackneyed postoperative question: “Where am I?” The nurse smiled down at me. “You’re in Houston,” she said, and I reflected that this still had a good chance of being the truth one way or another. She handed me a mirror. Sure enough, there were the tiny antennae poling up through their titanium ports cemented into my skull. “I gather the operation was a success,” I said. “I want to go see my brain.” They led me (I was a bit dizzy and unsteady) down a long corridor and into the life-support lab. A cheer went up from the assembled support team, and I responded with what I hoped was a jaunty salute. Still feeling lightheaded, I was helped over to the life-support vat. I peered through the glass. There, floating in what looked like ginger ale, was undeniably a human brain, though it was almost covered with printed circuit chips, plastic tubules, electrodes, and other paraphernalia. “Is that mine?” I asked. “Hit the output transmitter switch there on the side of the vat and see for yourself,” the project director replied. I moved the switch to OFF, and immediately slumped, groggy and nauseated, into the arms of the technicians, one of whom kindly restored the switch to its ON position. While I recovered my equilibrium and composure, I thought to myself: “Well, here I am sitting on a folding chair, staring through a piece of plate glass at my own brain… But wait,” I said to myself, “shouldn’t I have thought, ‘Here I am, suspended in a bubbling fluid, being stared at by my own eyes’?” I tried to think this latter thought. I tried to project it into the tank, offering it hopefully to my brain, but I failed to carry off the exercise with any conviction. I tried again.
“Here am I, Daniel Dennett, suspended in a bubbling fluid, being stared at by my own eyes.” No, it just didn’t work. Most puzzling and confusing. Being a philosopher of firm physicalist conviction, I believed unswervingly that the tokening of my thoughts was occurring somewhere in my brain: yet, when I thought “Here I am,” where the thought occurred to me was here, outside the vat, where I, Dennett, was standing staring at my brain.

I tried and tried to think myself into the vat, but to no avail. I tried to build up to the task by doing mental exercises. I thought to myself, “The sun is shining over there,” five times in rapid succession, each time mentally ostending a different place: in order, the sunlit corner of the lab, the visible front lawn of the hospital, Houston, Mars, and Jupiter. I found I had little difficulty in getting my “there”s to hop all over the celestial map with their proper references. I could loft a “there” in an instant through the farthest reaches of space, and then aim the next “there” with pinpoint accuracy at the upper left quadrant of a freckle on my arm. Why was I having such trouble with “here”? “Here in Houston” worked well enough, and so did “here in the lab,” and even “here in this part of the lab,” but “here in the vat” always seemed merely an unmeant mental mouthing. I tried closing my eyes while thinking it. This seemed to help, but still I couldn’t manage to pull it off, except perhaps for a fleeting instant. I couldn’t be sure. The discovery that I couldn’t be sure was also unsettling. How did I know where I meant by “here” when I thought “here”? Could I think I meant one place when in fact I meant another? I didn’t see how that could be admitted without untying the few bonds of intimacy between a person and his own mental life that had survived the onslaught of the brain scientists and philosophers, the physicalists and behaviorists. Perhaps I was incorrigible about where I meant when I said “here.” But in my present circumstances it seemed that either I was doomed by sheer force of mental habit to thinking systematically false indexical thoughts, or where a person is (and hence where his thoughts are tokened for purposes of semantic analysis) is not necessarily where his brain, the physical seat of his soul, resides. Nagged by confusion, I attempted to orient myself by falling back on a favorite philosopher’s ploy. I began naming things.

"Yorick," I said aloud to my brain, "you are my brain. The rest of my body, seated in this chair, I dub ‘Hamlet.’ " So here we all are: Yorick’s my brain, Hamlet’s my body, and I am Dennett. Avow, where am l? And when I think "where am l?" where’s that thought tokened? Is it tokened in my brain, lounging about in the vat, or right here between my ears where it seems to be tokened? Or nowhere? Its temporal coordinates give me no trouble; must it not have spatial coordinates as well? I began making a list of the alternatives.

1. Where Hamlet goes, there goes Dennett. This principle was easily refuted by appeal to the familiar brain-transplant thought experiments so enjoyed by philosophers. If Tom and Dick switch brains, Tom is the fellow with Dick’s former body—just ask him; he’ll claim to be Tom and tell you the most intimate details of Tom’s autobiography. It was clear enough, then, that my current body and I could part company, but not likely that I could be separated from my brain. The rule of thumb that emerged so plainly from the thought experiments was that in a brain-transplant operation, one wanted to be the donor, not the recipient. Better to call such an operation a body transplant, in fact. So perhaps the truth was,

2. Where Yorick goes, there goes Dennett. This was not at all appealing, however. How could I be in the vat and not about to go anywhere, when I was so obviously outside the vat looking in and beginning to make guilty plans to return to my room for a substantial lunch? This begged the question, I realized, but it still seemed to be getting at something important. Casting about for some support for my intuition, I hit upon a legalistic sort of argument that might have appealed to Locke.

Suppose, I argued to myself, I were now to fly to California, rob a bank, and be apprehended. In which state would I be tried: in California, where the robbery took place, or in Texas, where the brains of the outfit were located? Would I be a California felon with an out-of-state brain, or a Texas felon remotely controlling an accomplice of sorts in California? It seemed possible that I might beat such a rap just on the undecidability of that jurisdictional question, though perhaps it would be deemed an interstate, and hence Federal, offense. In any event, suppose I were convicted. Was it likely that California would be satisfied to throw Hamlet into the brig, knowing that Yorick was living the good life and luxuriously taking the waters in Texas? Would Texas incarcerate Yorick, leaving Hamlet free to take the next boat to Rio? This alternative appealed to me. Barring capital punishment or other cruel and unusual punishment, the state would be obliged to maintain the life-support system for Yorick though they might move him from Houston to Leavenworth, and aside from the unpleasantness of the opprobrium, I, for one, would not mind at all and would consider myself a free man under those circumstances. If the state has an interest in forcibly relocating persons in institutions, it would fail to relocate me in any institution by locating Yorick there. If this were true, it suggested a third alternative.

3. Dennett is wherever he thinks he is. Generalized, the claim was as follows: At any given time a person has a point of view and the location of the point of view (which is determined internally by the content of the point of view) is also the location of the person.

Such a proposition is not without its perplexities, but to me it seemed a step in the right direction. The only trouble was that it seemed to place one in a heads-I-win/tails-you-lose situation of unlikely infallibility as regards location. Hadn’t I myself often been wrong about where I was, and at least as often uncertain? Couldn’t one get lost? Of course, but getting lost geographically is not the only way one might get lost. If one were lost in the woods one could attempt to reassure oneself with the consolation that at least one knew where one was: one was right here in the familiar surroundings of one’s own body. Perhaps in this case one would not have drawn one’s attention to much to be thankful for. Still, there were worse plights imaginable, and I wasn’t sure I wasn’t in such a plight right now.

Link: One of Us

These are stimulating times for anyone interested in questions of animal consciousness. On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly. New animal behaviors and capacities are observed in the wild, often involving tool use—or at least object manipulation—the very kinds of activity that led the distinguished zoologist Donald R. Griffin to found the field of cognitive ethology (animal thinking) in 1978: octopuses piling stones in front of their hideyholes, to name one recent example; or dolphins fitting marine sponges to their beaks in order to dig for food on the seabed; or wasps using small stones to smooth the sand around their egg chambers, concealing them from predators. At the same time neurobiologists have been finding that the physical structures in our own brains most commonly held responsible for consciousness are not as rare in the animal kingdom as had been assumed. Indeed they are common. All of this work and discovery appeared to reach a kind of crescendo last summer, when an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”

That is technical language, but it speaks to a riddle age-old and instinctive. These thoughts begin, for most of us, typically, in childhood, when we are making eye contact with a pet or wild animal. I go back to our first family dog, a preternaturally intelligent-seeming Labrador mix, the kind of dog who herds playing children away from the street at birthday parties, an animal who could sense if you were down and would nuzzle against you for hours, as if actually sharing your pain. I can still hear people, guests and relatives, talking about how smart she was. “Smarter than some people I know!” But when you looked into her eyes—mahogany discs set back in the grizzled black of her face—what was there? I remember the question forming in my mind: can she think? The way my own brain felt to me, the sensation of existing inside a consciousness, was it like that in there?

For most of the history of our species, we seem to have assumed it was. Trying to recapture the thought life of prehistoric peoples is a game wise heads tend to leave alone, but if there’s a consistent motif in the artwork made between four thousand and forty thousand years ago, it’s animal-human hybrids, drawings and carvings and statuettes showing part man or woman and part something else—lion or bird or bear. Animals knew things, possessed their forms of wisdom. They were beings in a world of countless beings. Taking their lives was a meaningful act, to be prayed for beforehand and atoned for afterward, suggesting that beasts were allowed some kind of right. We used our power over them constantly and violently, but stopped short of telling ourselves that creatures of alien biology could not be sentient or that they were incapable of true suffering and pleasure. Needing their bodies, we killed them in spite of those things.

Only with the Greeks does there enter the notion of a formal divide between our species and every other on earth. Today in Greece you can walk by a field and hear two farmers talking about an alogo, a horse. An a-logos. No logos, no language. That’s where one of their words for horse comes from. The animal has no speech; it has no reason. It has no reason because it has no speech. Plato and Aristotle were clear on that. Admire animals aesthetically, perhaps, or sentimentally; otherwise they’re here to be used. Mute equaled brute. As time went by, the word for speech became the very word for rationality, the logos, an identification taken up by the early Christians, with fateful results. For them the matter was even simpler. The animals lack souls. They are all animal, whereas we are part divine.

And yet, if you put aside church dogma, and lean in to look at the Bible itself, or at the Christian tradition, the picture is more complicated. In the Book of Isaiah, God says that the day will come when the beasts of the field will “honor” Him. If there’s a characteristic of personal identity more defining than the capacity to honor, it’s hard to come up with. We remember St. Francis, going aside to preach to the little birds, his “sisters.” Needless to say he represented a radical extreme, whose conclusions about the right way of being in the world would not seem reasonable to most of the people who have his statue in their gardens. In his Salutation of the Virtues, he goes so far as to say that human beings desiring true holiness should make themselves “subject” to the animals, “and not to men alone, but also to all beasts.” If God grants that wild animals eat you, lie down, let them do “whatsoever they will”; it’s what He wanted.

Deeper than that, though, in the New Testament, in the Gospel According to Luke, there’s that exquisite verse, one of the most beautiful in the Bible, the one that says if God cares deeply about sparrows, don’t you think He cares about you? One is so accustomed to dwelling on the second, human, half of the equation, the comforting part, but when you put your hand over that and consider only the first, it’s a little startling: God cares deeply about the sparrows. Not just that, He cares about them individually. “Are not five sparrows sold for two pennies?” Jesus says. “Yet not one of them is forgotten in God’s sight.” Sparrows are an important animal for Jesus. In the so-called Infancy Gospel of Thomas, a boy Jesus, playing in mud by the river, fashions twelve sparrows out of clay—again the number is mentioned—until a fellow Jew, happening to pass, rebukes him for breaking the Sabbath laws (against “smoothing,” perhaps), at which point Jesus claps and says, “Go!”, and the sparrows fly away chirping. They are not, He says, forgotten. So God remembers them, bears them in mind. Stranger still, He cares about their deaths. In the Gospel According to Matthew we’re told, “Not one of them will fall to the ground apart from your Father.” Think about that. If the bird dies on the branch, and the bird has no immortal soul, and is from that moment only inanimate matter, already basically dust, how can it be “with” God as it’s falling? And not in some abstract all-of-creation sense but in the very way that we are with Him, the explicit point of the verse: the line right before it is “fear not them which kill the body, but are not able to kill the soul.” If sparrows lack souls, if the logos liveth not in them, Jesus isn’t making any sense in Matthew 10:28-29. The passage may make no sense anyway.
The sparrow population shows little sign of divine ministrations: two years ago the Royal Society for the Protection of Birds placed house sparrows on its “Red List” of globally threatened species. Charles Darwin supposedly said that the suffering of the lower animals throughout time was more than he could bear to think of. That feels, if slightly neurotic, more scrupulously observed.

The modern conversation on animal consciousness proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata—windup toys, like the Renaissance protorobots he’d seen as a boy in the gardens at Saint-Germain-en-Laye, “hydraulic statues” that moved and made music and even appeared to speak as they sprinkled the plants. This is how it was with animals, Descartes held. We look at them—they seem so full of depth, so like us, but it’s an illusion. Everything they do can be attached by causal chain to some process, some natural event. Picture two kittens next to each other, watching a cat toy fly around, their heads making precisely the same movements at precisely the same time, as if choreographed, two little fleshy machines made of nerves and electricity, obeying their mechanical mandate.

Descartes’ view drew immediate controversy. Writers such as the naturalist John Ray, in The Wisdom of God Manifested in the Works of the Creation (1691), protested on behalf of “the common sense of mankind” that if “beasts were automata or machines, they could have no sense, or perception of pleasure, or pain…which is contrary to the doleful significations they make when beaten, or tormented.” A view with which most of us can sympathize, but one that rests to a regrettable extent on naked anthropomorphism—their screams sound like ours, and so must mean the same thing.

Link: Power of Suggestion

The amazing influence of unconscious cues is among the most fascinating discoveries of our time­—that is, if it’s true.

…. As in so many other famous psychological experiments, the researcher lies to the subject. After rearranging lists of words into sensible sentences, the subject—a New York University undergraduate—is told that the experiment is about language ability. It is not. In fact, the real test doesn’t begin until the subject exits the room. In the hallway is a graduate student with a stopwatch hidden beneath her coat. She’s pretending to wait for a meeting but really she’s working with the researchers. She times how long it takes the subject to walk from the doorway to a strip of silver tape a little more than 30 feet down the corridor. The experiment hinges on that stopwatch.

The words the subject was asked to rearrange were not random, though they seemed that way (this was confirmed in postexperiment interviews with each subject). They were words like “bingo” and “Florida,” “knits” and “wrinkles,” “bitter” and “alone.” Reading the list, you can almost picture a stooped senior padding around a condo, complaining at the television. A control group unscrambled words that evoked no theme. When the walking times of the two groups were compared, the Florida-knits-alone subjects walked, on average, more slowly than the control group. Words on a page made them act old.

It’s a cute finding. But the more you think about it, the more serious it starts to seem. What if we are constantly being influenced by subtle, unnoticed cues? If “Florida” makes you sluggish, could “cheetah” make you fleet of foot? Forget walking speeds. Is our environment making us meaner or more creative or stupider without our realizing it? We like to think we’re steering the ship of self, but what if we’re actually getting blown about by ghostly gusts?

John Bargh and his co-authors, Mark Chen and Lara Burrows, performed that experiment in 1990 or 1991. They didn’t publish it until 1996. Why sit on such a fascinating result? For starters, they wanted to do it again, which they did. They also wanted to perform similar experiments with different cues. One of those other experiments tested subjects to see if they were more hostile when primed with an African-American face. They were. (The subjects were not African-American.) In the other experiment, the subjects were primed with rude words to see if that would make them more likely to interrupt a conversation. It did.

The researchers waited to publish until other labs had found the same type of results. They knew their finding would be controversial. They knew many people wouldn’t believe it. They were willing to stick their necks out, but they didn’t want to be the only ones.

Since that study was published in the Journal of Personality and Social Psychology, it has been cited more than 2,000 times. Though other researchers did similar work at around the same time, and even before, it was that paper that sparked the priming era. Its authors knew, even before it was published, that the paper was likely to catch fire. They wrote: “The implications for many social psychological phenomena … would appear to be considerable.” Translation: This is a huge deal.



Awakening

Since its introduction in 1846, anesthesia has allowed for medical miracles. Limbs can be removed, tumors examined, organs replaced—and a patient will feel and remember nothing. Or so we choose to believe. In reality, tens of thousands of patients each year in the United States alone wake up at some point during surgery. Since their eyes are taped shut and their bodies are usually paralyzed, they cannot alert anyone to their condition. In efforts to eradicate this phenomenon, medicine has been forced to confront how little we really know about anesthesia’s effects on the brain. The doctor who may be closest to a solution may also answer a question that has confounded centuries’ worth of scientists and philosophers: What does it mean to be conscious?

… This experience is called “intraoperative recall” or “anesthesia awareness,” and it’s more common than you might think. Although studies diverge, most experts estimate that for every 1,000 patients who undergo general anesthesia each year in the United States, one to two will experience awareness. Patients who awake hear surgeons’ small talk, the swish and stretch of organs, the suctioning of blood; they feel the probing of fingers, the yanks and tugs on innards; they smell cauterized flesh and singed hair. But because one of the first steps of surgery is to tape patients’ eyes shut, they can’t see. And because another common step is to paralyze patients to prevent muscle twitching, they have no way to alert doctors that they are awake.

Many of these cases are benign: vague, hazy flashbacks. But up to 70 percent of patients who experience awareness suffer long-term psychological distress, including PTSD—a rate five times higher than that of soldiers returning from Iraq and Afghanistan. Campbell now understands that this is what happened to her, although she didn’t believe it at first. “The whole idea of anesthesia awareness seemed over-the-top,” she told me. “It took years to begin to say, ‘I think this is what happened to me.’ ” She describes her memories of the surgery like those from a car accident: the moments before and after are clear, but the actual event is a shadowy blur of emotion. She searched online for people with similar experiences, found a coalition of victims, and eventually traveled up the East Coast to speak with some of them. They all shared a constellation of symptoms: nightmares, fear of confinement, the inability to lie flat (many sleep in chairs), and a sense of having died and returned to life. Campbell (whose name and certain other identifying details have been changed) struggles especially with the knowledge that there is no way for her to prove that she woke up, and that many, if not most, people might not believe her. “Anesthesia awareness is an intrapersonal event,” she says. “No one else sees it. No one else knows it. You’re the only one.”

In most cases of awareness, patients are awake but still dulled to pain. But that was not the case for Sherman Sizemore Jr., a Baptist minister and former coal miner who was 73 when he underwent an exploratory laparotomy in early 2006 to pinpoint the cause of recurring abdominal pain. In this type of procedure, surgeons methodically explore a patient’s viscera for evidence of abnormalities. Although there are no official accounts of Sizemore’s experience, his family maintained in a lawsuit that he was awake—and feeling pain—throughout the surgery. (The suit was settled in 2008.) He reportedly emerged from the operation behaving strangely. He was afraid to be left alone. He complained of being unable to breathe and claimed that people were trying to bury him alive. He refused to be around his grandchildren. He suffered from insomnia; when he could sleep, he had vivid nightmares.

The lawsuit claimed that Sizemore was tormented by doubt, wondering whether he had imagined the horrific pain. No one advised Sizemore to seek psychiatric help, his family alleged, and no one mentioned the fact that many patients who experience awareness suffer from PTSD. On February 2, 2006, two weeks after his surgery, Sizemore shot himself. He had no history of psychiatric illness.

Link: Mimetic Desire and the Scapegoat Mechanism

“Negative identity is a phenomenon whereby you define yourself by what you are not. This has enormous advantages, especially in terms of the hardening of psychological boundaries and the fortification of the ego: one can mobilize a great deal of energy on this basis and the new nation [the US] certainly did… . The downside … is that this way of generating an identity for yourself can never tell you who you actually are, in the affirmative sense. It leaves, in short, an emptiness at the center, such that you always have to be in opposition to something, or even at war with someone or something, in order to feel real.” —Morris Berman, A Question of Values

There is a little-known French philosopher called René Girard who has been quietly working away at a social theory that, if correct, has the potential to overturn everything we think we know about ourselves and the world we live in. In outlining his theory of mimetic desire, mimetic rivalry, and what he calls “the scapegoat mechanism,” Girard argues persuasively that sacrificial violence is the dark secret underpinning all human cultures. The scapegoat mechanism is the means by which a group transfers its collective hostility onto a single victim, discharging it and returning the group to unity. As I’ve tried to outline above, America’s and other dominant groups’ penchant for scapegoating is hardly a secret; but Girard repositions it from being a cultural artifact to being the cultural artifact.

The problem which scapegoating solves is what Girard terms mimesis: an unconscious form of imitation that invariably leads to competition. Girard describes desire as the most virulent “mimetic pathogen.” This idea was simply stated, as long ago as 1651, in Thomas Hobbes’ Leviathan: “if any two men desire the same thing, which nevertheless they cannot both enjoy, they become enemies.” We can see this easily enough at the microcosmic level. If two people share an affinity for each other, they make friends and share their common interests. The problem, Girard writes, is that this very affinity will eventually lead them to desire the same thing and end up as rivals. Two best friends fall for the same woman; the affinity quickly turns to antipathy and they end up murdering each other to prove whose desire is stronger. An even more common example is when two children are playing with toys: one picks up a toy and instantly the other wants to play with it. A previously harmonious arrangement quickly dissolves into anger and tears. Mimesis is like an endless dance of unconscious imitation in which people find themselves desiring things because they are desired by someone else. “Keeping up with the Joneses”: mimetic desire aroused not by the object itself but by the desire of others for the object. Competition becomes its own end, and the object of desire becomes irrelevant as previously civil neighbors become consumed by rivalry. They are now locked into a “negative identity” in which each needs the other in order to feel real. This idea is popular in movies, such as “cop hunts killer” doppelganger narratives, and in comic book characters like Batman and the Joker — opposite sides of a single coin, strengthening and justifying each other through opposition. It is also seen everywhere we look, only not quite so starkly drawn.

Girard’s theory extends this model to encompass (and explain) entire societies. It argues that, without the release provided by sacrificial violence, mimetic desire leads inevitably to mimetic rivalry and will finally culminate in mimetic violence. Humans are so highly imitative that, without the scapegoat mechanism, violent outbreaks within any social group will spread like wildfire and decimate the whole group. If two people desire the same thing, their desire will soon spread to a third, a fourth, and so on. Once the object is forgotten, mimetic rivalry snowballs into widespread antagonism. The final stage of the crisis is when the antagonists no longer imitate each other’s desires for an object, but each other’s antagonism. Think of Rwanda.

Link: Moving Through the Waters of Human Attention

The New Yorker has an amazing article on pickpocket and illusionist Apollo Robbins that is packed with gems about attention, misdirection and sleight-of-hand. Robbins is a self-taught but dedicated aficionado of human consciousness who has learnt the many ways in which our attention can be manipulated. The article explains many of Robbins’s pickpocketing techniques, describes how he got into the business, and shows how he has begun collaborating with cognitive scientists to understand scientifically what he has learnt artistically.

Robbins uses various metaphors to describe how he works with attention, talking about “surfing attention,” “carving up the attentional pie,” and “framing.” “I use framing the way a movie director or a cinematographer would,” he said. “If I lean my face close in to someone’s, like this”—he demonstrated—“it’s like a closeup. All their attention is on my face, and their pockets, especially the ones on their lower body, are out of the frame. Or if I want to move their attention off their jacket pocket, I can say, ‘You had a wallet in your back pocket—is it still there?’ Now their focus is on their back pocket, or their brain just short-circuits for a second, and I’m free to steal from their jacket.”

In fact, he jointly published a scientific study in 2011 based on his discovery that when something starts moving in a straight line, people tend to look back to the origin of the movement, but if something moves in a curve, their eyes stay fixed on the object.

If you want to see Robbins in action, and it really is astounding, you can catch him in various videos on YouTube.

There’s even one in which he explains how he does it in terms of the neuroscience of attention, which is particularly good.

But don’t miss the New Yorker article itself; it’s an entertaining and informative guide to a master of human attentional blind spots.

Click here to read the New Yorker article ‘A Pickpocket’s Tale’.

“An intellectual? Yes. And never deny it. An intellectual is someone whose mind watches itself. I like this, because I am happy to be both halves, the watcher and the watched. “Can they be brought together?” This is a practical question. We must get down to it. “I despise intelligence” really means: “I cannot bear my doubts.”
— Albert Camus

(Source: hyggeligs, via faulknerandfieldnotes)

Link: Meet Your Mind: A User's Guide to the Science of Consciousness

Your thoughts and feelings, your joy and sorrow… it’s all part of your identity, of your consciousness. But what exactly is consciousness? It may be the biggest mystery left in science. And for a radio show that loves ‘Big Ideas,’ we had to take up the question.

In our six-hour series, you’ll hear interviews with the world’s leading experts - neuroscientists, cognitive psychologists, philosophers, writers and artists. We’ll take you inside the brains of Buddhist monks, and across the ocean to visit France’s ancient cave paintings. We’ll tell you how to build a memory palace, and you’ll meet one of the first scientists to study the effects of LSD.

How do our brains work?  Are animals conscious? What about computers?  Will we ever crack the mystery of how the physical “stuff” of our brains produces mental experiences?

Mind and Brain: Neuroscientists have made remarkable discoveries about the brain, but we’re not close to cracking the mystery of consciousness.  How does a tangle of neurons inside your skull produce…you?

Memory and Forgetting: We explore the new science of memory and forgetting, how to build a memory palace, and how to erase a thought.

Wiring the Brain: Scientists are trying to develop a detailed map of the human brain.  For some scientists, the goal isn’t just to map the brain; it’s to crack the mystery of consciousness.

The Creative Brain: Creativity is a little like obscenity: You know it when you see it, but you can’t exactly define it… unless you’re a neuroscientist. In labs around the country, a new generation of scientists tackles the mystery of human creativity.

Extraordinary Minds: Certain brain disorders can lead to remarkable insights… even genius. We’ll peer into the world of autistic savants and dyslexics, and contemplate our cyborg future, when our brains merge with tiny, embedded computers.

Higher Consciousness: Suppose neuroscientists map the billions of neural circuits in the human brain… are we any closer to cracking the great existential mysteries - like meaning, purpose or happiness? Scientists and spiritual thinkers are now working together to create a new science of mindfulness.

The persistence of instinctive life in the guise of human intelligence is one of my most constant and profound contemplations. The artificial disguise of consciousness only highlights for me the unconsciousness it doesn’t succeed in disguising. From birth to death, man is the slave of the same external dimension that rules animals. Throughout his life he doesn’t live, he vegetatively thrives, with greater intensity and complexity than an animal. He’s guided by norms without knowing that they guide him or even that they exist, and all his ideas, feelings and acts are unconscious—not because there’s no consciousness in them but because there aren’t two consciousnesses. Flashes of awareness that we live an illusion—that, and no more, is what distinguishes the greatest of men.
— Fernando Pessoa, The Book of Disquiet
The ordinary man, however hard his life may be, at least has the pleasure of not thinking about it. To take life as it comes, living it externally like a cat or a dog – that is how people in general live, and that is how life should be lived, if we would have the contentment of the cat or dog. To think is to destroy. Thought itself is destroyed in the process of thinking, because to think is to decompose. If men knew how to meditate on the mystery of life, if they knew how to feel the thousand complexities which spy on the soul in every single detail of action, then they would never act – they wouldn’t even live. They would kill themselves from fright, like those who commit suicide to avoid being guillotined the next day.
— Fernando Pessoa, The Book of Disquiet

Fernando Pessoa, The Book of Disquiet, 317

One of my constant preoccupations is to understand how other people can exist, how there can be souls that aren’t mine, consciousnesses that have nothing to do with my own, which—because it’s a consciousness—seems to me like the only one. I accept that the man standing before me, who speaks with words like mine and gesticulates as I do or could do, is in some sense my fellow creature. But so are the figures from illustrations that fill my imagination, the characters I meet in novels, and the dramatic personae that move on stage through the actors who represent them.

No one, I suppose, genuinely admits the real existence of another person. We may concede that the person is alive and that he thinks and feels as we do, but there will always be an unnamed element of difference, a materialized inequality. There are figures from the past and living images from books that are more real to us than the incarnate indifferences that talk to us over shop counters, or happen to glance at us in the trams, or brush against us in the dead happenstance of the streets. Most people are no more for us than scenery, generally the invisible scenery of a street we know by heart.

I feel more kinship and intimacy with certain characters described in books and certain images I’ve seen in prints than I feel with many so-called real people, who are of that metaphysical insignificance known as flesh and blood. And ‘flesh and blood’ in fact describes them rather well: they’re like chunks of meat displayed in the window of a butcher’s, dead things bleeding as if they were alive, shanks and cutlets of Destiny.

I’m not ashamed of feeling this way, as I’ve discovered that’s how everyone feels. What seems to lie behind people’s mutual contempt and indifference, such that they can kill each other like assassins who don’t really feel they’re killing, or like soldiers who don’t think about what they’re doing, is that no one pays heed to the apparently abstruse fact that other people are also living souls.

On certain days, in certain moments, brought to me by I don’t know what breeze and opened to me by the opening of I don’t know what door, I suddenly feel that the corner grocer is a thinking entity, that his assistant, who at this moment is bent over a sack of potatoes next to the entrance, is truly a soul capable of suffering.

When I was told yesterday that the employee of the tobacco shop had committed suicide, it seemed like a lie. Poor man, he also existed! We had forgotten this, all of us, all who knew him in the same way as all those who never met him. Tomorrow we’ll forget him even better. But he evidently had a soul, for he killed himself. Passion? Anxiety? No doubt… But for me, as for all humanity, there’s only the memory of a dumb smile and a shabby sports coat that hung unevenly from the shoulders. That’s all that remains to me of this man who felt so much that he killed himself for feeling, since what else does one kill himself for? Once, as I was buying cigarettes from him, it occurred to me that he would go bald early. As it turns out, he didn’t have time enough to go bald. That’s one of the memories I have of him. What other one can I have, if even this one is not of him but of one of my thoughts?

I suddenly see his corpse, the coffin where they placed him, the so alien grave where they must have lowered him, and it dawns on me that the cashier of the tobacco shop, with crooked coat and all, was in a certain way the whole of humanity.

It was only a flash. What’s clear to me now, today, as the human being I am, is that he died. That’s all.

Fernando Pessoa, The Book of Disquiet, 338

I’ve always worried, in those occasional moments of detachment when we become conscious of ourselves as individuals who are seen as ‘others’ by other people, about the physical and even moral impression I must make on those who observe me and talk to me, whether on a daily basis or in a chance meeting.

We’re all used to thinking of ourselves as primarily mental realities, and of other people as immediately physical realities. We vaguely see ourselves as physical people, in so far as we consider how we look to others. And we vaguely see others as mental realities, though only when we’re in love or in conflict does it really dawn on us that they, like we, are predominantly soul.

And so sometimes I lose myself in futile speculations about the sort of person I am in the eyes of others: how my voice sounds, what kind of impression I leave in their involuntary memory, how my gestures, my words and my visible life are inscribed on the retinas of their interpretation. I’ve never succeeded in seeing myself from the outside. No mirror can show us ourself from outside, because no mirror can take us out of ourself. We would need a different soul, a different way of looking and thinking. If I were an actor projected on a screen, or if I recorded my voice on records, I’m certain that I still wouldn’t know what I am on the outside, because like it or not, and no matter what I might record of myself, I’m always here inside, enclosed by high walls, on the private estate of my consciousness of me.

I don’t know if others are like me, or if the science of life consists essentially in being so alienated from oneself that this alienation becomes second nature, such that one can participate in life as an exile from his own consciousness. Or perhaps other people, even more self-absorbed than I, are completely given over to the brutishness of being only themselves, living outwardly by the same miracle that enables bees to form societies more highly organized than any nation and allows ants to communicate with a language of tiny antennae whose results surpass our complex system of mutual understanding.

The geography of our consciousness of reality is an endless complexity of irregular coasts, low and high mountains, and myriad lakes. And if I ponder too much, I see it all as a kind of map, like that of the Pays du Tendre or of Gulliver’s Travels, a fantasy of exactitude inscribed in an ironic or fanciful book for the amusement of superior beings, who know where countries are really countries.

Everything is complex for those who think, and no doubt thought itself takes delight in making things yet more complex. But those who think need to justify their abdication with a vast programme of understanding, which they set forth—like liars their explanations—with heaps of exaggerated detail that eventually reveal, once the earth is swept away, the lying root.

Everything is complex, or I’m the one who’s complex. But at any rate it doesn’t matter, because at any rate nothing matters. All of this, all these considerations that have strayed off the broad highway, vegetate in the gardens of excluded gods like climbing plants detached from their walls. And on this night as I conclude these inconclusive considerations, I smile at the vital irony which makes them appear in a human soul that was already, even before there were stars, an orphan of Fate’s grand purposes.