Sunshine Recorder

Link: Reflections on Free Will

Daniel Dennett reviews Sam Harris’s book Free Will, calling it a “museum of mistakes.”

Sam Harris’s Free Will (2012) is a remarkable little book, engagingly written and jargon-free, appealing to reason, not authority, and written with passion and moral seriousness. This is not an ivory tower technical inquiry; it is in effect a political tract, designed to persuade us all to abandon what he considers to be a morally pernicious idea: the idea of free will. If you are one of the many who have been brainwashed into believing that you have—or rather, are—an (immortal, immaterial) soul who makes all your decisions independently of the causes impinging on your material body and especially your brain, then this is the book for you. Or, if you have dismissed dualism but think that what you are is a conscious (but material) ego, a witness that inhabits a nook in your brain and chooses, independently of external causation, all your voluntary acts, again, this book is for you. It is a fine “antidote,” as Paul Bloom says, to this incoherent and socially malignant illusion. The incoherence of the illusion has been demonstrated time and again in rather technical work by philosophers (in spite of still finding supporters in the profession), but Harris does a fine job of making this apparently unpalatable fact accessible to lay people. Its malignance is due to its fostering the idea of Absolute Responsibility, with its attendant implications of what we might call Guilt-in-the-eyes-of-God for the unfortunate sinners amongst us and, for the fortunate, the arrogant and self-deluded idea of Ultimate Authorship of the good we do. We take too much blame, and too much credit, Harris argues. We, and the rest of the world, would be a lot better off if we took ourselves—our selves—less seriously. We don’t have the kind of free will that would ground such Absolute Responsibility for either the harm or the good we cause in our lives.

All this is laudable and right, and vividly presented, and Harris does a particularly good job getting readers to introspect on their own decision-making and notice that it just does not conform to the fantasies of this all too traditional understanding of how we think and act. But some of us have long recognized these points and gone on to adopt more reasonable, more empirically sound, models of decision and thought, and we think we can articulate and defend a more sophisticated model of free will that is not only consistent with neuroscience and introspection but also grounds a (modified, toned-down, non-Absolute) variety of responsibility that justifies both praise and blame, reward and punishment. We don’t think this variety of free will is an illusion at all, but rather a robust feature of our psychology and a reliable part of the foundations of morality, law and society. Harris, we think, is throwing out the baby with the bathwater.

He is not alone among scientists in coming to the conclusion that the ancient idea of free will is not just confused but also a major obstacle to social reform. His brief essay is, however, the most sustained attempt to develop this theme, which can also be found in remarks and essays by such heavyweight scientists as the neuroscientists Wolf Singer and Chris Frith, the psychologists Steven Pinker and Paul Bloom, the physicists Stephen Hawking and Albert Einstein, and the evolutionary biologists Jerry Coyne and (when he’s not thinking carefully) Richard Dawkins.

The book is, thus, valuable as a compact and compelling expression of an opinion widely shared by eminent scientists these days. It is also valuable, as I will show, as a veritable museum of mistakes, none of them new and all of them seductive—alluring enough to lull the critical faculties of this host of brilliant thinkers who do not make a profession of thinking about free will. And, to be sure, these mistakes have also been made, sometimes for centuries, by philosophers themselves. But I think we have made some progress in philosophy of late, and Harris and others need to do their homework if they want to engage with the best thought on the topic.

I am not being disingenuous when I say this museum of mistakes is valuable; I am grateful to Harris for saying, so boldly and clearly, what less outgoing scientists are thinking but keeping to themselves. I have always suspected that many who hold this hard determinist view are making these mistakes, but we mustn’t put words in people’s mouths, and now Harris has done us a great service by articulating the points explicitly, and the chorus of approval he has received from scientists goes a long way to confirming that they have been making these mistakes all along. Wolfgang Pauli’s famous dismissal of another physicist’s work as “not even wrong” reminds us of the value of crystallizing an ambient cloud of hunches into something that can be shown to be wrong. Correcting widespread misunderstanding is usually the work of many hands, and Harris has made a significant contribution.

The first parting of opinion on free will is between compatibilists and incompatibilists. The latter say (with “common sense” and a tradition going back more than two millennia) that free will is incompatible with determinism, the scientific thesis that there are causes for everything that happens. Incompatibilists hold that unless there are “random swerves” that disrupt the iron chains of physical causation, none of our decisions or choices can be truly free. Being caused means not being free—what could be more obvious? The compatibilists deny this; they have argued, for centuries if not millennia, that once you understand what free will really is (and must be, to sustain our sense of moral responsibility), you will see that free will can live comfortably with determinism—if determinism is what science eventually settles on.

Incompatibilists thus tend to pin their hopes on indeterminism, and hence were much cheered by the emergence of quantum indeterminism in 20th century physics. Perhaps the brain can avail itself of undetermined quantum swerves at the sub-atomic level, and thus escape the shackles of physical law! Or perhaps there is some other way our choices could be truly undetermined. Some have gone so far as to posit an otherwise unknown (and almost entirely unanalyzable) phenomenon called agent causation, in which free choices are caused somehow by an agent, but not by any event in the agent’s history. One exponent of this position, Roderick Chisholm, candidly acknowledged that on this view every free choice is “a little miracle”—which makes it clear enough why this is a school of thought endorsed primarily by deeply religious philosophers and shunned by almost everyone else. Incompatibilists who think we have free will, and therefore determinism must be false, are known as libertarians (which has nothing to do with the political view of the same name). Incompatibilists who think that all human choices are determined by prior events in their brains (which were themselves no doubt determined by chains of events arising out of the distant past) conclude from this that we can’t have free will, and, hence, are not responsible for our actions.

This concern for varieties of indeterminism is misplaced, argue the compatibilists: free will is a phenomenon that requires neither determinism nor indeterminism; the solution to the problem of free will lies in realizing this, not banking on the quantum physicists to come through with the right physics—or a miracle. Compatibilism may seem incredible on its face, or desperately contrived, some kind of a trick with words, but not to philosophers. Compatibilism is the reigning view among philosophers (just over 59%, according to the 2009 PhilPapers survey) with libertarians coming second with 13% and hard determinists only 12%. It is striking, then, that all the scientists just cited have landed on the position rejected by almost nine out of ten philosophers, but not so surprising when one considers that these scientists hardly ever consider the compatibilist view or the reasons in its favor.

Harris has considered compatibilism, at least cursorily, and his opinion of it is breathtakingly dismissive: After acknowledging that it is the prevailing view among philosophers (including his friend Daniel Dennett), he asserts that “More than in any other area of academic philosophy, the result resembles theology.” This is a low blow, and worse follows: “From both a moral and a scientific perspective, this seems deliberately obtuse.” (18) I would hope that Harris would pause at this point to wonder—just wonder—whether maybe his philosophical colleagues had seen some points that had somehow escaped him in his canvassing of compatibilism. As I tell my undergraduate students, whenever they encounter in their required reading a claim or argument that seems just plain stupid, they should probably double check to make sure they are not misreading the “preposterous” passage in question. It is possible that they have uncovered a howling error that has somehow gone unnoticed by the profession for generations, but not very likely. In this instance, the chances that Harris has underestimated and misinterpreted compatibilism seem particularly good, since the points he defends later in the book agree right down the line with compatibilism; he himself is a compatibilist in everything but name!

Seriously, his main objection to compatibilism, issued several times, is that what compatibilists mean by “free will” is not what everyday folk mean by “free will.” Everyday folk mean something demonstrably preposterous, but Harris sees the effort by compatibilists to make the folks’ hopeless concept of free will presentable as somehow disingenuous, unmotivated spin-doctoring, not the project of sympathetic reconstruction the compatibilists take themselves to be engaged in. So it all comes down to who gets to decide how to use the term “free will.” Harris is a compatibilist about moral responsibility and the importance of the distinction between voluntary and involuntary actions, but he is not a compatibilist about free will since he thinks “free will” has to be given the incoherent sense that emerges from uncritical reflection by everyday folk. He sees quite well that compatibilism is “the only philosophically respectable way to endorse free will” (p16) but adds:

However, the ‘free will’ that compatibilists defend is not the free will that most people feel they have. (p16).

First of all, he doesn’t know this. This is a guess, and suitably expressed questionnaires might well prove him wrong. That is an empirical question, and a thoughtful pioneering attempt to answer it suggests that Harris’s guess is simply mistaken. The newly emerging field of experimental philosophy (or “X-phi”) has a rather unprepossessing track record to date, but these are early days, and some of the work has yielded interesting results that certainly defy complacent assumptions common among philosophers. The study by Nahmias et al. (2005) found substantial majorities (between 60 and 80%) in agreement with propositions that are compatibilist in outlook, not incompatibilist.

Harris’s claim that the folk are mostly incompatibilists is thus dubious on its face, and even if it is true, maybe all this shows is that most people are suffering from a sort of illusion that could be replaced by wisdom. After all, most people used to believe the sun went around the earth. They were wrong, and it took some heavy lifting to convince them of this. Maybe this factoid is a reflection on how much work science and philosophy still have to do to give everyday laypeople a sound concept of free will. We’ve not yet succeeded in getting them to see the difference between weight and mass, and Einsteinian relativity still eludes most people. When we found out that the sun does not revolve around the earth, we didn’t then insist that there is no such thing as the sun (because what the folk mean by “sun” is “that bright thing that goes around the earth”). Now that we understand what sunsets are, we don’t call them illusions. They are real phenomena that can mislead the naive. 

Link: Life as a Nonviolent Psychopath

In 2005, James Fallon’s life started to resemble the plot of a well-honed joke or big-screen thriller: A neuroscientist is working in his laboratory one day when he thinks he has stumbled upon a big mistake. He is researching Alzheimer’s and using his healthy family members’ brain scans as a control, while simultaneously reviewing the fMRIs of murderous psychopaths for a side project. It appears, though, that one of the killers’ scans has been shuffled into the wrong batch.

The scans are anonymously labeled, so the researcher has a technician break the code to identify the individual in his family, and place his or her scan in its proper place. When he sees the results, however, Fallon immediately orders the technician to double check the code. But no mistake has been made: The brain scan that mirrors those of the psychopaths is his own.

After discovering that he had the brain of a psychopath, Fallon delved into his family tree and spoke with experts, colleagues, relatives, and friends to see if his behavior matched up with the imaging in front of him. He not only learned that few people were surprised at the outcome, but that the boundary separating him from dangerous criminals was less determinate than he presumed. Fallon wrote about his research and findings in the book The Psychopath Inside: A Neuroscientist’s Personal Journey Into the Dark Side of the Brain, and we spoke about the idea of nature versus nurture, and what—if anything—can be done for people whose biology might betray their behavior.


One of the first things you talk about in your book is the often unrealistic or ridiculous ways that psychopaths are portrayed in film and television. Why did you decide to share your story and risk being lumped in with all of that?

I’m a basic neuroscientist—stem cells, growth factors, imaging genetics—that sort of thing. When I found out about my scan, I kind of let it go after I saw that the rest of my family’s scans were quite normal. I was worried about Alzheimer’s, especially along my wife’s side, and we were concerned about our kids and grandkids. Then my lab was busy doing gene discovery for schizophrenia and Alzheimer’s and launching a biotech start-up from our research on adult stem cells. We won an award and I was so involved with other things that I didn’t actually look at my results for a couple of years.

This personal experience really had me look into a field that I was only tangentially related to, and burnished into my mind the importance of genes and the environment on a molecular level. For specific genes, those interactions can really explain behavior. And what is hidden under my personal story is a discussion about the effect of bullying, abuse, and street violence on kids.

You used to believe that people were roughly 80 percent the result of genetics, and 20 percent the result of their environment. How did this discovery cause a shift in your thinking?

I went into this with the bias of a scientist who believed, for many years, that genetics were very, very dominant in who people are—that your genes would tell you who you were going to be. It’s not that I no longer think that biology, which includes genetics, is a major determinant; I just never knew how profoundly an early environment could affect somebody.

While I was writing this book, my mother started to tell me more things about myself. She said she had never told me or my father how weird I was at certain points in my youth, even though I was a happy-go-lucky kind of kid. And as I was growing up, people all throughout my life said I could be some kind of gang leader or Mafioso don because of certain behavior. Some parents forbade their children from hanging out with me. They’d wonder how I turned out so well—a family guy, successful, professional, never been to jail and all that.

I asked everybody that I knew, including psychiatrists and geneticists that have known me for a long time, and knew my bad behavior, what they thought. They went through very specific things that I had done over the years and said, “That’s psychopathic.” I asked them why they didn’t tell me and they said, “We did tell you. We’ve all been telling you.” I argued that they had called me “crazy,” and they all said, “No. We said you’re psychopathic.”

I found out that I happened to have a series of genetic alleles, “warrior genes,” that had to do with serotonin and were thought to confer risk for aggression, violence, and low emotional and interpersonal empathy—if you’re raised in an abusive environment. But if you’re raised in a very positive environment, that can have the effect of offsetting the negative effects of some of the other genes.

I had some geneticists and psychiatrists who didn’t know me examine me independently, and look at the whole series of disorders I’ve had throughout my life. None of them have been severe; I’ve had the mild form of things like anxiety disorder and OCD, but it lined up with my genetics.

The scientists said, “For one, you might never have been born.” My mother had miscarried several times and there probably were some genetic errors. They also said that if I hadn’t been treated so well, I probably wouldn’t have made it out of being a teenager. I would have committed suicide or have gotten killed, because I would have been a violent guy.

How did you react to hearing all of this?

I said, “Well, I don’t care.” And they said, “That proves that you have a fair dose of psychopathy.” Scientists don’t like to be wrong, and I’m narcissistic so I hate to be wrong, but when the answer is there before you, you have to suck it up, admit it, and move on. I couldn’t.

I started reacting with narcissism, saying, “Okay, I bet I can beat this. Watch me and I’ll be better.” Then I realized my own narcissism was driving that response. If you knew me, you’d probably say, “Oh, he’s a fun guy”—or maybe, “He’s a big-mouth and a blowhard narcissist”—but I also think you’d say, “All in all, he’s interesting, and smart, and okay.” But here’s the thing—the closer to me you are, the worse it gets. Even though I have a number of very good friends, they have all ultimately told me over the past two years when I asked them—and they were consistent even though they hadn’t talked to each other—that I do things that are quite irresponsible. It’s not like I say, Go get into trouble. I say, Jump in the water with me.

What’s an example of that, and how do you come back from hurting someone in that way?

For me, because I need these buzzes, I get into dangerous situations. Years ago, when I worked at the University of Nairobi Hospital, a few doctors had told me about AIDS in the region as well as the Marburg virus. They said a guy had come in who was bleeding out of his nose and ears, and that he had been up in the Elgon, in the Kitum Caves. I thought, “Oh, that’s where the elephants go,” and I knew I had to visit. I would have gone alone, but my brother was there. I told him it was an epic trek to where the old matriarch elephants went to retrieve minerals in the caves, but I didn’t mention anything else.

When we got there, there was a lot of rebel activity on the mountain, so there was nobody in the park except for one guard. So we just went in. There were all these rare animals and it was tremendous, but also, this guy had died from Marburg after being here, and nobody knew exactly how he’d gotten it. I knew his path and followed it to see where he camped.

That night, we wrapped ourselves around a fire because there were lions and all these other animals. We were jumping around and waving sticks on fire at the animals in the absolute dark. My brother was going crazy and I joked, “I have to put my head inside of yours because I have a family and you don’t, so if a lion comes and bites one of our necks, it’s gotta be you.”

Again, I was joking around, but it was a real danger. The next day, we walked into the Kitum Caves and you could see where rocks had been knocked over by the elephants.  There was also the smell of all of this animal dung—and that’s where the guy got the Marburg; scientists didn’t know whether it was the dung or the bats.

A bit later, my brother read an article in The New Yorker about Marburg, which inspired the movie Outbreak. He asked me if I knew about it. I said, “Yeah. Wasn’t it exciting? Nobody gets to do this trip.” And he called me names and said, “Not exciting enough. We could’ve gotten Marburg; we could have gotten killed every two seconds.” All of my brothers have a lot of machismo and brio; you’ve got to be a tough guy in our family. But deep inside, I don’t think that my brother fundamentally trusts me after that. And why should he, right? To me, it was nothing.

After all of this research, I started to think of this experience as an opportunity to do something good out of being kind of a jerk my entire life. Instead of trying to fundamentally change—because it’s very difficult to change anything—I wanted to use what could be considered faults, like narcissism, to an advantage; to do something good.

What has that involved?

I started with simple things of how I interact with my wife, my sister, and my mother. Even though they’ve always been close to me, I don’t treat them all that well. I treat strangers pretty well—really well, and people tend to like me when they meet me—but I treat my family the same way, like they’re just somebody at a bar. I treat them well, but I don’t treat them in a special way. That’s the big problem.

I asked them this—it’s not something a person will tell you spontaneously—but they said, “I give you everything. I give you all this love and you really don’t give it back.” They all said it, and that sure bothered me. So I wanted to see if I could change. I don’t believe it, but I’m going to try.

In order to do that, every time I started to do something, I had to think about it, look at it, and go: No. Don’t do the selfish thing or the self-serving thing. Step-by-step, that’s what I’ve been doing for about a year and a half and they all like it. Their basic response is: We know you don’t really mean it, but we still like it.

I told them, “You’ve got to be kidding me. You accept this? It’s phony!” And they said, “No, it’s okay. If you treat people better it means you care enough to try.” It blew me away then and still blows me away now. 

But treating everyone the same isn’t necessarily a bad thing, is it? Is it just that the people close to you want more from you?

Yes. They absolutely expect and demand more. It’s a kind of cruelty, a kind of abuse, because you’re not giving them that love. My wife to this day says it’s hard to be with me at parties because I’ve got all these people around me, and I’ll leave her or other people in the cold. She is not a selfish person, but I can see how it can really work on somebody.

I gave a talk two years ago in India at the Mumbai LitFest on personality disorders and psychopathy, and we also had a historian from Oxford talk about violence against women in terms of the brain and social development. After it was over, a woman came up to me and asked if we could talk. She was a psychiatrist but also a science writer and said, “You said that you live in a flat emotional world—that is, that you treat everybody the same. That’s Buddhist.” I don’t know anything about Buddhism but she continued on and said, “It’s too bad that the people close to you are so disappointed in being close to you. Any learned Buddhist would think this was great.” I don’t know what to do with that.

Sometimes the truth is not just that it hurts, but that it’s just so disappointing. You want to believe in romance and have romance in your life—even the most hardcore, cold intellectual wants the romantic notion. It kind of makes life worth living. But with these kinds of things, you really start thinking about what a machine it means we are—what it means that some of us don’t need those feelings, while some of us need them so much. It destroys the romantic fabric of society in a way.

So what I do, in this situation, is think: How do I treat the people in my life as if I’m their son, or their brother, or their husband? It’s about going the extra mile for them so that they know I know this is the right thing to do. I know when the situation comes up, but my gut instinct is to do something selfish. Instead, I slow down and try to think about it. It’s like dumb behavioral modification; there’s no finesse to this, but I said, well, why does there have to be finesse? I’m trying to treat it as a straightaway thing, when the situation comes up, to realize there’s a chance that I might be wrong, or reacting in a poor way, or without any sort of love—like a human.

A few years ago there was an article in The New York Times called, “Can You Call a 9-Year-Old a Psychopath?” The subject was a boy named Michael whose family was concerned about him—he’d been diagnosed with several disorders and eventually deemed a possible psychopath by Dan Waschbusch, a researcher at Florida International University who studies “callous-unemotional children.” Dr. Waschbusch examines these children in hopes of finding possible treatment or rehabilitation. You mentioned earlier that you don’t believe people can fundamentally change; what is your take on this research?

In the ’70s, when I was still a postdoc and a young professor, I started working with some psychiatrists and neurologists who would tell me that they could identify a probable psychopath when he or she was only 2 or 3 years old. I asked them why they didn’t tell the parents and they said, “There’s no way I’m going to tell anybody. First of all, you can’t be sure; second of all, it could destroy the kid’s life; and third of all, the media and the whole family will be at your door with sticks and knives.” So, when Dr. Waschbusch came out with this two years ago, it was like, “My god. He actually said it.” This was something that all psychiatrists and neurologists in the field knew—especially if they were pediatric psychologists and had the full trajectory of a kid’s life. It can be recognized very, very early—certainly before 9 years old—but by that time the question of how to un-ring the bell is a tough one.

My bias is that even though I work in growth factors, plasticity, memory, and learning, I think the whole idea of plasticity in adults—or really after puberty—is so overblown. No one knows if the changes that have been shown are permanent and it doesn’t count if it’s only temporary. It’s like the Mozart Effect—sure, there are studies saying there is plasticity in the brain using a sound stimulation or electrical stimulation, but talk to this person in a year or two. Has anything really changed? An entire cottage industry was made from playing Mozart to pregnant women’s abdomens. That’s how the idea of plasticity gets out of hand. I think people can change if they devote their whole life to the one thing and stop all the other parts of their life, but that’s what people can’t do. You can have behavioral plasticity and maybe change behavior with parallel brain circuitry, but the number of times this happens is really rare.

So I really still doubt plasticity. I’m trying to do it by devoting myself to this one thing—to being a nice guy to the people that are close to me—but it’s a sort of game that I’m playing with myself because I don’t really believe it can be done, and it’s a challenge.

In some ways, though, the stakes are different for you because you’re not violent—and isn’t that the concern? Relative to your own life, your attempts to change may positively impact your relationships with your friends, family, and colleagues. But in the case of possibly violent people, they may harm others.

The jump from being a “prosocial” psychopath or somebody on the edge who doesn’t act out violently, to someone who is a true criminal predator is not clear. For me, I think I was protected because I was brought up in an upper-middle-class, educated environment with very supportive men and women in my family. So there may be a mass convergence of genetics and environment over a long period of time. But what would happen if I lost my family or lost my job; what would I then become? That’s the test.

For people who have the fundamental biology—the genetics, the brain patterns, and that early existence of trauma—first of all, if they’re abused they’re going to be pissed off and have a sense of revenge: I don’t care what happens to the world because I’m getting even. But a real, primary psychopath doesn’t need that. They’re just predators who don’t need to be angry at all; they do these things because of some fundamental lack of connection with the human race, and with individuals, and so on.

Someone who has money, and sex, and rock and roll, and everything they want may still be psychopathic—but they may just manipulate people, or use people, and not kill them. They may hurt others, but not in a violent way. Most people care about violence—that’s the thing. People may say, “Oh, this very bad investment counselor was a psychopath”—but the essential difference in criminality between that and murder is something we all hate and we all fear. It just isn’t known if there is some ultimate trigger. 

Link: The Problem with the Neuroscience Backlash

Link: Argument with Myself

Link: Robert Sapolsky: How Parasites Affect Human and Animal Behavior

In the endless sort of struggle that neurobiologists have — in terms of free will, determinism — my feeling has always been that there’s not a whole lot of free will out there, and if there is, it’s in the least interesting places and getting more sparse all the time. But there’s a whole new realm of neuroscience which I’ve been thinking about, which I’m starting to do research on, that throws in another element of things going on below the surface affecting our behavior. And it’s got to do with this utterly bizarre world of parasites manipulating our behavior. It turns out that this is not all that surprising. There are all sorts of parasites out there that get into some organism, and what they need to do is parasitize the organism and increase the likelihood that they, the parasite, will be fruitful and multiply, and in some cases they can manipulate the behavior of the host.

Some of these are pretty astounding. There’s this barnacle that rides on the back of some crab and is able to inject estrogenic hormones into the crab if the crab is male, and at that point, the male’s behavior becomes feminized. The male crab digs a hole in the sand for his eggs, except he has no eggs, but the barnacle sure does, and has just gotten this guy to build a nest for him. There are other ones where wasps parasitize caterpillars and get them to defend the wasp’s nests for them. These are extraordinary examples.

The parasite my lab is beginning to focus on is one in the world of mammals, where parasites are changing mammalian behavior. It’s got to do with this parasite, this protozoan called Toxoplasma. If you’re ever pregnant, if you’re ever around anyone who’s pregnant, you know you immediately get skittish about cat feces, cat bedding, cat everything, because it could carry Toxo. And you do not want to get Toxoplasma into a fetal nervous system. It’s a disaster.

The normal life cycle for Toxo is one of these amazing bits of natural history. Toxo can only reproduce sexually in the gut of a cat. It comes out in the cat feces, feces get eaten by rodents. And Toxo’s evolutionary challenge at that point is to figure out how to get rodents inside cats’ stomachs. Now it could have done this in really unsubtle ways, such as cripple the rodent or some such thing. Toxo instead has developed this amazing capacity to alter innate behavior in rodents.

If you take a lab rat who is 5,000 generations into being a lab rat, since the ancestor actually ran around in the real world, and you put some cat urine in one corner of their cage, they’re going to move to the other side. Completely innate, hard-wired reaction to the smell of cats, the cat pheromones. But take a Toxo-infected rodent, and they’re no longer afraid of the smell of cats. In fact they become attracted to it. The most damn amazing thing you can ever see, Toxo knows how to make cat urine smell attractive to rats. And rats go and check it out and that rat is now much more likely to wind up in the cat’s stomach. Toxo’s circle of life completed.

This was reported by a group in the UK about half a dozen years ago. Not a whole lot was known about what Toxo was doing in the brain, so ever since, part of my lab has been trying to figure out the neurobiological aspects. The first thing is that it’s for real. The rodents, rats, mice, really do become attracted to cat urine when they’ve been infected with Toxo. And you might say, okay, well, this is a rodent doing just all sorts of screwy stuff because it’s got this parasite turning its brain into Swiss cheese or something. It’s just non-specific behavioral chaos. But no, these are incredibly normal animals. Their olfaction is normal, their social behavior is normal, their learning and memory is normal. All of that. It’s not just a generically screwy animal.

You say, okay well, it’s not that, but Toxo seems to know how to destroy fear and anxiety circuits. But it’s not that, either. Because these are rats who are still innately afraid of bright lights. They’re nocturnal animals. They’re afraid of big, open spaces. You can condition them to be afraid of novel things. The system works perfectly well there. Somehow Toxo can laser out this one fear pathway, this aversion to predator odors.

We started looking at this. The first thing we did was introduce Toxo into a rat and it took about six weeks for it to migrate from its gut up into its nervous system. And at that point, we looked to see, where has it gone in the brain? It formed cysts, sort of latent, encapsulated cysts, and it wound up all over the brain. That was deeply disappointing.

But then we looked at how much winds up in different areas in the brain, and it turned out Toxo preferentially knows how to home in on the part of the brain that is all about fear and anxiety, a brain region called the amygdala. The amygdala is where you do your fear conditioning; the amygdala is what’s hyperactive in people with post-traumatic stress disorder; the amygdala is all about pathways of predator aversion, and Toxo knows how to get in there.

Next, we saw that Toxo would take the dendrites, the branches and cables that neurons use to connect to each other, and shrivel them up in the amygdala. It was disconnecting circuits. You wind up with fewer cells there. This is a parasite that is unwiring this stuff in the critical part of the brain for fear and anxiety. That’s really interesting. But that doesn’t tell us a thing about why only its predator aversion has been knocked out, whereas fear of bright lights, et cetera, is still in there. It knows how to find that particular circuitry.

So what’s going on from there? What’s it doing? Because it’s not just destroying this fear aversive response, it’s creating something new. It’s creating an attraction to the cat urine. And here is where this gets utterly bizarre. You look at circuitry in the brain, and there’s a reasonably well-characterized circuit in which neurons become metabolically active and talk to each other, a reasonably well-understood process that’s involved in predator aversion. It involves neurons in the amygdala, the hypothalamus, and some other brain regions getting excited.

Link: Amnesia and the Self That Remains When Memory Is Lost

Tom was one of those people we all have in our lives — someone to go out to lunch with in a large group, but not someone I ever spent time with one-on-one. We had some classes together in college and even worked in the same cognitive psychology lab for a while. But I didn’t really know him. Even so, when I heard that he had brain cancer that would kill him in four months, it stopped me cold.

I was 19 when I first saw him — in a class taught by a famous neuropsychologist, Karl Pribram. I’d see Tom at the coffee house, the library, and around campus. He seemed perennially enthusiastic, and had an exaggerated way of moving that made him seem unusually focused. I found it uncomfortable to make eye contact with him, not because he seemed threatening, but because his gaze was so intense.

Once Tom and I were sitting next to each other when Pribram told the class about a colleague of his who had just died a few days earlier. Pribram paused to look out over the classroom and told us that his colleague had been one of the greatest neuropsychologists of all time. Pribram then lowered his head and stared at the floor for such a long time I thought he might have discovered something there. Without lifting his head, he told us that his colleague had been a close friend, and had telephoned a month earlier to say he had just been diagnosed with a brain tumor growing in his temporal lobe. The doctors said that he would gradually lose his memory — not his ability to form new memories, but his ability to retrieve old ones … in short, to understand who he was.

Tom’s hand shot up. To my amazement, he suggested that Pribram was overstating the connection between temporal-lobe memory and overall identity. Temporal lobe or not, you still like the same things, Tom argued — your sensory systems aren’t affected. If you’re patient and kind, or a jerk, he said, such personality traits aren’t governed by the temporal lobes.

Pribram was unruffled. Many of us don’t realize the connection between memory and self, he explained. Who you are is the sum total of all that you’ve experienced. Where you went to school, who your friends were, all the things you’ve done or — just as importantly — all the things you’ve always hoped to do. Whether you prefer chocolate ice cream or vanilla, action movies or comedies, is part of the story, but the ability to know those preferences through accumulated memory is what defines you as a person. This seemed right to me. I’m not just someone who likes chocolate ice cream, I’m someone who knows, who remembers that I like chocolate ice cream. And I remember my favorite places to eat it, and the people I’ve eaten it with.

Pribram walked up to the lectern and gripped it with both hands. When they had spoken last, his colleague seemed more sad than frightened. He was worried about the loss of self more than the loss of memory. He’d still have his intelligence, the doctors said, but no memories. “What good is one without the other?” his colleague had asked. That was the last time Pribram spoke to him.

From a friend, Pribram had learned that his colleague had decided to go to the Caribbean for a vacation with his wife. One day he just walked out into the ocean and never came back. He couldn’t swim; he must have gone out with the intention of not coming back — before the damage from the tumor could take hold, Pribram said.

The room was silent for 10 or 15 seconds — stone silent. I looked over at Tom’s notebook. “Neuropsychologist contemplates losing his mind,” Tom had written.

If he had lived, Pribram’s colleague would have experienced what neuroscientists call retrograde amnesia. This is the kind of amnesia that is most often dredged up as a plot element in bad comedies and cheap mystery stories; so-and-so gets hit on the head and then can’t remember who he is anymore, wanders around aimlessly, finding himself in zany predicaments, until he gets hit on the head again and his memory remarkably returns. This almost never occurs in real life. Although retrograde amnesia is real, it’s usually the result of a tumor, stroke, or other organic brain trauma. It isn’t restored by a knock on the head. Because they can still form new memories, patients with retrograde amnesia are acutely aware that they have a cognitive deficit and are painfully knowledgeable about what they are losing.

Link: Meet Your Mind: A User's Guide to the Science of Consciousness

Your thoughts and feelings, your joy and sorrow… it’s all part of your identity, of your consciousness. But what exactly is consciousness? It may be the biggest mystery left in science. And for a radio show that loves ‘Big Ideas,’ we had to take up the question.

In our six-hour series, you’ll hear interviews with the world’s leading experts - neuroscientists, cognitive psychologists, philosophers, writers and artists. We’ll take you inside the brains of Buddhist monks, and across the ocean to visit France’s ancient cave paintings. We’ll tell you how to build a memory palace, and you’ll meet one of the first scientists to study the effects of LSD.

How do our brains work?  Are animals conscious? What about computers?  Will we ever crack the mystery of how the physical “stuff” of our brains produces mental experiences?

Mind and Brain: Neuroscientists have made remarkable discoveries about the brain, but we’re not close to cracking the mystery of consciousness.  How does a tangle of neurons inside your skull produce…you?

Memory and Forgetting: We explore the new science of memory and forgetting, how to build a memory palace, and how to erase a thought.

Wiring the Brain: Scientists are trying to develop a detailed map of the human brain.  For some scientists, the goal isn’t just to map the brain; it’s to crack the mystery of consciousness.

The Creative Brain: Creativity is a little like obscenity: You know it when you see it, but you can’t exactly define it… unless you’re a neuroscientist. In labs around the country, a new generation of scientists tackles the mystery of human creativity.

Extraordinary Minds: Certain brain disorders can lead to remarkable insights… even genius. We’ll peer into the world of autistic savants and dyslexics, and contemplate our cyborg future, when our brains merge with tiny, embedded computers.

Higher Consciousness: Suppose neuroscientists map the billions of neural circuits in the human brain… are we any closer to cracking the great existential mysteries - like meaning, purpose or happiness? Scientists and spiritual thinkers are now working together to create a new science of mindfulness.

Link: Neuroscience Challenges Criminal Law

Neuroscience is changing the meaning of criminal guilt. That might make us more, not less, responsible for our actions.

In the summer of 2008, police arrived at a caravan in the seaside town of Aberporth, west Wales, to arrest Brian Thomas for the murder of his wife. The night before, in a vivid nightmare, Thomas believed he was fighting off an intruder in the caravan – perhaps one of the kids who had been disturbing his sleep by revving motorbikes outside. Instead, he was gradually strangling his wife to death. When he awoke, he made a 999 call, telling the operator he was stunned and horrified by what had happened, and unaware of having committed murder.

Crimes committed by sleeping individuals are mercifully rare. Yet they provide striking examples of the unnerving potential of the human unconscious. In turn, they illuminate how an emerging science of consciousness is poised to have a deep impact upon concepts of responsibility that are central to today’s legal system.

After a short trial, the prosecution withdrew the case against Thomas. Expert witnesses agreed that he suffered from a sleep disorder known as pavor nocturnus, or night terrors, which affects around one per cent of adults and six per cent of children. His nightmares led him to do the unthinkable. We feel a natural sympathy towards Thomas, and jurors at his trial wept at his tragic situation. There is a clear sense in which this action was not the fault of an awake, thinking, sentient individual. But why do we feel this? What is it exactly that makes us think of Thomas not as a murderer but as an innocent man who has lost his wife in terrible circumstances?

Our sympathy can be understood with reference to laws that demarcate a separation between mind and body. A central tenet of the Western legal system is the concept of mens rea, or guilty mind. A necessary element to criminal responsibility is the guilty act — the actus reus. However, it is not enough simply to act: one must also be mentally responsible for acting in a particular way. The common law allows for those who are unable to conform to its requirements due to mental illness: the defence of insanity. It also allows for ‘diminished capacity’ in situations where the individual is deemed unable to form the required intent, or mens rea. Those people are understood to have control of their actions, without intending the criminal outcome. In these cases, the defendant may be found guilty of a lesser crime than murder, such as manslaughter.

In the case of Brian Thomas, the court was persuaded that his sleep disorder amounted to ‘automatism’, a comprehensive defence that denies there was even a guilty act. Automatism is the ultimate negation of both mens rea and actus reus. A successful defence of automatism implies that the accused person had neither awareness of what he was doing, nor any control over his actions: he was so far removed from conscious awareness that he acted like a runaway machine.

The problem is how to establish if someone lacks a crucial aspect of consciousness when he commits a crime. In Thomas’s case, sleep experts provided evidence that his nightmares were responsible for his wife’s death. But in other cases, establishing lack of awareness has proved more elusive.

Link: Testosterone On My Mind And In My Brain

This is a hormone that has fascinated me. It’s a small molecule that seems to be doing remarkable things. The variation we see in this hormone comes from a number of different sources. One of those sources is genes; many different genes can influence how much testosterone each of us produces, and I just wanted to share with you my fascination with this hormone, because it’s helping us take the science of sex differences one step further, to try to understand not whether there are sex differences, but what are the roots of those sex differences? Where are they springing from? And along the way we’re also hoping that this is going to teach us something about those neuro-developmental conditions like autism, like delayed language development, which seem to disproportionately affect boys more than girls, and potentially help us understand the causes of those conditions.

What I want to talk about tonight is this very specific hormone, testosterone. Our lab has been doing a lot of research to understand what this hormone does and, in particular, to test whether it plays any role in how the mind and the brain develops.

Before I get to that point, I’ll say a few words by way of background about typical sex differences, because that’s the cradle out of which this new research comes. Many of you know that the topic of sex differences in psychology is fraught with controversy. It’s an area where people, for many decades, didn’t really want to enter because of the risks of political incorrectness, and of being misunderstood.

Perhaps of all of the areas in psychology where people do research, the field of sex differences was kind of off limits. It was taboo, and that was partly because people believed that anyone who tried to do research into whether boys and girls, on average, differ, must have some sexist agenda. And so for that reason a lot of scientists just wouldn’t even touch it.  

By 2003, I was beginning to sense that that political climate was changing, that it was a time when people could ask the question — do boys and girls differ? Do men and women differ? — without fear of being accused of some kind of sexist agenda, but in a more open-minded way.

First of all, I started off looking at neuroanatomy, to look at what the neuroscience is telling us about the male and female brain. If you just take groups of girls and groups of boys and, for example, put them into MRI scanners to look at the brain, you do see differences on average. Take the idea that the sexes are identical from the neck upwards, even if they are very clearly different from the neck downwards: the neuroscience is telling us that that is just a myth, that there are differences, on average, between males and females, even in terms of brain volume, the number of connections between nerve cells, and the structure of the brain.

I say this carefully because it’s still a field which is prone to misunderstanding and misinterpretation, but just giving you some of the examples of findings that have come out of the neuroscience of sex differences, you find that the male brain, on average, is about eight percent larger than the female brain. We’re talking about a volumetric difference. It doesn’t necessarily mean anything, but that’s just a finding that’s consistently found. You find that difference from the earliest point you can put babies into the scanner; some of the studies have scanned infants as young as two weeks old.

You also find that if you look at postmortem tissue, looking at the human brain in terms of postmortem tissue, that the male brain has more connections, more synapses between nerve cells. It’s about a 30 percent difference on average between males and females. These differences are there.

The second big difference between males and females is about how much gray matter and white matter we see in the brain: that males have more gray matter and more white matter on average than the female brain does. White matter, just to be succinct, is mostly about connections between different parts of the brain. The gray matter is more about the cell bodies in the brain. But those differences exist. Then when you probe a bit further, you find that there are differences between the male and female brain in different lobes, the frontal lobe, the temporal lobe, in terms of how much gray and white matter there is.

You can also dissect the brain to look at specific regions. Some of you will have heard of regions like the amygdala, which people think of as a sort of emotion center, that tends to be larger in the male brain than the female brain, again, on average. There’s another region that shows the opposite pattern, larger in females than males: the planum temporale, an area involved in language. These structural differences exist, and I started by looking at these differences in terms of neuroanatomy, just because I thought, at least those are differences that are rooted in biology, and there might be less scope for disagreement about basic differences.

I’ve talked a little bit about neuroanatomy, but in terms of psychology, there are also sex differences that are reported. On average, females are developing empathy at a faster rate than males. I keep using that word ‘on average’ because none of these findings apply to all females or all males. You simply see differences emerge when you compare groups of males and females. Empathy seems to be developing faster in girls and in contrast, in boys there seems to be a stronger drive to systemize. I use that word ‘systemizing’, which is all about trying to figure out how systems work, becoming fascinated with systems. And systems could take a variety of different forms. It could be a mechanical system, like a computer; it could be a natural system, like the weather; it could be an abstract system, like mathematics; but boys seem to have a stronger interest in systematic information. I was contrasting these two very different psychological processes, empathy and systemizing. And that’s about as far as I went, and that was now some 11 years ago.

Since then my lab has wanted to try to understand where these sex differences come from, and now I’m fast-forwarding to tell you about the work that we’re doing on testosterone. I’m very interested in this molecule, partly because males produce more of it than females, and partly because there’s a long tradition of animal research which shows that this hormone may masculinize the brain, but there’s very little work on this hormone in humans.

Link: Your Brain on Pseudoscience: the Rise of Popular Neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat? 

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Link: I often have to cut into the brain

New Voices highlights the best emerging talents on granta.com. The latest in the series is Henry Marsh, a brain surgeon turned memoirist, whose piece here describes an operation on the deeply buried pineal gland.

I often have to cut into the brain and it is something I hate doing. With a pair of short-wave diathermy forceps I coagulate a few millimetres of the brain’s surface, turning the living, glittering pia arachnoid – the transparent membrane that covers the brain – along with its minute and elegant blood vessels, into an ugly scab. With a pair of microscopic scissors I then cut the blood vessels and dig downwards with a fine sucker. I look down the operating microscope, feeling my way through the soft white substance of the brain, trying to find the tumour. The idea that I am cutting and pushing through thought itself, that memories, dreams and reflections should have the consistency of soft white jelly, is simply too strange to understand and all I can see in front of me is matter. Nevertheless, I know that if I stray into the wrong area, into what neurosurgeons call eloquent brain, I will be faced with a damaged and disabled patient afterwards. The brain does not come with helpful labels saying ‘Cut here’ or ‘Don’t cut there’. Eloquent brain looks no different from any other area of the brain, so when I go round to the Recovery Ward after the operation to see what I have achieved, I am always anxious.

There are various ways in which the risk of doing damage can be reduced. There is a form of GPS for brain surgery called Computer Navigation where, instead of satellites orbiting the Earth, there are infrared cameras around the patient’s head which show the surgeon on a computer screen where his instruments are on the patient’s brain scan. You can operate with the patient awake under local anaesthetic: the eloquent areas of the brain can then be identified by stimulating the brain with an electrode and by giving the patient simple tasks to perform so that one can see if one is causing any damage as the operation proceeds. And then there is skill and experience and knowing when to stop. Quite often one must decide that it is better not to start in the first place and declare the tumour inoperable. Despite these methods, however, much still depends on luck, both good and bad. As I become more and more experienced, it seems that luck becomes ever more important.

I had a patient who had a tumour of the pineal gland. The dualist philosopher Descartes, who argued that mind and brain are entirely separate entities, placed the human soul in the pineal gland. It was here, he said, that the material brain in some magical and mysterious way communicated with the mind and with the immaterial soul. I wonder what he would have said if he could have seen my patients looking at their own brains on a video monitor, as some of them do when I operate under local anaesthetic.

Pineal tumours are very rare. They can be benign and they can be malignant. The benign ones do not necessarily need treatment. The malignant ones are treated with radiotherapy and chemotherapy but can prove fatal nevertheless. In the past they were considered to be inoperable but with modern microscopic neurosurgery this is no longer the case: it is usually now considered necessary to operate at least to obtain a biopsy – to remove a small part of the tumour for a precise diagnosis of the type so that you can then decide how best to treat it. The biopsy result will tell you whether to remove all of the tumour or whether to leave most of it in place, and whether the patient needs radiotherapy and chemotherapy. Since the pineal is buried deep in the middle of the brain the operation is, as surgeons say, a technical challenge; neurosurgeons look with awe and excitement at brain scans showing pineal tumours, like mountaineers looking up at a great peak that they hope to climb. To make matters worse, this particular patient – a very fit and athletic man in his thirties who had developed severe headaches as the tumour obstructed the normal circulation of cerebro-spinal fluid around his brain – had found it very hard to accept that he had a life-threatening illness and that his life was now out of his control. I had had many anxious conversations and phone calls with him over the days before the operation. I explained that the risks of the surgery, which included death or a major stroke, were ultimately less than the risks of not operating. He laboriously typed everything I said into his smartphone, as if taking down the long words – obstructive hydrocephalus, endoscopic ventriculostomy, pineocytoma, pineoblastoma – would somehow put him back in charge and save him. Anxiety is contagious – it is one of the reasons surgeons must distance themselves from their patients – and his anxiety, combined with my feeling of profound failure about an operation I had carried out a week earlier, meant that I faced the prospect of operating upon him with dread. I had seen him the night before the operation. When I see my patients the night before surgery I try not to dwell on the risks of the operation ahead, which I will already have discussed in detail at an earlier meeting. His wife was sitting beside him, looking quite sick with fear.

Link: Amazing Memory

Scientists are taking a closer look at the extremely rare people who remember everything from their pasts. And yes, their brains are different.

At last count, at least 33 people in the world could tell you what they ate for breakfast, lunch and dinner, on February 20, 1998. Or who they talked to on October 28, 1986. Pick any date and they can pull from their memory the most prosaic details of that thin slice of their personal history.

Others, no doubt, have this remarkable ability, but so far only those 33 have been confirmed by scientific research. The most famous is probably actress Marilu Henner, who showed off her stunning recall of autobiographical minutiae on “60 Minutes” a few years ago.

What makes this condition, known as hyperthymesia, so fascinating is that it’s so selective. These are not savants who can rattle off long strings of numbers, Rainman-style, or effortlessly retrieve tidbits from a deep vault of historical facts. In fact, they generally perform no better on standard memory tests than the rest of us.

Nope, only in the recollection of the days of their lives are they exceptional.

How does science explain it? Well, the research is still a bit limited, but recently scientists at the University of California at Irvine published a report on 11 people with superior autobiographical memory. They found, not surprisingly, that their brains are different. They had stronger “white matter” connections between their mid and forebrains when compared with the control subjects. Also, the region of the brain often associated with Obsessive-Compulsive Disorder (OCD) was larger than normal.

In line with that discovery, the researchers determined that the study’s subjects were more likely than usual to have OCD tendencies. Many were collectors – of magazines, shoes, videos, stamps, postcards – the type of collectors who keep intricately detailed catalogs of their prized possessions.

The scientists are wary, as yet, of drawing any conclusions. They don’t know how much, or even whether, that behavior is directly related to a person’s autobiographical memory. But they’re anxious to see where this leads and what it might teach them about how memory works.

Is it all about how brain structures communicate? Is it genetic? Is it molecular? To follow the clues, they’re analyzing at least another three dozen people who also seem to have the uncanny ability to retrieve their pasts in precisely-drawn scenes.

Link: The Cambridge Declaration on Consciousness

A group of leading neuroscientists has used a conference at Cambridge University to make an official declaration recognising consciousness in animals. The declaration was made at the Francis Crick Memorial Conference and signed by some of the leading lights in consciousness research, including Christof Koch and David Edelman. Check out the videos of the conference at http://fcmconference.org and the full text of The Cambridge Declaration on Consciousness, which concludes:

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

Also, a very good post on Earth in Transition, by Michael Mountain, who says:

“It’s a really important statement that will be used as evidence by those who are pushing for scientists to develop a more humane relationship with animals. It’s harder, for example, to justify experiments on nonhumans when you know that they are conscious beings and not just biological machines. Some of the conclusions reached in this declaration are the product of scientists who, to this day, still conduct experiments on animals in captivity, including dolphins, who are among the most intelligent species on Earth. Their own declaration will now be used as evidence that it’s time to stop using these animals in captivity and start finding new ways of making a living.”

and an article in Psychology Today: Scientists Finally Conclude Nonhuman Animals Are Conscious Beings

It’s said that repetition is boring conversation, but there’s now a wealth of scientific data that makes skepticism, and surely agnosticism, anti-science and harmful to animals. Now, at last, the prestigious Cambridge group shows this to be so. Bravo for them! So, let’s all work together to use this information to stop the abuse of millions upon millions of conscious animals in the name of science, education, food, amusement and entertainment, and clothing. We really owe it to them to use what we know on their behalf and to factor compassion and empathy into our treatment of these amazing beings.

Link: Does Contemporary Neuroscience Support or Challenge the Reality of Free Will?

All the world’s a stage, and all the men and women merely players. — Shakespeare

Humans love stories.  We tell each other the stories of our lives, in which we are not merely players reading a script but also the authors.  As authors we make choices that influence the plot and the other players on the stage.  Free will can be understood as our capacities both to make choices—to write our own stories—and to carry them out on the world’s stage—to control our actions in light of our choices.

What would it mean to lack free will?  It might mean we are merely puppets, our strings pulled by forces beyond our awareness and beyond our control.  It might mean we are players who merely act out a script we do not author.  Or perhaps we think we make up our stories, but in fact we do so only after we’ve already acted them out.  The central image in each case is that we merely observe what happens, rather than making a difference to what happens.

How might neuroscience fit into the story I am telling?  Most scientists who discuss free will say the story has an unhappy ending—that neuroscience shows free will to be an illusion.  I call these scientists “willusionists.” (Willusionists include Sam Harris, Jerry Coyne, John Bargh, Daniel Wegner, John-Dylan Haynes, and, as suggested briefly in some of their work, Stephen Hawking and Richard Dawkins.) Willusionists say that neuroscience demonstrates that we are not the authors of our own stories but more like puppets whose actions are determined by brain events beyond our control.  In his new book Free Will, Sam Harris says, “This [neuroscientific] understanding reveals you to be a biochemical puppet.” Jerry Coyne asserts in a USA Today column: “The ineluctable scientific conclusion is that although we feel that we’re characters in the play of our lives, rewriting our parts as we go along, in reality we’re puppets performing scripted parts written by the laws of physics.”

There are several ways willusionists reach their conclusion that we lack free will.  The first begins by defining free will in a dubious way.  Most willusionists assume that, by definition, free will requires a supernatural power of non-physical minds or souls: it’s only possible if we are somehow offstage, beyond the causal interactions of the natural world, yet also somehow able to pull the strings of our bodies nonetheless. (For example, Read Montague.)  It’s a mysterious picture, and one that willusionists simply assert is the ordinary understanding of free will.  Based on this definition of free will, they then conclude that neuroscience challenges free will, since it replaces a non-physical mind or soul with a physical brain.

But there is no reason to define free will as requiring this dualist picture.  Among philosophers, very few develop theories of free will that conflict with a naturalistic understanding of the mind—free will requires choice and control, and for some philosophers, indeterminism, but it does not require dualism.  Furthermore, studies on ordinary people’s understanding of free will show that, while many people believe we have souls, most do not believe that free will requires a non-physical soul.  And when presented with scenarios about persons whose decisions are fully caused by earlier events, or even fully predictable from brain events, most people respond that such persons still have free will and are morally responsible.  These studies strongly suggest that what people primarily associate with free will and moral responsibility is the capacity to make conscious decisions and to control one’s actions in light of such decisions.

But willusionists also argue that neuroscience challenges free will by challenging this role for consciousness in decision-making and action.  Research by Benjamin Libet, and more recently by neuroscientists such as John-Dylan Haynes, suggests that activity in the brain regularly precedes behavior—no surprise there!—but also precedes our conscious awareness of making a decision to move.  For instance, in one study neural activity measured by fMRI provided information about which of two buttons people would push up to 7–10 seconds before they were aware of deciding which to push.

Link: Locked-in Syndrome

When Richard Marsh had a stroke, doctors wanted to switch off his life support – he could hear every word but could not tell them he was alive. Now 95% recovered, he recounts his story.

Two days after regaining consciousness from a massive stroke, Richard Marsh watched helplessly from his hospital bed as doctors asked his wife, Lili, whether they should turn off his life support machine.

Marsh, a former police officer and teacher, had strong views on that suggestion. The 60-year-old didn’t want to die. He wanted the ventilator to stay on. He was determined to walk out of the intensive care unit and he wanted everyone to know it.

But Marsh couldn’t tell anyone that. The medics believed he was in a persistent vegetative state, devoid of mental consciousness or physical feeling.

Nothing could have been further from the truth. Marsh was aware, alert and fully able to feel every touch to his body.

"I had full cognitive and physical awareness," he said. "But an almost complete paralysis of nearly all the voluntary muscles in my body."

The first sign that Marsh was recovering was a twitching in his fingers, which spread through his hand and arm. He describes the feeling of accomplishment at being able to scratch his own nose again. But it’s still a mystery why he recovered when the vast majority of locked-in syndrome victims do not.

"They don’t know why I recovered because they don’t know why I had locked-in in the first place or what really to do about it. Lots of the doctors and medical experts I saw didn’t even know what locked-in was. If they did know anything, it was usually because they’d had a paragraph about it during their medical training. No one really knew anything."

Marsh has never spoken publicly about his experience before. But in an exclusive interview with the Guardian, he gave a rare and detailed insight into what it is like to be “locked in”.

"All I could do when I woke up in ICU was blink my eyes," he remembered. "I was on life support with a breathing machine, with tubes and wires on every part of my body, and a breathing tube down my throat. I was in a severe locked in-state for some time. Things looked pretty dire.

"My brain protected me – it didn’t let me grasp the seriousness of the situation. It’s weird but I can remember never feeling scared. I knew my cognitive abilities were 100%. I could think and hear and listen to people but couldn’t speak or move. The doctors would just stand at the foot of the bed and just talk like I wasn’t in the room. I just wanted to holler: ‘Hey people, I’m still here!’ But there was no way to let anyone know."