Sunshine Recorder

Link: Argument with Myself

Link: Meet Your Mind: A User's Guide to the Science of Consciousness

Your thoughts and feelings, your joy and sorrow….it’s all part of your identity, of your consciousness. But what exactly is consciousness? It may be the biggest mystery left in science. And for a radio show that loves ‘Big Ideas,’ we had to take up the question.  

In our six-hour series, you’ll hear interviews with the world’s leading experts - neuroscientists, cognitive psychologists, philosophers, writers and artists. We’ll take you inside the brains of Buddhist monks, and across the ocean to visit France’s ancient cave paintings. We’ll tell you how to build a memory palace, and you’ll meet one of the first scientists to study the effects of LSD.

How do our brains work?  Are animals conscious? What about computers?  Will we ever crack the mystery of how the physical “stuff” of our brains produces mental experiences?

Mind and Brain: Neuroscientists have made remarkable discoveries about the brain, but we’re not close to cracking the mystery of consciousness.  How does a tangle of neurons inside your skull produce…you?

Memory and Forgetting: We explore the new science of memory and forgetting, how to build a memory palace, and how to erase a thought.

Wiring the Brain: Scientists are trying to develop a detailed map of the human brain.  For some scientists, the goal isn’t just to map the brain; it’s to crack the mystery of consciousness.

The Creative Brain: Creativity is a little like obscenity:  You know it when you see it, but you can’t exactly define it….unless you’re a neuroscientist.  In labs around the country, a new generation of scientists tackles the mystery of human creativity.

Extraordinary Minds: Certain brain disorders can lead to remarkable insights….even genius.  We’ll peer into the world of autistic savants and dyslexics, and contemplate our cyborg future, when our brains merge with tiny, embedded computers.

Higher Consciousness: Suppose neuroscientists map the billions of neural circuits in the human brain….are we any closer to cracking the great existential mysteries - like meaning, purpose or happiness?  Scientists and spiritual thinkers are now working together to create a new science of mindfulness.

Link: Why IQ Rises

"The Flynn effect does not reflect gains in general intelligence, it reflects a shift to more abstract thinking brought about by a changing social environment. We aren’t getting smarter; we are getting more modern."

In the mid-’80s, the political philosopher James Flynn noticed a remarkable but puzzling trend: for the past century, average IQ scores in every industrialized nation have been steadily rising. And not just a little: nearly three points every decade. Every several years, IQ tests have to be “re-normed” so that the average remains 100. This means that a person who scored 100 a century ago would score 70 today; a person who tested as average a century ago would today be declared mentally retarded.
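
To make the re-norming arithmetic concrete, here is a rough back-of-the-envelope sketch in Python, assuming the steady gain of roughly three points per decade that the article cites:

```python
# Rough illustration of the re-norming arithmetic described above.
# Assumes a steady gain of ~3 IQ points per decade, as the article states.

GAIN_PER_DECADE = 3      # points
DECADES = 10             # one century

total_gain = GAIN_PER_DECADE * DECADES          # ~30 points of raw gain
score_then_on_todays_norms = 100 - total_gain   # yesterday's "average" rescored today

print(total_gain)                   # 30
print(score_then_on_todays_norms)   # 70 -- matching the figure quoted above
```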

This bizarre finding—christened the “Flynn effect” by Richard Herrnstein and Charles Murray in The Bell Curve—has since accumulated so much supporting evidence that in 2007 Malcolm Gladwell declared in The New Yorker that “the Flynn effect has moved from theory to fact.” But researchers still cannot agree on why scores are going up. Are we simply getting better at taking tests? Are the tests themselves a poor measure of intelligence? Or do rising IQ scores really mean we are getting smarter?

In spite of his new book’s title, Flynn does not suggest a simple yes or no to this last question. It turns out that the greatest gains have taken place in subtests that measure abstract reasoning and pattern recognition, while subtests that depend more on previous knowledge show the lowest score increases. This imbalance may not reflect an increase in general intelligence, Flynn argues, but a shift in particular habits of mind. The question is not, why are we getting smarter, but the much less catchy, why are we getting better at abstract reasoning and little else?

Flynn starts from a position that accepts the idea of IQ—a measure that supposedly reflects an underlying “general” intelligence. Some researchers have objected to this concept in part because of its circular definition: psychologists measure general intelligence by analyzing correlation patterns among multiple intelligence tests; someone with greater general intelligence will perform better on all these subtests. But although he does not quibble with the premise, Flynn argues that an increase in general intelligence is not the full story when it comes to the past century’s massive score gains.
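
The “correlation patterns” mentioned here are usually summarized by extracting a common factor from a battery of subtest scores. Below is a minimal sketch of that idea using made-up data, with the first principal component of the correlation matrix as a crude stand-in for g (real analyses use proper factor models):

```python
# Minimal sketch: extracting a "general factor" from correlated subtest scores.
# The data are simulated; the first principal component of the correlation
# matrix stands in for g here, which is only an approximation of factor analysis.
import numpy as np

rng = np.random.default_rng(0)
latent_g = rng.normal(size=500)                  # latent "general" ability
noise = rng.normal(size=(500, 4))
subtests = np.column_stack([latent_g * w for w in (0.8, 0.7, 0.6, 0.5)]) + noise

corr = np.corrcoef(subtests, rowvar=False)       # correlations among subtests
eigvals, eigvecs = np.linalg.eigh(corr)          # eigenvalues in ascending order
loadings = eigvecs[:, -1]                        # loadings on the largest component

print(corr.round(2))      # all subtests correlate positively
print(loadings.round(2))  # and all load on the same common factor
```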

If we were really getting smarter overall, scores should be going up across all the subtests, but that is not the case. To understand the score gains, then, we need to set aside issues of general intelligence and instead analyze patterns on the IQ subtests. Doing so opens a window into cognitive trends over time and reveals a far more interesting picture of what may be happening to our minds. This inquiry is at the heart of Flynn’s thirty-year career, and it drives his thoughtful (though occasionally tedious) book.

As Flynn demonstrates, a typical IQ test question on the abstract reasoning “Similarities” subtest might ask “How are dogs and rabbits alike?” While our grandparents were more likely to say something along the lines of “Dogs are used to hunt rabbits,” today we are more likely to say the “correct” answer, “Dogs and rabbits are both mammals.” Our grandparents were more likely to see the world in concrete, utilitarian terms (dogs hunt rabbits), but today we are more likely to think in abstractions (the category of “mammal”). In contrast, the Arithmetic IQ subtest and the Vocabulary IQ subtest—tests that rely on previous knowledge—show hardly any score increase at all.

Why has this happened? The short answer, according to Flynn, is that a convergence of diverse social factors in post-industrial societies—from the emphasis on scientific reasoning in school to the complexity of modern video games—has increasingly demanded abstract thinking. We have begun to see the world, Flynn says, through “scientific spectacles.” To put it even more broadly, the pattern of rising IQ scores does not mean that we are comparing “a worse mind with a better one,” but rather that we are comparing minds that “were adapted to one cognitive environment with those whose minds are adapted to another cognitive environment.” Seen in this light, the Flynn effect does not reflect gains in general intelligence; it reflects a shift to more abstract thinking brought about by a changing social environment. We aren’t getting smarter; we are getting more modern.

Link: Testosterone On My Mind And In My Brain

This is a hormone that has fascinated me. It’s a small molecule that seems to be doing remarkable things. The variation we see in this hormone comes from a number of different sources. One of those sources is genes; many different genes can influence how much testosterone each of us produces, and I just wanted to share with you my fascination with this hormone, because it’s helping us take the science of sex differences one step further, to try to understand not whether there are sex differences, but what are the roots of those sex differences? Where are they springing from? And along the way we’re also hoping that this is going to teach us something about those neuro-developmental conditions like autism, like delayed language development, which seem to disproportionately affect boys more than girls, and potentially help us understand the causes of those conditions.

What I want to talk about tonight is this very specific hormone, testosterone. Our lab has been doing a lot of research to understand what this hormone does and, in particular, to test whether it plays any role in how the mind and the brain develops.

Before I get to that point, I’ll say a few words by way of background about typical sex differences, because that’s the cradle out of which this new research comes. Many of you know that the topic of sex differences in psychology is fraught with controversy. It’s an area where people, for many decades, didn’t really want to enter because of the risks of political incorrectness, and of being misunderstood.

Perhaps of all of the areas in psychology where people do research, the field of sex differences was kind of off limits. It was taboo, and that was partly because people believed that anyone who tried to do research into whether boys and girls, on average, differ, must have some sexist agenda. And so for that reason a lot of scientists just wouldn’t even touch it.  

By 2003, I was beginning to sense that that political climate was changing, that it was a time when people could ask the question — do boys and girls differ? Do men and women differ? — without fear of being accused of some kind of sexist agenda, but in a more open-minded way.

First of all, I started off looking at neuroanatomy, to look at what the neuroscience is telling us about the male and female brain. If you just take groups of girls and groups of boys and, for example, put them into MRI scanners to look at the brain, you do see differences on average. Take the idea that the sexes are identical from the neck upwards, even if they are very clearly different from the neck downwards: the neuroscience is telling us that that is just a myth, that there are differences, on average, between males and females, even in terms of brain volume, the number of connections between nerve cells, and the structure of the brain.

I say this carefully because it’s still a field which is prone to misunderstanding and misinterpretation, but just giving you some of the examples of findings that have come out of the neuroscience of sex differences, you find that the male brain, on average, is about eight percent larger than the female brain. We’re talking about a volumetric difference. It doesn’t necessarily mean anything, but that’s just a finding that’s consistently found. You find that difference from the earliest point you can put babies into the scanner, so some of the studies are at two weeks old in terms of infants.

You also find, if you look at the human brain in terms of postmortem tissue, that the male brain has more connections, more synapses between nerve cells. It’s about a 30 percent difference on average between males and females. These differences are there.

The second big difference between males and females is about how much gray matter and white matter we see in the brain: that males have more gray matter and more white matter on average than the female brain does. White matter, just to be succinct, is mostly about connections between different parts of the brain. The gray matter is more about the cell bodies in the brain. But those differences exist. Then when you probe a bit further, you find that there are differences between the male and female brain in different lobes, the frontal lobe, the temporal lobe, in terms of how much gray and white matter there is.

You can also dissect the brain to look at specific regions. Some of you will have heard of regions like the amygdala, which people think of as a sort of emotion center; it tends to be larger in the male brain than the female brain, again, on average. There’s another region that shows the opposite pattern, larger in females than males: the planum temporale, an area involved in language. These structural differences exist, and I started by looking at these differences in terms of neuroanatomy, just because I thought, at least those are differences that are rooted in biology, and there might be less scope for disagreement about basic differences.

I’ve talked a little bit about neuroanatomy, but in terms of psychology, there are also sex differences that are reported. On average, females are developing empathy at a faster rate than males. I keep using that word ‘on average’ because none of these findings apply to all females or all males. You simply see differences emerge when you compare groups of males and females. Empathy seems to be developing faster in girls and in contrast, in boys there seems to be a stronger drive to systemize. I use that word ‘systemizing’, which is all about trying to figure out how systems work, becoming fascinated with systems. And systems could take a variety of different forms. It could be a mechanical system, like a computer; it could be a natural system, like the weather; it could be an abstract system, like mathematics; but boys seem to have a stronger interest in systematic information. I was contrasting these two very different psychological processes, empathy and systemizing. And that’s about as far as I went, and that was now some 11 years ago.

Since then my lab has wanted to try to understand where these sex differences come from, and now I’m fast-forwarding to tell you about the work that we’re doing on testosterone. I’m very interested in this molecule, partly because males produce more of it than females, and partly because there’s a long tradition of animal research which shows that this hormone may masculinize the brain, but there’s very little work on this hormone in humans.

(Source: sunrec)

Link: I often have to cut into the brain

New Voices highlights the best emerging talents on granta.com. The latest in the series is Henry Marsh, a brain surgeon turned memoirist, whose piece here describes an operation on the deeply buried pineal gland.

I often have to cut into the brain and it is something I hate doing. With a pair of short-wave diathermy forceps I coagulate a few millimetres of the brain’s surface, turning the living, glittering pia arachnoid – the transparent membrane that covers the brain – along with its minute and elegant blood vessels, into an ugly scab. With a pair of microscopic scissors I then cut the blood vessels and dig downwards with a fine sucker. I look down the operating microscope, feeling my way through the soft white substance of the brain, trying to find the tumour. The idea that I am cutting and pushing through thought itself, that memories, dreams and reflections should have the consistency of soft white jelly, is simply too strange to understand and all I can see in front of me is matter. Nevertheless, I know that if I stray into the wrong area, into what neurosurgeons call eloquent brain, I will be faced with a damaged and disabled patient afterwards. The brain does not come with helpful labels saying ‘Cut here’ or ‘Don’t cut there’. Eloquent brain looks no different from any other area of the brain, so when I go round to the Recovery Ward after the operation to see what I have achieved, I am always anxious.

There are various ways in which the risk of doing damage can be reduced. There is a form of GPS for brain surgery called Computer Navigation where, instead of satellites orbiting the Earth, there are infrared cameras around the patient’s head which show the surgeon on a computer screen where his instruments are on the patient’s brain scan. You can operate with the patient awake under local anaesthetic: the eloquent areas of the brain can then be identified by stimulating the brain with an electrode and by giving the patient simple tasks to perform so that one can see if one is causing any damage as the operation proceeds. And then there is skill and experience and knowing when to stop. Quite often one must decide that it is better not to start in the first place and declare the tumour inoperable. Despite these methods, however, much still depends on luck, both good and bad. As I become more and more experienced, it seems that luck becomes ever more important.

I had a patient who had a tumour of the pineal gland. The dualist philosopher Descartes, who argued that mind and brain are entirely separate entities, placed the human soul in the pineal gland. It was here, he said, that the material brain in some magical and mysterious way communicated with the mind and with the immaterial soul. I wonder what he would have said if he could have seen my patients looking at their own brains on a video monitor, as some of them do when I operate under local anaesthetic.

Pineal tumours are very rare. They can be benign and they can be malignant. The benign ones do not necessarily need treatment. The malignant ones are treated with radiotherapy and chemotherapy but can prove fatal nevertheless. In the past they were considered to be inoperable but with modern microscopic neurosurgery this is no longer the case: it is usually now considered necessary to operate at least to obtain a biopsy – to remove a small part of the tumour for a precise diagnosis of the type so that you can then decide how best to treat it. The biopsy result will tell you whether to remove all of the tumour or whether to leave most of it in place, and whether the patient needs radiotherapy and chemotherapy. Since the pineal is buried deep in the middle of the brain the operation is, as surgeons say, a technical challenge; neurosurgeons look with awe and excitement at brain scans showing pineal tumours, like mountaineers looking up at a great peak that they hope to climb.

To make matters worse, this particular patient – a very fit and athletic man in his thirties who had developed severe headaches as the tumour obstructed the normal circulation of cerebro-spinal fluid around his brain – had found it very hard to accept that he had a life-threatening illness and that his life was now out of his control. I had had many anxious conversations and phone calls with him over the days before the operation. I explained that the risks of the surgery, which included death or a major stroke, were ultimately less than the risks of not operating. He laboriously typed everything I said into his smartphone, as if taking down the long words – obstructive hydrocephalus, endoscopic ventriculostomy, pineocytoma, pineoblastoma – would somehow put him back in charge and save him. Anxiety is contagious – it is one of the reasons surgeons must distance themselves from their patients – and his anxiety, combined with my feeling of profound failure about an operation I had carried out a week earlier, meant that I faced the prospect of operating upon him with dread.

I had seen him the night before the operation. When I see my patients the night before surgery I try not to dwell on the risks of the operation ahead, which I will already have discussed in detail at an earlier meeting. His wife was sitting beside him, looking quite sick with fear.

Link: Amazing Memory

Scientists are taking a closer look at the extremely rare people who remember everything from their pasts. And yes, their brains are different.

At last count, at least 33 people in the world could tell you what they ate for breakfast, lunch and dinner, on February 20, 1998. Or who they talked to on October 28, 1986. Pick any date and they can pull from their memory the most prosaic details of that thin slice of their personal history.

Others, no doubt, have this remarkable ability, but so far only those 33 have been confirmed by scientific research. The most famous is probably actress Marilu Henner, who showed off her stunning recall of autobiographical minutiae on “60 Minutes” a few years ago.

What makes this condition, known as hyperthymesia, so fascinating is that it’s so selective. These are not savants who can rattle off long strings of numbers, Rainman-style, or effortlessly retrieve tidbits from a deep vault of historical facts. In fact, they generally perform no better on standard memory tests than the rest of us.

Nope, only in the recollection of the days of their lives are they exceptional.

How does science explain it? Well, the research is still a bit limited, but recently scientists at the University of California, Irvine, published a report on 11 people with superior autobiographical memory. They found, not surprisingly, that their brains are different. They had stronger “white matter” connections between their mid and forebrains, when compared with the control subjects. Also, the region of the brain often associated with Obsessive-Compulsive Disorder (OCD) was larger than normal.

In line with that discovery, the researchers determined that the study’s subjects were more likely than usual to have OCD tendencies. Many were collectors–of magazines, shoes, videos, stamps, postcards–the type of collectors who keep intricately detailed catalogs of their prized possessions.

The scientists are wary, as yet, of drawing any conclusions. They don’t know how much, or even if that behavior is directly related to a person’s autobiographical memory. But they’re anxious to see where this leads and what it might teach them about how memory works.

Is it all about how brain structures communicate? Is it genetic? Is it molecular? To follow the clues, they’re analyzing at least another three dozen people who also seem to have the uncanny ability to retrieve their pasts in precisely-drawn scenes.

Link: The Cambridge Declaration on Consciousness

A group of leading neuroscientists has used a conference at Cambridge University to make an official declaration recognising consciousness in animals. The declaration was made at the Francis Crick Memorial Conference and signed by some of the leading lights in consciousness research, including Christof Koch and David Edelman. Check out the videos of the conference at http://fcmconference.org and the full text of The Cambridge Declaration on Consciousness, which concludes:

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

Also, a very good post on Earth in Transition, by Michael Mountain, who says:

“It’s a really important statement that will be used as evidence by those who are pushing for scientists to develop a more humane relationship with animals. It’s harder, for example, to justify experiments on nonhumans when you know that they are conscious beings and not just biological machines. Some of the conclusions reached in this declaration are the product of scientists who, to this day, still conduct experiments on animals in captivity, including dolphins, who are among the most intelligent species on Earth. Their own declaration will now be used as evidence that it’s time to stop using these animals in captivity and start finding new ways of making a living.”

and an article in Psychology Today: Scientists Finally Conclude Nonhuman Animals Are Conscious Beings

It’s said that repetition is boring conversation, but there’s now a wealth of scientific data that makes skepticism, and surely agnosticism, anti-science and harmful to animals. Now, at last, the prestigious Cambridge group shows this to be so. Bravo for them! So, let’s all work together to use this information to stop the abuse of millions upon millions of conscious animals in the name of science, education, food, amusement and entertainment, and clothing. We really owe it to them to use what we know on their behalf and to factor compassion and empathy into our treatment of these amazing beings.

Link: How I Hacked My Brain With Adderall

Evolution is a nice, big idea. It connotes the glacial pace of an unmeditated act unfolding upon species, concepts, and ecosystems. It certainly doesn’t usually get branded as a feeling. But a couple months ago I felt this thing. Maybe a little like what a mommy feels when her fetus kicks the wall crossed with how the baby feels when it gets its pre-K diploma, and the best word I can come up with for it is evolution. Not the glacial kind, but the real-time, Matrix-flavored kind. I was too busy barreling through the wicked pipe of a 30-milligram Adderall to think about it much when it happened, though. Half an hour into my sunrise dose, I logged into Lynda.com, the extraordinarily put-together training site used by corporate operations to keep their employees up on hot software trends. As an avid Monday Night Football chyron fan, I had promised myself for years that I would learn After Effects as soon as I had the free time; the chemical wave pushed me through an especially potent laziness that has always kept me from becoming the motion graphics expert I knew I wanted to be.

There I sat, glued to my chair, watching the instructional videos on my laptop, guzzling Coke Zero, and practicing in the software on my external monitor. I optimized my posture over the course of the first few hours, ironing out repetitive stress pain as it came along, taking smoke breaks between every chapter: “Getting Started With After Effects,” “Learning to Animate,” “Precomposing and Nesting Compositions.” As the sun dipped below the horizon, I found myself at chapter 19: “Rendering and Compression,” and finally, at dusk, Chapter 21: “Conclusion.” My virtual instructor Chad Perkins wished me well in my motion graphics career and I realized that I was falling in love with him: his nimble pick-whipping and soothing color correction subtly opening the door to my racing heart. I became one with him, and, as a result, I internalized After Effects. As the credits rolled, Neo flashed into my head. “I know After Effects,” he said, opening his eyes and staring up at Morpheus through my corneas. Less than a month later, I had landed my first job producing visuals for a national tour for the popular electronic jam band Sound Tribe Sector 9. Although they weren’t scrolling inches below Mike Ditka’s moustache, my motion graphics were now going to be seen by hundreds of thousands of people on a 30-foot LED pyramid, and my ESPN debut felt so close I could smile at it. This was life on speed; this was me jacking into the Matrix; this was a repeatable equation: Lynda + Adderall = Knowledge + Skills. Something powerful came to life here, something new and useful and limitless that had incubated inside me for 29 years, but wouldn’t come out without the help of a shitload of tiny pink pills.

…Adderall’s active ingredient is amphetamine. Like most useful drugs, amphetamine has been used in one form or another for centuries (popular references include the hospital scene from Downton Abbey Episode One, Dexys Midnight Runners, and, looking ahead, the Neurachem of Richard Morgan’s Altered Carbon). Just as Merck used to manufacture the best cocaine on the entire planet, Shire now produces the most reliable amphetamine money can buy. And it’s good shit, so good that it sits with opium and cocaine on the USA’s list of Schedule II controlled substances.

Getting a prescription for Adderall is not hard. Nearly every major city has a weekly alternative news and arts magazine that is half-full of ads for escorts and pain management clinics. You’ve probably seen them: “CHRONIC BACK PAIN? GLAUCOMA? ADULT ADHD? WE CAN HELP.” I called one of these clinics, located in a zip code painfully familiar to all fans of early-90s teen sitcoms, and scheduled a consultation about my presumptive case of Attention Deficit Hyperactivity Disorder. After answering a few multiple-choice questions about my attention span (the “right” answers were knee-slappingly clear), undergoing the briefest physical I’ve ever encountered, and handing over two hundred dollars in cash, I walked out the door with 900 milligrams of generic Adderall.

The super-jolt of energy novice users experience mellows after a few days of use and changes character dramatically. It does become a very sufficient coffee replacement: a little ritual combined with chemical stimulation that motivates you to get out of bed. But coming up daily on Adderall has less to do with a caffeinated sensation than it does with becoming a detail-oriented post-human, a machine following self-imposed routines with little regard for anything outside the routine’s scope. It turns out that my Adderall self has a knack for accounting, spreadsheets, and administrative tasks that my unstimulated self would normally shy away from: an inbox-zeroing robot bent on eking out every last ounce of productivity my heightened senses could spit out. Keeping up with the moving parts of being self-employed, as I am, is easy on Adderall. It feels almost robotic, as if I’m hiring an assistant to take care of the books. But an Adderall prescription is much cheaper than hiring a competent assistant, and I always know I can trust myself (even if it is a different version of myself) to keep it honest when it comes to my bottom line.

There is an issue of time here as well. As someone in the content generation industry, my normal self’s most valuable asset is creativity: producing product that others will pay, in one way or another, to consume. Transforming into an Administrative Jekyll for a certain amount of time every day limits the amount of time my Creative Hyde can come up with content to market and sell. Luckily, amphetamines have that problem tackled as well: when you’re using them, you don’t have to sleep… at all. That frees up quite a few hours of the day. Amphetamine’s extreme appetite suppressant qualities will also save time you used to spend going to the grocery store. As someone with a penchant for eating everything that’s in my field of vision (often to help me avoid doing work), this was all fine with me: I waved goodbye to expensive lunches (well, to lunch in general, actually) and to those peanut butter and Cheetos-induced pounds that normally hang out around my waistline.

Link: A Physicist on Everest: How Body and Mind Break Down at Elevation

From To The Last Breath by Francis Slakey.

At sea level, the brain tends to have a strong grip on reality. The world presents us with a situation or an image, and our synapses fire, neurons transmit, and signals race through the internal wiring of our skull creating an understanding. When the system works smoothly, we sense things as they are: hot is hot, cold is cold, and we only see two people on a mountain when there are only two people on a mountain. We don’t hear dead people speak.

Oxygen is the lubricant that keeps the brain operating as it should; blood is the vehicle that transports the oxygen around to the various needy organs. Every organ wants a share of the blood, but the body prioritizes to make sure that the most critical needs are met first. In fact, without our even realizing it, the body operates under its own Golden Rule: keep oxygen flowing to the brain, everything else is bonus. Consequently, as a climber goes up in altitude into thinner and thinner air, the body monitors oxygen needs, reroutes blood flow, and flips system controls like a train switchman.

First, your body automatically ramps up your breathing rate as you go up in altitude. By the time a climber reaches Base Camp on Everest, at eighteen thousand feet, the respiratory rate has roughly tripled. If you were taking a breath every ten seconds at sea level, then you’ll be sucking in a breath every three seconds at Base. The impact on the metabolism is striking. If all you did all day long was sit in your tent at Base Camp doing absolutely nothing but breathe, you would still burn about three thousand calories a day. Forget about the Atkins Diet or spending hours on the elliptical machine—planting your butt at Everest Base Camp is the world’s most effective weight loss program.
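
Taking the passage’s figures at face value, the change is easy to quantify. A rough sketch (the breathing intervals are the ones given above; everything else is simple arithmetic):

```python
# Rough numbers from the passage above: respiration roughly triples by Base Camp.
SEA_LEVEL_INTERVAL_S = 10   # seconds per breath at sea level
BASE_CAMP_INTERVAL_S = 3    # seconds per breath at ~18,000 ft

breaths_per_day_sea = 24 * 3600 / SEA_LEVEL_INTERVAL_S    # ~8,600 breaths
breaths_per_day_base = 24 * 3600 / BASE_CAMP_INTERVAL_S   # ~28,800 breaths

print(round(breaths_per_day_sea), round(breaths_per_day_base))
# That extra respiratory work is part of why simply sitting in a tent at Base
# Camp can burn on the order of 3,000 calories a day, as the passage notes.
```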

Thankfully, our body carefully monitors our breathing rate, even when we don’t. If your body relaxes too much while you’re sleeping and your breathing rate slips too low to satisfy the Golden Rule, then your body will snap you awake with a sharp spasmic gasp. Base Camp echoes with the sound of sudden rasping inhales throughout the night.

As a climber goes higher, the body begins to take more drastic measures to compensate for the thinning air. To preserve as much oxygen as possible for the brain, blood flow to the extremities becomes more limited. Fingers and toes start to go numb, making them dangerously susceptible to frostbite.

As a climber goes up even higher in altitude, into the so-called death zone, the dangerously thin air above 26,000 feet, there is so little oxygen available that the body makes a desperate decision: it cuts off the digestive system. The body can no longer afford to direct oxygen to the stomach to help digest food because that would divert what precious little oxygen is available away from the brain. The body will retch back up anything the climber tries to eat, even if it’s as small as an M&M.

The consequence of shutting down the digestive system is, of course, that the body can no longer take in any calories. Lacking an external fuel source, the body has no choice but to turn on itself.  It now fuels itself by burning its own muscle—the very muscle needed to climb the mountain—at a rate of about two pounds per hour.

Link: The Brain on Trial

Advances in brain science are calling into question the volition behind many criminal acts. A leading neuroscientist describes how the foundations of our criminal-justice system are beginning to crumble, and proposes a new way forward for law and order.

The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

As our understanding of the human brain improves, juries are increasingly challenged with these sorts of questions. When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault?

I submit that this is the wrong question to be asking. The choices we make are inseparably yoked to our neural circuitry, and therefore we have no meaningful way to tease the two apart. The more we learn, the more the seemingly simple concept of blameworthiness becomes complicated, and the more the foundations of our legal system are strained.

If I seem to be heading in an uncomfortable direction—toward letting criminals off the hook—please read on, because I’m going to show the logic of a new argument, piece by piece. The upshot is that we can build a legal system more deeply informed by science, in which we will continue to take criminals off the streets, but we will customize sentencing, leverage new opportunities for rehabilitation, and structure better incentives for good behavior. Discoveries in neuroscience suggest a new way forward for law and order—one that will lead to a more cost-effective, humane, and flexible system than the one we have today. When modern brain science is laid out clearly, it is difficult to justify how our legal system can continue to function without taking what we’ve learned into account.


Link: The Crayola-fication of the World

How we gave colors names, and it messed with our brains.

In Japan, people often refer to traffic lights as being blue in color. And this is a bit odd, because the traffic signal indicating ‘go’ in Japan is just as green as it is anywhere else in the world. So why is the color getting lost in translation? This visual conundrum has its roots in the history of language.

Blue and green are similar in hue. They sit next to each other in a rainbow, which means that, to our eyes, light can blend smoothly from blue to green or vice-versa, without going past any other color in between. Before the modern period, Japanese had just one word, Ao, for both blue and green. The wall that divides these colors hadn’t been erected as yet. As the language evolved, in the Heian period around the year 1000, something interesting happened. A new word popped into being – midori – and it described a sort of greenish end of blue. Midori was a shade of ao; it wasn’t really a new color in its own right.

One of the first fences in this color continuum came from an unlikely place – crayons. In 1917, the first crayons were imported into Japan, and they brought with them a way of dividing a seamless visual spread into neat, discrete chunks. There were different crayons for green (midori) and blue (ao), and children started to adopt these names. But the real change came during the Allied occupation of Japan after World War II, when new educational material started to circulate. In 1951, teaching guidelines for first grade teachers distinguished blue from green, and the word midori was shoehorned to fit this new purpose.

In modern Japanese, midori is the word for green, as distinct from blue. This divorce of blue and green was not without its scars. There are clues that remain in the language, that bear witness to this awkward separation. For example, in many languages the word for vegetable is synonymous with green (sabzi in Urdu literally means green-ness, and in English we say ‘eat your greens’). But in Japanese, vegetables are ao-mono, literally blue things. Green apples? They’re blue too. As are the first leaves of spring, if you go by their Japanese name. In English, the term green is sometimes used to describe a novice, someone inexperienced. In Japanese, they’re ao-kusai, literally they ‘smell of blue’. It’s as if the borders that separate colors follow a slightly different route in Japan.

And it’s not just Japanese. There are plenty of other languages that blur the lines between what we call blue and green. Many languages, such as Vietnamese, don’t distinguish between the two colors at all. In the Thai language, khiaw means green except if it refers to the sky or the sea, in which case it’s blue. The Korean word pureuda could refer to either blue or green, and the same goes for the Chinese word qīng. It’s not just East Asian languages either; this is something you see across language families. In fact, Radiolab had a fascinating recent episode on color where they talked about how there was no blue in the original Hebrew Bible, nor in all of Homer’s Iliad or Odyssey!

I find this fascinating, because it highlights a powerful idea about how we might see the world. After all, what really is a color? Just like the crayons, we’re taking something that has no natural boundaries – the frequencies of visible light – and dividing it into convenient packages that we give a name.
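
The “crayon” idea can be made concrete: a color name is just a bin imposed on a continuous range of wavelengths. Here is a toy sketch; the boundaries below are arbitrary round numbers chosen only for illustration, which is exactly the point the passage makes:

```python
# Toy illustration: naming colors means binning a continuum of wavelengths.
# The fence at 495 nm between "green" and "blue" is an arbitrary convention --
# different languages (and different eras of Japanese) draw it differently.
def name_color(wavelength_nm: float) -> str:
    if 495 <= wavelength_nm < 570:
        return "green"          # roughly "midori" in modern Japanese
    if 450 <= wavelength_nm < 495:
        return "blue"           # roughly "ao"
    return "some other color"

print(name_color(510))  # green
print(name_color(480))  # blue -- move the 495 nm fence and the label changes
```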

Link: Nicholas Carr on Information and Contemplative Thought

The European: Is that because of the technology’s omnipresence or rather the way we engage with it? You have described how the immersion of browsing the web can’t be compared to that of reading a book.
Carr: If you watch a person using the net, you see a kind of immersion: Often they are very oblivious to what is going on around them. But it is a very different kind of attentiveness than reading a book. In the case of a book, the technology of the printed page focuses our attention and encourages a linear type of thinking. In contrast, the internet seizes our attention only to scatter it. We are immersed because there’s a constant barrage of stimuli coming at us and we seem to be very much seduced by that kind of constantly changing pattern of visual and auditory stimuli. When we become immersed in our gadgets, we are immersed in a series of distractions rather than a sustained, focused type of thinking.

The European: And yet one can fall down the rabbit hole of Wikipedia; spending hours going from one article to the other, clicking each link that seems interesting.
Carr: It is important to realize that it is no longer just hyperlinks: You have to think of all aspects of using the internet. There are messages coming at us through email, instant messenger, SMS, tweets etc. We are distracted by everything on the page, the various windows, the many applications running. You have to see the entire picture of how we are being stimulated. If you compare that to the placidity of a printed page, it doesn’t take long to notice that the experience of taking information from a printed page is not only different but almost the opposite from taking in information from a network-connected screen. With a page, you are shielded from distraction. We underestimate how the page encourages focussed thinking – which I don’t think is normal for human beings – whereas the screen indulges our desire to be constantly distracted.

The European: Recently, there’s been a rise in the popularity of software tools which simplify the online experience – such as Instapaper or fullscreen apps – all of which leverage the effect you described by emulating the printed page or the typewriter. They block out distractions and rather let the user stare at the plain text or the blinking cursor.
Carr: I am encouraged by services such as Instapaper, Readability or Freedom – applications that are designed to make us more attentive when using the internet. It is a good sign because it shows that some people are concerned about this and sense that they are no longer in control of their attention. Of course there’s an irony in looking for solutions in the same technology that keeps us distracted. The question is: How broadly are these applications being used? I don’t yet see them moving into the mainstream of people’s online experience. There’s a tension between tools that encourage attentive thought and the reading of longer articles, and the cultural trend that everything becomes a constant stream of little bits of information through which we make sense of the world. So far, the stream metaphor is winning, but I hope that the tools for attentiveness become more broadly used. So far, we don’t really know how many people use them or in what way they do.

(Source: sunrec)

Link: A Shot to the Head

A couple of online articles have discussed whether you would be conscious of being shot in the head, with the general conclusion that it is unlikely because the damage happens faster than the brain can register a conscious sensation.

While this may be true in some instances, it ignores the fact that there are many ways of taking a bullet to the head.

This is studied by a field called wound ballistics and, unsurprisingly when you think about it, the wound ballistics of the head are somewhat special.

Firstly, if you get shot in the head, in this day and age, you have, on average, about a 50/50 chance of surviving. In other words, it’s important to note that not everyone dies from their injuries.

But it’s also important to note that not every bullet wound will necessarily damage brain areas essential for consciousness.

The image on the top left of this post charts the position of fatal gunshot wounds recorded in soldiers and was published in a recent study on combat fatalities.

For many reasons, including body armour and confrontation type, head wounds to soldiers are not necessarily a good guide to how these will pan out in civilians, but you can see that there are many possibilities with regard to which brain areas could be affected.

In fact, you can see differences in the effect of gunshots to the head more directly from the data from Glasgow Coma Scale (GCS) ratings. A sizeable minority are conscious when they first see someone from the trauma team.

It’s also worth noting that deaths are not necessarily due to brain damage per se; blood loss is also a key factor.

An average male has about 6 litres of blood and his internal carotid artery clears about a quarter of a litre per minute at rest to supply the brain. When in a stressful situation, like, for example, being shot, that output can double.

If we need to lose about 20% of our blood to lose consciousness, our notional male could black out in just over two minutes just through having damage to his carotid. However, that’s two minutes of waiting if he’s not been knocked unconscious by the impact.
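
Putting the passage’s numbers together, here is a back-of-the-envelope sketch using only the figures quoted above:

```python
# Back-of-the-envelope check of the figures in the passage above.
TOTAL_BLOOD_L = 6.0           # average male blood volume, litres
CAROTID_FLOW_REST = 0.25      # litres per minute through the carotid at rest
CAROTID_FLOW_STRESS = 2 * CAROTID_FLOW_REST   # that flow can roughly double
LOSS_FRACTION_BLACKOUT = 0.20                 # ~20% blood loss to lose consciousness

blood_to_lose = TOTAL_BLOOD_L * LOSS_FRACTION_BLACKOUT   # 1.2 litres
minutes_to_blackout = blood_to_lose / CAROTID_FLOW_STRESS

print(minutes_to_blackout)   # ~2.4 -- "just over two minutes"
```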

But if we’re thinking about brain damage, the extent depends on a whole range of ballistic factors – the velocity, shape, size and make-up of the bullet being key.

Link: Neuroscience: The Mind Reader

Adrian Owen still gets animated when he talks about patient 23. The patient was only 24 years old when his life was devastated by a car accident. Alive but unresponsive, he had been languishing in what neurologists refer to as a vegetative state for five years, when Owen, a neuroscientist then at the University of Cambridge, UK, and his colleagues at the University of Liège in Belgium, put him into a functional magnetic resonance imaging (fMRI) machine and started asking him questions.

Incredibly, he provided answers.

A challenging look at the definition of consciousness in those confined to a vegetative state. fMRI studies, for all their shortcomings, have recently challenged long-held notions about brain activity in otherwise unresponsive patients, but what does that mean about their “consciousness”? At what threshold does brain activity become “life”? There’s lots of controversy about how valid Owen’s studies are, but the story of Patient 23 will certainly make you think twice. A wonderful read.

(Source: jtotheizzoe, via experialist)