Mind Hacks

Year: 2004
Publisher: O'Reilly Media
Language: English
Pages: 396
ISBN-13: 9780596007799


Think for a moment about all that's happening while you read this text: how your eyes move to
center themselves on the words, how you idly scratch your arm while you're thinking, the
attention-grabbing movements, noises, and other distractions you're filtering out. How does all
this work? As one brain speaking to another, here's a secret: it isn't easy.
The brain is a fearsomely complex information-processing environment. Take the processing
involved in seeing, for instance. One of the tasks involved in seeing is detecting the motion in
every tiny portion of vision, in such and such a direction and at such and such a speed, and
representing that in the brain. But another task is seeing a face in the light that falls on the retina,
figuring out what emotion it's showing, and representing that concept in the brain, somehow, too.
To an extent, the brain is modular, so that should give us a way in, but it's not that clean-cut. The
processing subsystems of the brain are layered on top of one another, but their functionality
mingles rather than being organized in a distinct progression. Often the same task is performed in
many different places, in many different ways. It's not a clear mechanical system like clockwork
or like a computer program; giving the same input won't always give the same output. Automatic
and voluntary actions are highly meshed, often inextricable. Parts of vision that appear fully
isolated from conscious experience suddenly report different results if conscious expectations change.
The information transforms in the brain are made yet more complicated by the constraints of
history, computation, and architecture. Development over evolutionary time has made it hard for
the brain to backtrack; the structure of the brain must reflect its growth and repurposing.
Computation has to occur as fast as possible (we're talking subsecond responses), but there are limits
on the speed at which information can travel between physical parts of the brain. These are all
constraints to be worked with.
All of which leaves us with one question: how can we possibly start to understand what's going on?
Cognitive neuroscience is the study of the brain biology behind our mental functions. It is a
collection of methods (like brain scanning and computational modeling) combined with a way of
looking at psychological phenomena and discovering where, why, and how the brain makes them
happen. It is neither classic neuroscience (a low-level tour of the biology of the brain) nor is it what
many people think of as psychology (a metaphorical exploration of human inner life); rather, it's a
view of the mind that looks at the fundamental elements and rules, acting moment by moment,
that make up conscious experience and action.
By focusing both on the biological substrate and on the high-level phenomenon of
consciousness, we can pick apart the knot of the brain. This picking apart is why you don't need
to be a cognitive neuroscientist to reap the fruit of the field.
This book is a collection of probes into the moment-by-moment workings of the brain. It's not a
textbook (more of a buffet, really). Each hack is one probe into the operation of the brain, one
small demonstration. By seeing how the brain responds, we pick up traces of the structures
present and the design decisions made, learning a little bit more about how the brain is put
together.
Simultaneously we've tried to show how there isn't a separation between the voluntary "me"
feeling of the mind and the automatic nature of the brain; the division between voluntary and
automatic behavior is more of an ebb and flow, and we wield our cognitive abilities with
unconscious flourishes and deliberate movements much as we wield, say, our hands, or a pen, or
a lathe.
In a sense, we're trying to understand the capabilities that underpin the mind. Say we understand
to what extent the holes in our vision are continually covered up or what sounds and lights
will, without a doubt, grab our attention (and also what won't): we'll be able to design better tools,
and create better interfaces that work with the grain of our mental architecture and not against it.
We'll be able to understand ourselves a little better; know a little more, in a very real sense, about
what makes us tick.
Plus it's fun. That's the key. Cognitive neuroscience is a fairly new discipline. The journey into
the brain is newly available and an enjoyable ride. The effects we'll see are real enough, but the
explanations of why they occur are still being debated. We're taking part in the mapping of this
new territory just by playing along. Over the course of writing this book, we've spent time
noticing our own attention systems darting about the room, seen ourselves catching gestures
from people we've been talking to, and played games with the color of traffic and peripheral
vision. That's the fun bit. But we've also been gripped by the arguments in the scientific literature
and have had new insights into facets of our everyday lives, such as why some web sites are
annoying and certain others are particularly well-made. If, through this book, we've managed to
make that world a little more accessible too, then we've succeeded. And when you've had a look
around and found new ways to apply these ideas and, yes, new topics we've not touched on,
please do let us know. We're here for the ride too.

Why Mind Hacks?

The term "hacking" has a bad reputation in the media. They use it to refer to those who break
into systems or wreak havoc with computers as their weapons. Among people who write code,
though, the term "hack" refers to a "quick-and-dirty" solution to a problem, or a clever way to get
something done. And the term "hacker" is taken very much as a compliment, referring to
someone as being "creative," having the technical chops to get things done. The Hacks series is
an attempt to reclaim the word, document the good ways people are hacking, and pass the hacker
ethic of creative participation on to the uninitiated. Seeing how others approach systems and
problems is often the quickest way to learn about a new technology.
The brain, like all hidden systems, is prime territory for curious hackers. Thanks to relatively
recent developments in cognitive neuroscience, we're able to satisfy a little of that curiosity,
making educated explanations for psychological effects rather than just pointing those effects
out, throwing light on the internal workings of the brain.
Some of the hacks in this collection document the neat tricks the brain has used to get the job
done. Looking at the brain from the outside like this, it's hard not to be impressed at the way it
works. Other hacks point to quirks of our own minds that we can exploit in unexpected ways,
and that's all part of learning our way round the wrinkles in this newly exposed technology.
Mind Hacks is for people who want to know a bit more about what's going on inside their own heads
and for people who are going to assemble the hacks in new ways, playing with the interface
between ourselves and the world. It's wonderfully easy to get involved. We've all got brains,
after all.

How to Use This Book

You can read this book from cover to cover if you like, but each hack stands on its own, so feel
free to browse and jump to the different sections that interest you most. If there's a prerequisite
you need to know, a cross-reference will guide you to the right hack.
We've tried out all the demonstrations in this book, so we know that for most people they work
just as we say they do; these are real phenomena. Indeed, some are surprising, and we didn't
believe they'd work until we tried them ourselves. The explanations are summaries of the current
state of knowledge, often snapshots of debates in progress. Keep an open mind about these.
There's always the chance future research will cause us to revise our understanding.
Often, because there is so much research on each topic, we have included pointers to web sites,
books, and academic papers where you can find out more. Follow these up. They're fantastic places
to explore the wider story behind each hack, and they will take you to interesting places.
With regard to academic papers, these are the bedrock of scientific knowledge. They can be hard to
get and hard to understand, but we have included references to them because they are the place to go if
you really need to get to the bottom of a story (and to find the cutting edge). What's more, for
many scientists, evidence doesn't really exist until it has been published in a scientific journal.
For this to happen, the study has to be reviewed by other scientists working in the field, in a
system called peer review. Although this system has biases, and mistakes are made, it is this that
makes science a collective endeavor and provides a certain guarantee of quality.
The way journal articles are cited is quite precise, and in this book we've followed the American
Psychological Association reference style. Each reference looks something like this:

Lettvin, J., Maturana, H., McCulloch, W., & Pitts, W. (1959). What the frog's eye tells
the frog's brain. Proceedings of the IRE, 47(11), 1940-1951.

Before the year of publication (which is in parentheses), the authors are listed. After the year is
the title of the paper, followed by the journal in which you'll find it, in italics. The volume (in
italics) and then the issue number (in parentheses) follow. Page numbers come last. (There's a
crib sheet online.) One convention you'll often see in the text is "et al." after the main author of
a paper. This is shorthand for "and others."
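As a rough illustration, the reference layout just described is regular enough to pull apart with a regular expression. This sketch assumes the exact "Authors (Year). Title. Journal, Volume(Issue), Pages." shape shown above and will not handle the many variations found in real reference lists:

```python
import re

# Parse an APA-style journal reference of the form described above.
# This pattern is illustrative only; real references vary widely.
APA = re.compile(
    r"(?P<authors>.+?)\s+\((?P<year>\d{4})\)\.\s+"   # Authors (Year).
    r"(?P<title>.+?)\.\s+"                            # Title.
    r"(?P<journal>.+?),\s+"                           # Journal,
    r"(?P<volume>\d+)\((?P<issue>\d+)\),\s+"          # Volume(Issue),
    r"(?P<pages>[\d-]+)\."                            # Pages.
)

ref = ("Lettvin, J., Maturana, H., McCulloch, W., & Pitts, W. (1959). "
       "What the frog's eye tells the frog's brain. "
       "Proceedings of the IRE, 47(11), 1940-1951.")

m = APA.match(ref)
print(m.group("year"))     # 1959
print(m.group("journal"))  # Proceedings of the IRE
```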
Many, but not all, journals have an electronic edition, and some you can access for free. Most are
subscription-based, although some publishers will let you pay per paper. If you go to a library,
generally a university library, make sure it not only subscribes to the journal you want, but also
has the year in which the paper you're after was published.

If you're lucky, the paper will also be reprinted online. This is often the case with classic papers
and with recent papers, which the authors may have put on their publications page. A good way to
find papers online in PDF format is to search Google using a query like:
"What the Frog's Eye Tells the Frog's Brain" filetype:pdf

Alternatively, search for a researcher's name followed by the word "publications" to find papers,
demonstrations, and as-yet-unpublished research, a gold mine if you want to learn more about a
particular topic.
Recommended Reading

If you're interested in getting a general overview, rather than chasing the details of a particular
story, you might like to start by reading a book on the subject. Here are some of our favorite
books on our own pet topics, all of which make specialist material accessible for the rest of us:






Descartes' Baby: How the Science of Child Development Explains What Makes Us
Human by Paul Bloom (2004). Lively speculation from a leading researcher.
Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence by
Andy Clark (2003). Clark asks whether intelligence is bounded by our skulls or is part of
the tools and technologies we use.
Symbolic Species: The Co-Evolution of Language and the Brain by Terrence Deacon
(1997). A dizzying, provocative integration of information across different disciplines.
Consciousness Explained by Daniel Dennett (1991). Psychologically informed
philosophy. Consciousness isn't explained by the end, but it's a fun ride along the way.
Eye and Brain: The Psychology of Seeing by Richard Gregory (1966). Erudite and good-humored, a classic introduction to vision.
The Nurture Assumption: Why Children Turn Out the Way They Do by Judith Rich
Harris (1998). The Evolutionary Psychology of child development, a great read that
challenges the assumption that parents are the most important influence in a child's life.
See also the accompanying web site.
Mind Wide Open: Your Brain and the Neuroscience of Everyday Life by Steven Johnson
(2004). How the latest developments in brain science and technology inform our
individual self-understanding.
The Language Instinct: How the Mind Creates Language by Steven Pinker (1995).
Compelling argument for our innate language ability and brain structure being reflected
in each other.
Phantoms in the Brain: Probing the Mysteries of the Human Mind by V. S.
Ramachandran & Sandra Blakeslee (1998). Tales of what brain injury can tell us about
the way the brain works.
The Man Who Mistook His Wife for a Hat and Other Clinical Tales by Oliver Sacks
(1995). Informative and humane anecdotes about patients with different kinds of brain damage.

If you're looking for something a little deeper, we recommend you try:



The Oxford Companion to the Mind, edited by Richard Gregory (1999). Authoritative
and entertaining collection of essays on all aspects of the brain.
Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter (1979). The classic
exploration of minds, machines, and the mathematics of self-reference. The back of my
copy rightly says "a workout in the finest mental gymnasium in town."
How to Think Straight About Psychology by Keith Stanovich (1997). How to apply
critical thinking to psychological topics.

Got a Hack?

To explore Hacks books online or to contribute a hack for future titles, visit:

How This Book Is Organized

The book is divided into 10 chapters, organized by subject:

Chapter 1, Inside the Brain

The question is not just "How do we look inside the brain?" but "How do we talk about
what's there once we can see it?" There are a number of ways to get an idea about how
your brain is structured (from measuring responses on the outside to taking pictures of the
inside); that's half of this chapter. The other half speaks to the second question: we'll take
in some of the sights, check out the landmarks, and explore the geography of the brain.

Chapter 2, Seeing

The visual system runs all the way from the way we move our eyes to how we
reconstruct and see movement from raw images. Sight's an important sense to us; it's high
bandwidth and works over long distances (unlike, say, touch), and that's reflected in the
size of this chapter.

Chapter 3, Attention

One of the mechanisms we use to filter information before it reaches conscious
awareness is attention. Attention is sometimes voluntary (you can pay attention) and
sometimes automatic (things can be attention-grabbing); here we're looking at what it does
and some of its limitations.

Chapter 4, Hearing and Language

Sounds usually correspond to events; a noise usually means something's just happened.
We'll have a look at what our ears are good for, then move on to language and some of
the ways we find meaning in words and sentences.

Chapter 5, Integrating

It's rare we operate using just a single sense; we make full use of as much information as
we can find, integrating sight, touch, our propensity for language, and other inputs. When

senses agree, our perception of the world is sharper. We'll look at how we mix up modes
of operating (and how we can't help doing so, even when we don't mean to) and what
happens when senses disagree.

Chapter 6, Moving

This chapter covers the body: how the image the brain has of our body is easy to confuse
and also how we use our body to interact with the world. There's an illusion you can walk
around, and we'll have a little look at handedness too.

Chapter 7, Reasoning

We're not built to be perfect logic machines; we're shaped to get on as well as possible in
the world. Sometimes that shows up in the kind of puzzles we're good at and the sort of
things we're duped by.

Chapter 8, Togetherness

The senses give us much to go by, to reconstruct what's going on in the universe. We
can't perceive cause and effect directly, only that two things happen at roughly the same
time in roughly the same place. The same goes for complex objects: why see a whole
person instead of a torso, head, and collection of limbs? Our reconstructions of objects and
causality follow simple principles, which we explore in this chapter.

Chapter 9, Remembering

We wouldn't be human if we weren't continually learning and changing, becoming
different people. This chapter covers how learning begins at the level of memory over
very short time periods (minutes, usually). We'll also look at how a few of the ways we
learn and remember manifest themselves.

Chapter 10, Other People

Other people are a fairly special part of our environment, and it's fair to say our brains
have special ways of dealing with them. We're great at reading emotions, and we're even
better at mimicking emotions and other people in general, so good we often can't help it.
We'll cover both of those.

1.1. Hacks 1-12

It's never entirely true to say, "This bit of the brain is solely responsible for function X." Take the
visual system [Hack #13], for instance; it runs through many varied parts of the brain with no
single area solely responsible for all of vision. Vision is made up of lots of different
subfunctions, many of which will be compensated for if areas become unavailable. With some
types of brain damage, it's possible to still be able to see, but not be able to figure out what's
moving or maybe not be able to see what color things are.
What we can do is look at which parts of the brain are active while it is performing a particular
task (anything from recognizing a face to playing the piano) and make some assertions. We can
provide input and see what output we get: the black box approach to the study of mind. Or we can
work from the outside in, figuring out which abilities people with certain types of damaged
brains lack.
The latter, part of neuropsychology [Hack #6], is an important tool for psychologists. Small,
isolated strokes can deactivate very specific brain regions, and also (though more rarely)
accidents can damage small parts of the brain. Seeing what these people can no longer do in
these pathological cases provides good clues to the functions of those regions of the brain.
Animal experimentation, purposely removing pieces of the brain to see what happens, is another.
These are, however, pathology-based methods; less invasive techniques are available. Careful
experimentation (measuring response types, reaction times, and response changes to certain
stimuli over time) is one such alternative. That's cognitive psychology [Hack #1], the science of
making deductions about the structure of the brain through reverse engineering from the outside.
It has a distinguished history. More recently we've been able to go one step further. Pairing
techniques from cognitive psychology with imaging methods and stimulation techniques
[Hack #2] through [Hack #5], we can manipulate and look at the brain from the outside, without
having to, say, remove the skull and pull a bit of the cerebrum out. These imaging methods are
so important and referred to so much in the rest of this book, we've provided an overview and
short explanation for some of the most common techniques in this chapter.
In order that the rest of the book make sense, after looking at the various neuroscience
techniques, we take a short tour round the central nervous system [Hack #7], from the spine, to
the brain [Hack #8], and then down to the individual neuron [Hack #9] itself. But what we're
really interested in is how the biology manifests in everyday life. What does it really mean for
our decision-making systems to be assembled from neurons rather than, well, silicon, like a
computer? What it means is that we're not software running on hardware. The two are one and
the same, the physical properties of our mental substrate continually leaking into everyday life:
the telltale sign of our neurons is evident when we respond faster to brighter lights [Hack #11],
and our biological roots show through when blood flow has to increase because we're thinking so
hard [Hack #10].
And finally take a gander at a picture of the body your brain thinks you have and get in touch
with your inner sensory homunculus [Hack #12].

Hack 1. Find Out How the Brain Works Without Looking Inside

How do you tell what's inside a black box without looking in it? This is the challenge the mind
presents to cognitive psychology.
Cognitive psychology is the psychology of the basic mental processes: things like perception,
attention, memory, language, and decision-making. It asks the question, "What are the fundamental
operations on which mind is based?"
The problem is, although you can measure what goes into someone's head (the input) and
measure roughly what they do (the output), this doesn't tell you anything about what goes on in
between. It's a black box, a classic reverse engineering problem.1 How can we figure out how it
works without looking at the code?
These days, of course, we can use neuroimaging (like EEG [Hack #2], PET [Hack #3], and fMRI
[Hack #4]) to look inside the head at the brain, or use information on anatomy and information
from brain-damaged individuals [Hack #6] to inform how we think the brain runs the algorithms
that make up the mind. But this kind of work hasn't always been possible, and it's never been
easy or cheap. Experimental psychologists have spent more than a hundred years refining
methods for getting insight into how the mind works without messing with the insides, and these
days we call this cognitive psychology.
There's an example of a cognitive psychology-style solution in another book from the Hacks
series, Google Hacks. Google obviously doesn't give access to the algorithms that run its
searches, so the authors of Google Hacks, Tara Calishain and Rael Dornfest, were forced to do a
little experimentation to try to work it out. Obviously, if you
put in two words, Google returns pages that feature both words. But does the order matter?
Here's an experiment. Search Google for "reverse engineering" and then search for "engineering
reverse." The results are different; in fact, they are sometimes different even when searching for
words that aren't normally taken together as some form of phrase. So we might conclude that
order does make a difference; in some way, the Google search algorithm takes into account the
order. If you try to whittle a search down to the right terms, something that returned only a
couple of hits, perhaps over time you could figure out more exactly how the order mattered.
This is basically what cognitive psychology tries to do, reverse engineering the basic functions of
the mind by manipulating the inputs and looking at the results. The inputs are often highly
restricted situations in which people are asked to make judgments or responses in different kinds
of situations. How many words from the list you learned yesterday can you still remember? How
many red dots are there? Press a key when you see an X appear on the screen. That sort of thing.
The speed at which they respond, the number of errors, or the patterns of recall or success tell us
something about the information our cognitive processes use, and how they use it.
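The input-output logic described above can be sketched as a toy black-box experiment. Here respond() is a made-up stand-in for a participant (not a real cognitive model), with an arbitrary hidden rule that stronger stimuli produce faster reactions:

```python
import random

# Toy illustration of the black-box method: we can't see inside
# respond(), so we vary the input and measure the output, just as a
# cognitive psychologist varies stimuli and measures reaction times.

def respond(stimulus_intensity):
    # Hidden rule: brighter stimuli get faster responses, plus noise.
    base = 400 - 20 * stimulus_intensity   # reaction time in ms
    return base + random.gauss(0, 10)

def mean_reaction_time(intensity, trials=1000):
    # Average over many trials to see through the noise.
    return sum(respond(intensity) for _ in range(trials)) / trials

# Manipulate the input, observe the output:
slow = mean_reaction_time(1)
fast = mean_reaction_time(10)
print(slow > fast)  # True: responses to the stronger stimulus are faster
```

Averaging over many trials is exactly why psychologists collect many responses per condition: the hidden regularity only emerges once the trial-to-trial noise is washed out.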

A few things make reverse engineering the brain harder than reverse engineering software, however.
Biological systems are often complex, sometimes even chaotic (in the technical sense). This
means that there isn't necessarily a one-to-one correspondence in how a change in input affects
output. In a logic-based or linear system, we can clearly see causes and effects. The mind,
however, doesn't have this orderly mapping. Small things have big effects, and sometimes big
changes in circumstance can produce little obvious difference in how we respond. Biological
functions (including cognition) are often supported by multiple processes. This means they are
robust to changes in just one supporting process, but it also means that they don't always respond
how you would have thought when you try to influence them.
People also aren't consistent in the same way software or machines usually are. Two sources of
variability are noise and learning. We don't automatically respond in the same way to the same
stimulus every time. This sometimes happens for no apparent reason, and we call this
randomness noise. But sometimes our responses change for a reason, not because of noise, and
that's because the very act of responding first time around creates feedback that informs our
response pattern for the next time (for example, when you get a new bike, you're cautious with
your stopping distance at first, but each time you have to stop suddenly, you're better informed
about how to handle the braking next time around). Almost all actions affect future processing,
so psychologists make sure that if they are testing someone the test subject has either done the
thing in question many times before, and hence stopped changing his response to it, or he has
never done it before.
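The two sources of variability described above, noise and learning, can be caricatured in a few lines. The noise level and learning rate here are arbitrary numbers chosen for illustration, not estimates from any real experiment:

```python
import random

# Toy model of response variability: every trial has random noise,
# and feedback from each trial shrinks the systematic error, like
# learning how hard to brake on a new bike.
def run_trials(n_trials):
    error = 100.0          # initial systematic error (arbitrary units)
    observed = []
    for _ in range(n_trials):
        observed.append(error + random.gauss(0, 5))  # noise on each trial
        error *= 0.8       # learning: feedback reduces the error
    return observed

trials = run_trials(20)
print(abs(trials[-1]) < abs(trials[0]))  # practice beats the noise
```

This is why, as the text notes, experimenters either let subjects practice until responses stop changing (the error term has flattened out) or test them on their very first attempt.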
Another problem with trying to guess how the mind works is that you can't trust people when
they offer their opinion on why they did something or how they did it. At the beginning of the
twentieth century, psychology relied heavily on introspection, and the confusion this generated led
to the movement that dominated psychology until the '70s: behaviorism. Behaviorism insisted that
we treat only what we can reliably measure as part of psychology and excluded all reference to
internal structures. In effect we were to pretend that psychology was just the study of how
stimuli were linked to outputs. This made psychology much more rigorous experimentally
(although some would argue less interesting). Psychology today recognizes the need to posit
mind as more than simple stimulus-response matching, although cognitive psychologists retain
the behaviorists' wariness of introspection. For cognitive psychologists, why you think you did
something is just another bit of data, no more privileged than anything else they've measured,
and no more likely to be right.2
Cognitive psychology takes us a long way. Many phenomena discovered by cognitive and
experimental psychology are covered in this book: things like the attentional blink [Hack #39] and
state-dependent recall [Hack #87]. The rigor and precision of the methods developed by
cognitive psychology are still vital, but now they can be used in tandem with methods that give
insight into the underlying brain structure and processes that are supporting the phenomenon
being investigated.

1.2.1. End Notes

1. Daniel Dennett has written a brief essay called "Cognitive Science as Reverse
Engineering," in which he discusses the philosophy of this approach to mind.
2. A psychologist called Daryl Bem formalized this in "self-perception theory." He said
"Individuals come to know their own attitudes, emotions and internal states by inferring
them from observations of their own behavior and circumstances in which they occur.
When internal cues are weak, ambiguous, or uninterpretable, the individual is in the same
position as the outside observer." Bem, D. J. (1972). Self-perception theory. In L. Berkowitz
(ed.), Advances in Experimental Social Psychology, Volume 6.

Hack 2. Electroencephalogram: Getting the Big Picture with EEGs

EEGs give you an overall picture of the timing of brain activity but without any fine detail.
An electroencephalogram (EEG) produces a map of the electrical activity on the surface of the
brain. Fortunately, the surface is often what we're interested in, as the cortex (responsible for our
complex, high-level functions) is a thin sheet of cells on the brain's outer layer. Broadly, different
areas contribute to different abilities, so one particular area might be associated with grammar,
another with motion detection. Neurons send signals to one another using electrical impulses, so
we can get a good measure of the activity of the neurons (how busy they are doing the work of
processing) by measuring the electromagnetic field nearby. Electrodes outside the skull on the
surface of the skin are close enough to take readings of these electromagnetic fields.
Small metal disks are evenly placed on the head, held on by a conducting gel. The range can vary
from two to a hundred or so electrodes, all taking readings simultaneously. The output can be a
simple graph of signals recorded at each electrode or visualized as a map of the brain with
activity called out.
1.3.1. Pros

The EEG technique is well understood and has been in use for many decades. Patterns of
electrical activity corresponding to different states are now well-known: sleep, epilepsy,
or how the visual cortex responds when the eyes are in use. It is from EEG that we get the
concepts of alpha, beta, and gamma waves, related to three kinds of characteristic
oscillations in the signal.
Great time resolution. A reading of electrical activity can be taken every few
milliseconds, so the brain's response to stimuli can be precisely plotted.
Relatively cheap. Home kits are readily available. OpenEEG, "EEG for the rest of us," is
a project to develop low-cost EEG devices, both hardware and software.
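The characteristic oscillations and fine time resolution mentioned above can be illustrated with a toy computation: with a sample every 4 ms, even a naive Fourier sum recovers a simulated 10 Hz alpha rhythm. The signal and sampling rate here are invented for illustration, not real EEG data:

```python
import math, cmath

# Simulate one second of "EEG": a pure 10 Hz (alpha-band) oscillation
# sampled 250 times per second, i.e. every 4 ms.
fs = 250                       # sampling rate in Hz
n = 250                        # one second of samples
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]

def power_at(freq_hz):
    # Naive discrete Fourier transform evaluated at one frequency.
    s = sum(signal[t] * cmath.exp(-2j * math.pi * freq_hz * t / fs)
            for t in range(n))
    return abs(s)

print(power_at(10) > power_at(25))  # True: the alpha peak dominates
```

With millisecond-scale sampling, the spectrum cleanly separates the alpha band from other frequencies; this is the time-domain strength that the spatial resolution of EEG (see Cons) lacks.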

1.3.2. Cons

Poor spatial resolution. You can take only as many readings in space as electrodes you
attach (up to 100, although 40 is common). Even if you are recording from many
locations, the electrical signals from the scalp don't give precise information on where
they originate in the brain. You are getting only information from the surface of the skull
and cannot perfectly infer what and where the brain activity was that generated the
signals. In effect this means that it's useful for looking at overall activity or activity in
regions no more precise than an inch or so across.

Hack 3. Positron Emission Tomography: Measuring Activity Indirectly with PET

PET is a radioactivity-based technique to build a detailed 3D model of the brain and its activity.
Positron emission tomography (PET) is more invasive than any of the other imaging techniques.
It requires getting a radioactive chemical into the bloodstream (by injection) and watching for
where in the brain the radioactivity ends up: the "positron emission" of the name. The level of
radioactivity is not dangerous, but this technique should not be used on the same person on a
regular basis.
When neurons fire to send a signal to other neurons, they metabolize more energy. A few
seconds later, fresh blood carrying more oxygen and glucose is carried to the region. Using a
radioactive isotope of water, the amount of blood flow to each brain location can be monitored,
and the active areas of the brain that require a lot of energy and therefore blood flow can be
identified.

1.4.1. Pros

A PET scan will produce a 3D model of brain activity.

1.4.2. Cons

Scans have to take place in bulky, expensive machinery, which contains the entire body.
PET requires injecting the subject with a radioactive chemical.
Although the resolution of images has improved over the last 30 years, PET still doesn't
produce as fine detail as other techniques (it can see activity about 1 cm across).
PET isn't good for looking at how brain activity changes over time. A snapshot can take
minutes to be assembled.

Hack 4. Functional Magnetic Resonance Imaging: The State of the Art

fMRI produces high-resolution animations of the brain in action.
Functional magnetic resonance imaging (fMRI) is the king of brain imaging. Magnetic
resonance imaging is noninvasive and has no known side effects, except, for some,
claustrophobia. Having an MRI scan requires you to lie inside a large electromagnet in order to
be exposed to the high magnetic field necessary. It's a bit like being slid inside a large white
coffin. It gets pretty noisy too.
The magnetic field pushes the hydrogen atoms in your brain into a state in which they all "line
up" and spin at the same frequency. A radio frequency pulse is applied at this exact frequency,
making the molecules "resonate" and then emit radio waves as they lose energy and return to
"normal." The signal emitted depends on what type of tissue the molecule is in. By recording
these signals, a 3D map of the anatomy of the brain is built up.
MRI isn't a new technology (it's been possible since the '70s), but it's been applied to psychology
with BOLD functional MRI (abbreviated to fMRI) only as recently as 1992. To obtain functional
images of the brain, BOLD (blood oxygen level dependent) fMRI utilizes the fact that
deoxygenated blood is magnetic (because of the iron in hemoglobin) and therefore makes the
MRI image darker. When neurons become active, fresh blood washes away the deoxygenated
blood in the precise regions of the brain that have been more active than usual.
While structural MRI can take a long time, fMRI can take a snapshot of activity over the whole
brain every couple of seconds, and the resolution is still higher than with PET [Hack #3]. It can
view activity in volumes of the brain only 2 mm across and build a whole map of the brain from
that. For a particular experiment, a series of fMRI snapshots will be animated over a single
high-resolution MRI scan, and experimenters can see exactly in which brain areas activity is
taking place.
Much of the cognitive neuroscience research done now uses fMRI. It's a method that is still
developing and improving, but already producing great results.
1.5.1. Pros

High spatial resolution and good enough time resolution to look at changing patterns of
activity. While not able to look at the changing brain as easily as EEG [Hack #2], its far
greater spatial resolution means fMRI is suitable for looking at which parts of the brain
are active in the process of recalling a fact, for example, or seeing a face.

1.5.2. Cons

Bulky, highly magnetic, and very expensive machinery.
fMRI is still new. It's a complex technique requiring computing power and a highly
skilled team with good knowledge both of physics and of the brain.

Hack 5. Transcranial Magnetic Stimulation: Turn On and Off Bits of the Brain

Stimulate or suppress specific regions of the brain, then sit back and see what happens.
Transcranial magnetic stimulation (TMS) isn't an imaging technique like EEG [Hack #2] or fMRI
[Hack #4], but it can be used along with them. TMS uses a magnetic pulse or oscillating
magnetic fields to temporarily induce or suppress electrical activity in the brain. It doesn't require
large machines, just a small device around the head, and, so far as we know, it's harmless, with no
side effects.
Neurons communicate using electrical pulses, so being able to produce electrical activity
artificially has its advantages. Selected regions can be excited or suppressed, causing
hallucinations or partial blindness if some part of the visual cortex is being targeted. Both uses
help discover what specific parts of the brain are for. If the subject experiences muscle
twitching, the TMS has probably stimulated some motor control neurons, and causing
hallucinations at different points in the visual system can be used to discover the order of
processing (it has been used to discover where vision is cut out during saccades [Hack #17], for
example).
Preventing a region from responding is also useful: if shutting down neurons in a particular area
of the cortex stops the subject from recognizing motion, that's a good clue as to the function of
that area. This kind of discovery was possible before only by finding people with localized brain
damage; now TMS allows more structured experiments to take place.
Coupled with brain imaging techniques, it's possible to see the brain's response to a magnetic
pulse ripple through connected areas, revealing its structure.
1.6.1. Pros

Affects neural activity directly, rather than just measuring it.

1.6.2. Cons

Apparently harmless, although it's still early days.

1.6.3. See Also

"Savant For a Day" by Lawrence Osborne, an article in the New York Times, which
describes Osborne's experience of TMS, having higher-level functions of his brain
suppressed and a different type of intelligence emerging

Hack 6. Neuropsychology, the 10% Myth, and Why You Use All of Your Brain

Neuropsychology is the study of what different parts of the brain do by studying people who no
longer have those parts. As well as being the oldest technique of cognitive neuroscience, it
refutes the oft-repeated myth that we only use 10% of our brains.
Of the many unscientific nuggets of wisdom about the brain that many people believe, the most
common may be the "fact" that we use only 10% of our brains.
In a recent survey of people in Rio de Janeiro with at least a college education, approximately
half stated that the 10% myth was true.1 There is no reason to suppose the results of a similar
survey conducted anywhere else in the world would be radically different. It's not surprising that
a lot of people believe this myth, given how often it is claimed to be true. Its continued
popularity has prompted one author to state that the myth has "a shelf life longer than lacquered
Spam."2
Where does this rather popular belief come from?
It's hard to find out how the myth started. Some people say that something like it was said by
Einstein, but there isn't any proof. The idea that we have lots of spare capacity is certainly
appealing and fits with our aspirational culture, as well as with the Freudian notion that the mind
is mostly unconscious. Indeed, the myth was being used to peddle self-help literature as early as 1929.3
The neatness and numerological potency of the 10% figure is a further factor in the endurance of
the myth.
Neuropsychology is the study of patients who have suffered brain damage and the psychological
consequences of that brain damage. As well as being a vital source of information about which
bits of the brain are involved in doing which things, neuropsychology also provides a neat
refutation of the 10% myth: if we use only 10% of our brains, which bits would you be happy to
lose? From neuropsychology, we know that losing any bit of the brain causes you to stop being
able to do something, or to stop doing it as well. It's all being used, not just 10% of it.
Admittedly we aren't clear on exactly what each bit of the brain does, but that doesn't mean that
you can do without 90% of it.
Neuropsychology has other uses aside from disproving unhelpful but popularly held trivia. By
looking at which psychological functions remain after the loss of a certain brain region, we can
tell what brain regions are and are not necessary for us to do different things. We can also see
how functions group and divide by looking at whether they are always lost together or lost only
in dissimilar cases of brain damage. Two of the famous early discoveries of neuropsychology are
two distinct language processing regions in the brain. Broca's area (named after the

neuropsychologist Paul Broca) is in the frontal lobe and supports understanding and producing
structure in language. Those with damage to Broca's area speak in stilted, single words.
Wernicke's area (on the junction between the temporal and parietal lobes and named after Carl
Wernicke) supports producing and understanding the semantics of language. People with brain
damage to Wernicke's area can produce grammatically correct sentences, but often with little or
no meaning, an incomprehensible "word salad."
Another line of evidence against the 10% myth is brain imaging research ([Hack #2] through
[Hack #4]), which has grown exponentially in the last couple of decades. Such techniques allow
the increased blood flow to be measured in certain brain regions during the performance of
cognitive tasks. While debate continues about the degree to which it is sensible to infer much
about functional localization from imaging studies, one thing they make abundantly clear is that
there are no areas of the brain that are "black holes," areas that never "light up" in response to
some task or other. Indeed, the neurons that comprise the cortex of the brain are active to some
degree all the time, even during sleep.
A third line of argument is that of evolutionary theory. The human brain is a very expensive
organ, requiring approximately 20% of blood flow from the heart and a similar amount of
available oxygen, despite accounting for only 2% of body weight. The evolutionary argument is
straightforward: is it really plausible that such a demanding organ would be so inefficient as to
have spare capacity 10 times greater than the areas being usefully employed?
Fourth, developmental studies indicate that neurons that are not employed early in life are likely
never to recover and behave normally. For example, if the visual system is not provided with
light and stimulation within a fairly narrow developmental window, the neurons atrophy and
vision never develops. If the visual system is deprived of a specific kind of stimulation, such as
vertical lines, it develops without any sensitivity to that kind of stimulus. Functions in other parts
of the brain similarly rely on activation to develop normally. If there really were a large
proportion of neurons that were not used but were instead lying in wait, likely they would be
useless by puberty.
It can be seen, then, that the 10% myth simply doesn't stand up to critical thinking. Two factors
complicate the picture slightly, however; both have been used to muddy the waters around the
claim at some stage.
First, people who suffer hydrocephalus in childhood have been seen to have large "holes" in the
middle of their brains and yet function normally (the holes are fluid-filled ventricles that are
present in every brain but are greatly enlarged in hydrocephalus). This condition has been the
focus of sensationalist television documentaries, the thrust of which is that we can get on
perfectly well without much of our brains. Such claims are willfully misleading: what such
examples actually show is the remarkable capacity of the brain to assign functioning to
alternative areas if there are problems with the "standard" areas during a specific time-point in
development. Such "neuronal plasticity," as it is known, is not seen following brain damage
acquired in adulthood. As discussed earlier, development of the brain depends on activity; this
same fact explains why hydrocephalic brains can function normally and makes having an
unused 90% extremely unlikely.

Second, there is actually a very disingenuous sense in which we do "use" only 10% of our brains.
The glial cells of the brain outnumber the neurons by a factor of roughly 10 to 1. Glial cells play
a supporting role to the neurons, which are the cells that carry the electrochemical signals of the
brain. It is possible, therefore, to note that only approximately 10% of the cells of the cortex are
directly involved in cognition.
This isn't what proponents of the 10% theory are referring to, however. Instead, the myth is
almost always a claim about mind, not brain. The claim is analogous to arguing that we operate
at only 10% of our potential (although "potential" is so immeasurable a thing, it is misleading
from the start to throw precise percentages around).
Uri Geller makes explicit the "untapped potential" interpretation in the introduction to Uri
Geller's Mind-Power Book:
Our minds are capable of remarkable, incredible feats, yet we don't use them to their full
capacity. In fact, most of us only use about 10 per cent of our brains, if that. The other 90 per
cent is full of untapped potential and undiscovered abilities, which means our minds are only
operating in a very limited way instead of at full stretch.
The confusion between brain and mind blurs the issue, while lending the claim an air of
scientific credibility because it talks about the physical brain rather than the unknowable mind.
But it's just not true that 90% of the brain's capacity is just sitting there unused. It is true that our
brains adjust their function according to experience [Hack #12], good news for the patients
studied by neuropsychology. Many of them recover some of the ability they have lost. It is also
true that the brain can survive a surprisingly large amount of damage and still sort of work
(compare pouring two pints of beer down your throat and two pints of beer into your computer's
hard disk drive for an illustration of the brain's superior resistance to insults). But neither of these
facts means that you have exactly 90% of untapped potential; you need all your brain's plasticity
and resistance to insult to keep learning and functioning across your life span.
In summary, the 10% myth isn't true, but it does offer an intuitively seductive promise of the
possibility of self-improvement. It has been around for at least 80 years, and despite having no
basis in current scientific knowledge and being refuted by at least 150 years of neuropsychology,
it is likely to exist for as long as people are keen to aspire to be something more than they are.
1.7.1. End Notes

1. Herculano-Houzel, S. (2002). Do you know your brain? A survey on public neuroscience
literacy at the closing of the decade of the brain. The Neuroscientist 8, 98-110.
2. Radford, B. (1999). The ten-percent myth. Skeptical Inquirer, March-April.
3. You can read all about the 10% myth in Beyerstein, B. L. (1999). Whence cometh the
myth that we only use 10% of our brains? In S. Della Sala (ed.), Mind Myths: Exploring
Popular Assumptions About the Mind and Brain. New York: John Wiley and Sons, 4-24,
and in these two online essays by Eric Chudler: "Do We Use Only 10% of Our Brain?"
and "Myths About the Brain: 10 Percent and Counting."

Hack 7. Get Acquainted with the Central Nervous System

Take a brief tour around the spinal cord and brain. What's where, and what does what?
Think of the central nervous system like a mushroom with the spinal cord as the stalk and the
brain as the cap. Most of the hacks in this book arise from features in the cortex, the highly
interconnected cells that make a thin layer over the brain...but not all. So let's start outside the
brain itself and work back in.
Senses and muscles all over the body are connected to nerves, bundles of neurons that carry
signals back and forth. Neurons come in many types, but they're basically the same wherever
they're found in the body; they carry electric current and can act as relays, passing on
information from one neuron to the next. That's how information is carried from the sensory
surface of the skin, as electric signals, and also how muscles are told to move, by information
going the other way.
Nerves at this point run to the spinal cord two by two. One of each pair of nerves is for receptors
(a sense of touch, for instance) and one for effectors, which trigger actions in muscles and glands.
At the spinal cord, there's no real intelligence yet, but already some decision-making, such as the
withdrawal reflex, occurs. Urgent signals, like a strong sense of heat, can trigger an effector
response (such as moving a muscle) before that signal even reaches the brain.
The spinal cord acts as a conduit for nerve impulses up and down the body: sensory impulses
travel up to the brain, and the motor areas of the brain send signals back down again. Inside the
cord, the signals converge into 31 pairs of nerves (sensory and motor again), and eventually, at
the top of the neck, these meet the brain.
At about the level of your mouth, right in the center of your head, the bundles of neurons in the
spinal cord meet the brain proper. This tip of the spinal cord, called the brain stem, continues like
a thick carrot up to the direct center of your brain, at about the same height as your eyes.
This, with some other central regions, is known as the hindbrain. Working outward from the
brain stem, the other large parts of the brain are the cerebellum, which runs behind the soft area
you can feel at the lower back of your head, and the forebrain, which is almost all the rest and
includes the cortex.
Hindbrain activities are mostly automatic: breathing, the heartbeat, and the regulation of the
blood supply.
The cerebellum is old brain, almost as if it were evolution's first go at performing higher-brain
functions, coordinating the senses and movement. It plays an important role in learning and also
in motor control: removing the cerebellum produces characteristic jerky movements. The

cerebellum takes input from the eyes and ears, as well as the balance system, and sends motor
signals to the brain stem.
Sitting atop the hindbrain is the midbrain, which is small in humans but much larger in animals
like bats. For bats, it is a relay station for auditory information; bats make
extensive use of their ears. For us, the midbrain acts as a connection layer, penetrating deep into
the forebrain (where our higher-level functions are) and connecting back to the brain stem. It acts
partially to control movement, linking parts of the higher brain to motor neurons, and partially as
a hub for some of the nerves that don't travel up the spinal cord but instead come directly into the
brain: eye movement is one such function.
Now we're almost at the end of our journey. The forebrain, also known as the cerebrum, is the
bulbous mass divided into two great hemispheres; it's the distinctive image of the brain that we all
know. Buried in the cerebrum, right in the middle where it surrounds the tip of the brain stem
and midbrain, there's the limbic system and other primitive systems. The limbic system is
involved in essential and automatic responses like emotions, and includes the very tip of the
temporal cortex, the hippocampus and the amygdala, and, by some reckonings, the
hypothalamus. In some animals, like reptiles, this is all there is of the forebrain. For them, it's a
sophisticated olfactory system: smell is analyzed here, and behavioral responses like feeding and
fighting are triggered.
Neuroscientist joke: the hypothalamus regulates the four essential Fs of life: fighting, fleeing,
feeding, and mating.
For us humans, the limbic system has been repurposed. It still deals with smell, but the
hippocampus, for example (one part of the system), is now heavily involved in long-term memory
and learning. And there are still routing systems that take sensory input (from everywhere but the
nose, which is routed directly to the limbic system) and distribute it all over the forebrain.
Signals can come in from the rest of the cerebrum and activate or modulate limbic system
processing common to all animals, things like emotional arousal. The difference, for us humans, is
that the rest of the cerebrum is so large. The cap of the mushroom consists of four large lobes on
each hemisphere, visible when you look at the picture of the brain. Taken together, they make up
90% of the weight of the brain. And spread like a folded blanket over the whole of it is the layer
of massively interconnected neurons that is the cerebral cortex. If any development can be
said to be responsible for the distinctiveness of humanity, this is it. For more on what functions
the cerebral cortex performs, read [Hack #8].
As an orienting guide, it's useful to have a little of the jargon as well as the map of the central
nervous system. Described earlier are the regions of the brain based mainly on how they grow
and what the brain looks like. There are also functional descriptions, like the visual system [Hack
#13], that cross all these regions. They're mainly self-explanatory, as long as you remember that
functions tend to be both regions in the brain and pathways that connect areas together.

There are also positional descriptions, which describe the brain geographically and seem
confusing on first encounter. They're often used, so it's handy to have a crib, as shown in Figure 1-1.

Figure 1-1. Common labels used to specify particular parts of neuroanatomical areas

These terms are used to describe direction within the brain and prefix the Latin names of the
particular region they're used with (e.g., posterior occipital cortex means the back of the occipital
cortex).
Unfortunately, a number of different schemes are used to name the subsections of the brain, and
they don't always agree on where the boundaries of the different regions are. Analogous regions
in different species may have different names. Different subdisciplines use different schemes and
conventions too. A neuropsychologist might say "Broca's area," while a neuroanatomist might
say "Brodmann areas 44, 45, and 46," but they are both referring to the same thing. "Cortex" is also
"neocortex" is also "cerebrum." The analogous area in the rat is the forebrain. You get the
picture. Add to this the fact that many regions have subdivisions (the somatosensory cortex is in
the parietal lobe, which is in the neocortex, for example) and some subdivisions can be put by
different people in different supercategories, and it can get very confusing.
1.8.1. See Also


Three excellent online resources for exploring neuroanatomy are Brain Info, The
Navigable Atlas of the Human Brain, and The Whole Brain Atlas.

The Brain Museum houses lots of beautifully taken pictures of the brains from more than
175 different species.

BrainVoyager, which makes software for processing fMRI data, is kind enough to
provide a free program that lets you explore the brain in 3D.
Nolte, J. (1999). The Human Brain: An Introduction to Its Functional Anatomy.
Crossman, A. R., & Neary, D. (2000). Neuroanatomy: An Illustrated Colour Text.

Hack 8. Tour the Cortex and the Four Lobes

The forebrain, the classic image of the brain we know from pictures, is the part of the brain that
defines human uniqueness. It consists of four lobes and a thin layer on the surface called the cortex.
When you look at pictures of the human brain, the main thing you see is the rounded, wrinkled
bulk of the brain. This is the cerebrum, and it caps off the rest of the brain and central nervous
system [Hack #7].
To find your way around the cerebrum, you need to know only a few things. It's divided into two
hemispheres, left and right. It's also divided into four lobes (large areas demarcated by
particularly deep wrinkles). The wrinkles you can see on the outside are actually folds: the
cerebrum is a very large folded-up surface, which is why it's so wrinkled. Unfolded, this surface,
the cerebral cortex, would be about 1.5 m2 (a square roughly 50 inches on a side) and between 2
and 4 mm deep. It's not thick, but there's a lot of it, and this is where all the work takes place. The
outermost part, the top of the surface, is gray matter, the actual neurons themselves. Under a few
layers of these is the white matter, the fibers connecting the neurons together. The cortex is
special because it's mainly where our high-level, human functions take place. It's here that
information is integrated and combined from the other regions of the brain and used to modulate
more basic functions elsewhere in the brain. The folds exist to allow many more neurons and
connections than other animals have in a similar size area.
1.9.1. Cerebral Lobes

The four cerebral lobes generally perform certain classes of function.
You can cover the frontal lobe if you put your palms on your forehead with your fingers pointing
up. It's heavily involved in planning, socializing, language, and general control and supervision
of the rest of the brain.
The parietal lobe is at the top and back of your head, and if you lock your fingers together and
hook your hands over the top back, that's it covered there. It deals a lot with your senses,
combining information and representing your body and movements. The object recognition
module for visual processing [Hack #13] is located here.
You can put your hands on only the ends of the temporal lobe; it's right behind the ears. It sits
behind the frontal lobe and underneath the parietal lobe and curls up the underside of the
cerebrum. Unsurprisingly, auditory processing occurs here. It deals with language too (like
verbal memory), and the left hemisphere is specialized for this (non-linguistic sound is on the
right). The curled-up ends of the temporal lobe join into the limbic system at the hippocampus
and are involved in long-term memory formation.
Finally, there's the occipital lobe, right at the back of the brain, about midway down your head.
This is the smallest lobe of the cerebrum and is where the visual cortex is located.

The two hemispheres are joined together by another structure buried underneath the lobes, called
the corpus callosum. It's the largest bundle of nerve fibers in the whole nervous system. While
sensory information, such as vision, is divided across the two hemispheres of the brain, the
corpus callosum brings the sides back together. It's heavily coated in a fatty substance called
myelin, which speeds electrical conduction along nerve cells and is so efficient that the two sides
of the visual cortex (for example) operate together almost as if they're adjacent. Not bad
considering the corpus callosum is connecting together brain areas a few inches apart when the
cells are usually separated by only a millimeter or two.
1.9.2. Cerebral Cortex

The cortex, the surface of these lobes, is divided into areas performing different functions. This
isn't exact, of course, and they're highly interconnected and draw information from one another,
but more or less there are small areas of the surface that perform edge detection for visual
information or detect tools as opposed to animate objects in much higher-level areas of the brain.
How these areas are identified is covered in the various brain imaging and
methods hacks earlier in this chapter.

The sensory areas of the cortex are characterized by maps, representations of the information that
comes in from the senses. It's called a map because continuous variations in the value of inputs are
represented by continuous shifts in distance between where they are processed in the cortical
space. In the visual cortex, the spatial layout of the image on the retina is preserved. This spatial
map is retained for
each stage of early visual processing. This means that if two things are next to each other out
there in the world they will, at least initially, be processed by contiguous areas of the visual
cortex. This is just like when a visual image is stored on a photographic negative but unlike when a
visual image is stored in a JPEG image file. You can't automatically point to two adjoining parts
of the JPEG file and be certain that they will appear next to each other in the image. With a
photographic film and with the visual cortex, you can. Similarly, the auditory cortex creates
maps of what you're hearing, but as well as organizing things according to where they appear in
space, it also has maps that use frequency of the sound as the coordinate frame (i.e., they are
tonotopic). And there's an actual map in physical space, on the cortex, of the whole body surface
too, called the sensory homunculus [Hack #12]. You can tell how much importance the brain
gives to areas of the map, comparatively, by looking at how large they are. The middle of the
map of the primary visual cortex corresponds with the fovea in the retina, which is extremely
high resolution. It's as large as the rest of the visual map put together.
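The neighbor-preserving property that distinguishes these cortical maps can be sketched in a few lines of Python. This is an illustrative toy, not a model of real cortex: the mapping function and its uniform magnification factor are invented for the example, standing in for the continuous retina-to-cortex mapping described above.

```python
def cortical_position(x, y, magnification=3):
    """Toy topographic map: assign each 'retinal' point (x, y) a
    'cortical' point by uniform scaling. Any continuous mapping
    like this preserves which points are neighbors."""
    return (magnification * x, magnification * y)

def are_neighbors(p, q, tolerance):
    """True if two points lie within `tolerance` of each other
    on both axes."""
    return abs(p[0] - q[0]) <= tolerance and abs(p[1] - q[1]) <= tolerance

# Two points next to each other "out there in the world"...
a, b = (4, 7), (5, 7)
ca, cb = cortical_position(*a), cortical_position(*b)

# ...end up at neighboring cortical locations too.
print(are_neighbors(a, b, tolerance=1), are_neighbors(ca, cb, tolerance=3))
# prints: True True
```

A JPEG byte stream has no function like `cortical_position`: after compression, two adjacent bytes in the file need not correspond to adjacent patches of the image, which is exactly the contrast drawn above.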
When the cortex is discussed, that means the function in question is highly integrated with the
rest of the brain. When we consider what really makes us human and where consciousness is, it
isn't solely the cortex: the rest of the brain has changed function in humans, we have human
bodies and nervous systems, and we exist within environments that our brains reflect in their
adaptations. But it's definitely mostly the cortex. You are here.

Hack 9. The Neuron

There's a veritable electrical storm going on inside your head: 100 billion brain cells firing
electrical signals at one another are responsible for your every thought and action.
A neuron, a.k.a. nerve cell or brain cell, is a specialized cell that sends an electrical impulse out
along fibers connecting it, in turn, to other neurons. These guys are the wires of your very own
personal circuitry.
What follows is a simplified description of the general features of nerve cells, whether they are
found sending signals from your senses to your brain, from your brain to your muscles, or to and
from other nerve cells. It's this last class, the kind that people most likely mean when they say
"neurons," that we are most interested in here. (All nerve cells, however, share a common basic
structure.)
Don't for a second think that the general structure we're describing here is the end of
the story. The elegance and complexity of neuron design is staggering, a complex
interplay of structure and noise; of electricity, chemistry, and biology; of spatial and
dynamic interactions that result in the kind of information processing that cannot be
defined using simple rules.1 For just a glimpse at the complexity of neuron structure,
you may want to start with this free chapter on nerve cells from the textbook
Molecular Cell Biology by Harvey Lodish, Arnold Berk, Lawrence S. Zipursky, Paul
Matsudaira, David Baltimore, and James Darnell, published by W. H. Freeman,
but any advanced cell biology or neuroscience textbook will do to give
you an idea of what you're missing here.

The neuron is made up of a cell body with long offshoots; these can be very long (the whole
length of the neck, for some neurons in the giraffe, for example) or very short (i.e., reaching only
to the neighboring cell, scant millimeters away). Signals pass only one way along a neuron. The
offshoots receiving incoming transmissions are called dendrites. The outgoing end, which is
typically longer, is called the axon. In most cases there's only one long axon, which branches at
the tip as it connects to other neurons, up to 10,000 of them. The junction where the axon of one
cell meets the dendrites of another is called the synapse. Chemicals, called neurotransmitters, are
used to get the signal across the synaptic gap. Each neuron will release only one kind of
neurotransmitter, although it may have receptors for many different kinds. The arrival of the
electric signal at the end of the axon triggers the release of stores of the neurotransmitter that
move across the gap (it's very small, after all) and bind to receptor sites on the other side, places
on the neuron that are tuned to join with this specific type of chemical.
Whereas the signal between neurons uses neurotransmitters, internally it's electrical. The
electrical signal is sent along the neuron in the form of an action potential.2 This is what we

mean when we say impulses, signals, spikes, or refer, in brain imaging speak, to the firing or
lighting up of brain areas (because this is what activity looks like on the pictures that are made).
Action potentials are the fundamental unit of information in the brain, the universal currency of
the neural market.
The two most important computational features are as follows:


They are binary. A neuron either fires or doesn't, and each time it fires, the signal is the
same size (there's more on this later). Binary signals stop the message from becoming
diluted as neurons communicate with one another over distances that are massive
compared to the molecular scale on which they operate.
Neurons encode information in the rate at which they send signals, not in the size of the
signals they send. The signals are always the same size, with information encoded in the
frequency at which they are sent. A stronger signal is indicated by a higher frequency
of spikes, not larger single spikes. This is called rate coding.

Together these two features mean that the real language of the brain is not just a matter of spikes
(signals sent by neurons), but spikes in time.
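These two features can be illustrated with a toy simulation. The sketch below is a deliberate simplification, not a biophysical model: each millisecond a neuron either fires an identical, all-or-nothing spike or stays silent, so the stimulus intensity can show up only in the spike rate. The 500 Hz ceiling reflects the roughly 2-millisecond minimum gap between spikes mentioned later in the text; all other numbers are illustrative.

```python
import random

def spike_train(intensity, duration_ms=1000, max_rate_hz=500):
    """Generate a toy spike train: each millisecond, fire (1) or not (0).

    Spikes are binary and identical in size; only their frequency carries
    the signal. The randomness stands in for the noisy timing of real
    neurons: the rate tracks the stimulus, individual spike times don't.
    """
    p = min(intensity, 1.0) * max_rate_hz / 1000.0  # firing probability per ms
    return [1 if random.random() < p else 0 for _ in range(duration_ms)]

def firing_rate_hz(train):
    """Average firing rate of a train, in spikes per second."""
    return 1000.0 * sum(train) / len(train)

random.seed(0)  # fixed seed so the demonstration is repeatable
weak, strong = spike_train(0.2), spike_train(0.8)
# A stronger stimulus yields more spikes, not bigger ones: rate coding.
print(firing_rate_hz(weak) < firing_rate_hz(strong))  # True
```

Run a few times without the fixed seed and the exact spike times change every run, but the rates stay close to their averages: the message survives the noise.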
Whether or not a new spike, or impulse, is generated by the postsynaptic neuron (the one on the
receiving side of the synapse) is affected by the following interwoven factors:

The amount of neurotransmitter released
The interaction with other neurotransmitters released by other neurons
How near they are and how close together in space and time
In what order they release their neurotransmitters

All of this short-term information is affected by any previous history of interaction between these
two neurons (times one has caused the other to fire, and times they have both fired at the same time
for independent reasons), which slightly adjusts the probability of the interaction happening again.3
Spikes happen pretty often: up to once every 2 milliseconds at the
maximum rate of the fastest-firing cells (in the auditory system; see Chapter
4 for more on that). Although the average rate of firing is responsive to the
information being represented and transmitted in the brain, the actual timing
of individual spikes is unpredictable. The brain seems to have evolved an
internal communication system that adds noise to only one aspect of
the information it transmits: the timing, but not the size, of the signals
transmitted. Noise is a property of any biological system, so it's not
surprising that it persists even in our most complex organ. It could very well
also be the case that the noise [Hack #33] is playing some useful role in the
information processing the brain does.

After the neurotransmitter has carried (or not carried, as the case may be) the signal across the
synaptic gap, it's then broken down by specialized enzymes and reabsorbed to be released again

when the next signal comes along. Many drugs work by affecting the rate and quantity of
particular neurotransmitters released and the speed at which they are broken down and
reabsorbed.
Hacks such as [Hack #11] and [Hack #26] show some of the other consequences for psychology
of using neurons to do the work. Two good introductions to how neurons combine on a large
scale come from a British government Department of Trade and Industry project that aimed to
get neuroscientists and computer scientists to collaborate in producing reviews of recent
advances in their fields and summarize the implications for the development of artificial
cognitive systems.
1.10.1. End Notes

1. Gurney, K. N. (2001). Information processing in dendrites II. Information theoretic
complexity. Neural Networks, 14, 1005-1022.
2. You can start finding out details of the delicate electrochemical dance that allows the
transmission of these binary electrical signals on the pages about action potentials that are
part of a series of lecture notes on human physiology, the Neuroscience for Kids site,
and The Brain from Top to Bottom.
3. But this is another story: a story called learning.
1.10.2. See Also


How neurons are born, develop, and die is another interesting story and one that we're not
covering here. These notes from the National Institutes of Health are a good introduction.
Neurons actually make up less than a tenth of the cells in the brain. The other 90-98%, by
number, are glial cells, which are involved in development and maintenance: the
sysadmins of the brain. Recent research also suggests that they play more of a role in
information processing than was previously thought. You can read about this in the cover
story from the April 2004 edition of Scientific American (volume 290, #4), "The Other
Half of the Brain."

Hack 10. Detect the Effect of Cognitive Function on Cerebral Blood Flow

When you think really hard, your heart rate noticeably increases.
The brain requires approximately 20% of the oxygen in the body, even during times of rest. Like
the other organs in our body, our brain needs more glucose, oxygen, and other essential nutrients
as it takes on more work. Many of the scanning technologies that aim to measure aspects of brain
function take advantage of this. Functional magnetic resonance imaging (fMRI) [Hack #4]
benefits from the fact that oxygenated blood produces slightly different electromagnetic signals
when exposed to strong magnetic fields than deoxygenated blood and that oxygenated blood is
more concentrated in active brain areas. Positron emission tomography (PET) [Hack #3]
involves being injected with weakly radioactive glucose and reading the subsequent signals from
the most active, glucose-hungry areas of the brain.
A technology called transcranial Doppler sonography takes a different approach and measures
blood flow through veins and arteries. It takes advantage of the fact that the pitch of reflected
ultrasound will be altered in proportion to the rate of flow and has been used to measure
moment-to-moment changes in blood supply to the brain. It has been found to be particularly
useful in making comparisons between different mental tasks. However, even without
transcranial Doppler sonography, you can measure the effect of increased brain activity on blood
flow by measuring the pulse.
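The Doppler principle behind the sonography is straightforward: ultrasound reflected off moving blood cells comes back shifted in frequency in proportion to the blood's velocity. A sketch of the standard Doppler equation follows; the probe frequency, flow velocities, and beam angle are illustrative assumptions, not values taken from the text.

```python
import math

def doppler_shift_hz(f0_hz, velocity_m_s, angle_deg, c_m_s=1540.0):
    """Frequency shift of ultrasound reflected from moving blood.

    f0_hz        -- transmitted frequency (transcranial probes are ~2 MHz)
    velocity_m_s -- blood flow velocity along the vessel
    angle_deg    -- angle between the ultrasound beam and the flow
    c_m_s        -- speed of sound in soft tissue, roughly 1540 m/s

    The factor of 2 appears because the sound is shifted once on the way
    to the moving blood cells and again on reflection back to the probe.
    """
    return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / c_m_s

# Faster flow produces a proportionally larger shift in pitch, which is
# what lets the technique track moment-to-moment changes in blood supply.
slow = doppler_shift_hz(2e6, 0.5, 30)  # illustrative slow flow
fast = doppler_shift_hz(2e6, 1.0, 30)  # twice the velocity...
print(fast / slow)  # ...gives exactly twice the shift: 2.0
```

The linear relationship is the whole trick: read the pitch shift off the echo and you have read the flow rate.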
1.11.1. In Action

For this exercise you will need to get someone to measure your carotid pulse, taken from either
side of the front of the neck, just below the angle of the jaw. It is important that only very light
pressure be used: a couple of fingertips pressed lightly to the neck, next to the windpipe, should
enable your friend to feel your pulse with little trouble.
First you need to take a measure of a resting pulse. Sit down and relax for a few minutes. When
you are calm, ask your friend to count your pulse for 60 seconds. During this time, close your
eyes and try to empty your mind.
With a baseline established, ask your friend to measure your pulse for a second time, using
exactly the same method. This time, however, try and think of as many species of animals as you
can. Keeping still and with your eyes closed, think hard, and if you get stuck, try thinking up a
new strategy to give you some more ideas.
During the second session, your pulse rate is likely to increase as your brain requires more
glucose and oxygen to complete its task. Just how much increase you'll see varies from person to
person.
1.11.2. How It Works

Thinking of as many animals as possible is a type of verbal fluency task, testing how easily you
can come up with words. To complete the task successfully, you needed to be able to coordinate
various cognitive skills, for example, searching your memory for category examples, generating
and using strategies to think up more names (perhaps you thought about walking through the
jungle or animals from your local area) and checking you were not repeating yourself.
Neuropsychologists often use this task to test the executive system, the notional system that
allows us to coordinate mental tasks to solve problems and work toward a goal, skills that you
were using to think up examples of animals. After brain injury (particularly to the frontal cortex),
this system can break down, and the verbal fluency task can be one of the tests used to assess the
function of this system.
Research using PET scanning has shown similar verbal fluency tasks use a significant amount of
brain resources and large areas of the cortex, particularly the frontal, temporal, and parietal
lobes.1
Interestingly, in this study people who did best used less blood glucose than people who did not
perform as well. You can examine this relationship yourself by trying the earlier exercise on a
number of people. Do the people who do best show a slightly lower pulse than others? In these
cases, high performers seem to be using their brain more efficiently, rather than simply using
more brain resources.
Although measuring the carotid pulse is a fairly crude measure of brain activity compared to
PET scanning, it is still a good indirect measure of brain activity for this type of high-demand
mental task, as the carotid arteries supply both the middle and anterior cerebral arteries. They
supply blood to most major parts of the cortex, including the frontal, temporal, parietal, and
occipital areas, and so would be important in supplying the needed glucose and oxygen as your
brain kicks into gear.
One problem with PET scanning is that, although it can localize activity to certain brain areas, it
has poor temporal resolution, meaning it is not very good at detecting quick changes in the rate
of blood flow. In contrast, transcranial Doppler sonography can detect differences in blood flow
over very short periods of time (milliseconds). Frauenfelder and colleagues used this technique
to measure blood flow through the middle and anterior cerebral arteries while participants were
completing tasks that are known to need similar cognitive skills as the verbal fluency exercise.2
They found that the rate of blood flow changed second by second, depending on exactly which
part of the task the participant was tackling. While brain scanning can provide important
information about which areas of the brain are involved in completing a mental activity,
sometimes measuring something as simple as blood flow can fill in the missing pieces.
1.11.3. End Notes

1. Parks, R. W., Loewenstein, D. A., Dodrill, K. L., Barker, W. W., Yoshii, F., Chang, J. Y.,
Emran, A., Apicella, A., Sheramata, W. A., & Duara, R. (1988). Cerebral metabolic

effects of a verbal fluency test: A PET scan study. Journal of Clinical and Experimental
Neuropsychology, 10(5), 565-575.
2. Schuepbach, D., Merlo, M. C., Goenner, F., Staikov, I., Mattle, H. P., Dierks, T., &
Brenner, H. D. (2002). Cerebral hemodynamic response induced by the Tower of Hanoi
puzzle and the Wisconsin card sorting test. Neuropsychologia, 40(1), 39-53.

Hack 11. Why People Don't Work Like Elevator Buttons

More intense signals cause faster reaction times, but there are diminishing returns: as a stimulus
grows in intensity, eventually the reaction speed can't get any better. The formula that relates
intensity and reaction speed is Pieron's Law.
It's a common illusion that if you are in a hurry for the elevator you can make it come quicker by
pressing the button harder. Or more often. Or all the buttons at once. It somehow feels as if it
ought to work, although of course we know it doesn't. Either the elevator has heard you, or it
hasn't. How loud you call doesn't make any difference to how long it'll take to arrive.
But then elevators aren't like people. People do respond quicker to more stimulation, even on the
most fundamental level. We press the brake quicker for brighter stoplights, jump higher at louder
bangs. And it's because we all do this that we all fall so easily into thinking that things, including
elevators, should behave the same way.
1.12.1. In Action

Give someone this simple task: she must sit in front of a screen and press a button as quickly as
she can as soon as she sees a light flash on. If people were like elevators, the time it takes to
press the button wouldn't be affected by the brightness of the light or the number of lights.
But people aren't like elevators and we respond quicker to brighter lights; in fact, the relationship
between the physical intensity of the light and the average speed of response follows a precise
mathematical form. This form is captured by an equation called Pieron's Law. Pieron's Law says
that the time to respond to a stimulus is related to the stimulus intensity by the formula:

Reaction Time = R0 + kI^-β

Reaction Time is the time between the stimulus appearing and you responding. I is the physical
intensity of the signal. R0 is the minimum time for any response, the asymptotic value
representing all the components of the reaction time that don't vary, such as the time for light to
reach your eye. k and β are constants that vary depending on the exact setup and the particular
person involved. But whatever the setup and whoever the person, graphically the equation looks
like Figure 1-2.

Figure 1-2. How reaction time changes with stimulus intensity
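The shape of the curve in Figure 1-2 is easy to reproduce numerically. In this sketch the constants R0, k, and β are arbitrary illustrative values, since the real ones depend on the person and the setup:

```python
def reaction_time_ms(intensity, r0=180.0, k=120.0, beta=0.5):
    """Pieron's Law: Reaction Time = R0 + k * I^(-beta).

    r0      -- irreducible baseline (ms): light transit, motor response, etc.
    k, beta -- constants that depend on the person and the setup;
               the values here are arbitrary illustrations.
    """
    return r0 + k * intensity ** (-beta)

# Doubling a dim light buys a big speedup; doubling a bright one buys little.
for i in (1, 2, 4, 8, 16):
    print(i, round(reaction_time_ms(i), 1))
```

Each doubling of intensity shaves less off the reaction time than the one before, and the curve flattens toward the R0 asymptote, matching the figure.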

1.12.2. How It Works

In fact, Pieron's Law holds for the brightness of light, the loudness of sound, and even the
strength of taste.1 It says something fundamental about how we process signals and make
decisions: the physical nature of a stimulus carries through the whole system to affect the nature
of the response. We are not binary systems! The actual number of photons of light or the
amplitude of the sound waves that triggers us to respond influences how we respond. In fact, as
well as affecting response time, the physical intensity of the stimulus also affects response force
(e.g., how hard we press the button).
A consequence of the form of Pieron's Law is that increases in speed are easy for low-intensity
stimuli and get harder as the stimulus gains more intensity. It follows a log scale, like a lot of
things in psychophysics. The converse is also true: for quick reaction times, it's easier to slow
people down than to speed them up.
Pieron's Law probably results because of the fundamental way the decisions have to be made
with uncertain information. Although it might be clear to you that the light is either there or not,
that's only because your brain has done the work of removing the uncertainty for you. And on a
neural level, everything is uncertain because neural signals always have noise in them.
So as you wait for light to appear, your neuronal decision-making hardware is inspecting noisy
inputs and trying to decide if there is enough evidence to say "Yes, it's there!" Looking at it like
this, your response time is the time to collect enough neural evidence that something has really
appeared. This is why Pieron's Law applies; more intense stimuli provide more evidence, and the
way in which they provide more evidence results in the equation shown earlier.
To see why, think of it like this: Pieron's Law is a way of saying that the response time improves
but at a decreasing rate, as the intensity (i.e., the rate at which evidence accumulates) increases.
Try this analogy: stimulus intensity is your daily wage and making a response is buying a $900
holiday. If you get paid $10 a day, it'll take 90 days to get the money for the holiday. If you get a
raise of $5, you could afford the holiday in 60 days, 30 days sooner. If you got two $5 raises,
you'd be able to afford the holiday in 45 days, only 15 days sooner than how long it would take
with just one $5 raise. The time until you can afford a holiday gets shorter as your wage goes up,
but it gets shorter more slowly, and if you do the math it turns out to be an example of Pieron's
Law.
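The arithmetic of the analogy can be checked directly. Saving time follows days = price / wage, which is Pieron's Law with R0 = 0, k = 900, and β = 1:

```python
def days_to_afford(wage, price=900):
    # Time to save up: the analogy's version of reaction time,
    # with the daily wage playing the role of stimulus intensity.
    return price / wage

print(days_to_afford(10))  # 90.0
print(days_to_afford(15))  # 60.0  (a $5 raise saves 30 days)
print(days_to_afford(20))  # 45.0  (another $5 raise saves only 15 more)
```

Equal raises, shrinking savings: the diminishing returns fall straight out of the formula.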
1.12.3. End Note

1. Pins, D., & Bonnet, C. (1996). On the relation between stimulus intensity and processing
time: Pieron's law and choice reaction time. Perception & Psychophysics, 58(3), 390-400.
1.12.4. See Also



Stafford, T., & Gurney, K. G. (in press). The role of response mechanisms in determining
reaction time performance: Pieron's law revisited. Psychonomic Bulletin & Review.
Luce, R. D. (1986). Response Times: Their Role in Inferring Elementary Mental
Organisation. New York: Clarendon Press. An essential one-stop for all you need to know
about modeling reaction times.
Pieron, H. (1952). The Sensations: Their Functions, Processes and Mechanisms. London:
Frederick Muller Ltd. The book in which Pieron first proposed his law.

Hack 12. Build Your Own Sensory Homunculus

All abilities are skills; practice something and your brain will devote more resources to it.
The sensory homunculus looks like a person, but swollen and out of all proportion. It has hands
as big as its head; huge eyes, lips, ears, and nose; and skinny arms and legs. What kind of person
is it? It's you, the person in your head. Have a look at the sensory homunculus first, then make
your own.
1.13.1. In Action

You can play around with Jaakko Hakulinen's homunculus applet (Java) to see where different
bits of the body are represented in the sensory and motor cortex. There's a screenshot of it in
Figure 1-3.
Figure 1-3. The figure shown is scaled according to the relative sizes of the body parts in the motor
and sensory cortex areas; motor is shown on the left, sensory on the right

This is the person inside your head. Each part of the body has been scaled according to how
much of your sensory cortex is devoted to it. The area of cortex responsible for processing touch
sensations is the somatosensory cortex. It lives in the parietal lobe, further toward the back of the
head than the motor cortex, running alongside it from the top of the head down each side of the
brain. Areas for processing neighboring body parts are generally next to each other in the cortex,
although this isn't always possible because of the constraints of mapping the 3D surface of your
skin to a 2D map. The area representing your feet is next to the area representing your genitals,
for example (the genital representation is at the very top of the somatosensory cortex, inside the
groove between the two hemispheres).
The applet lets you compare the motor and sensory maps. The motor map is how body parts are
represented for movement, rather than sensation. Although there are some differences, they're
pretty similar. Using the applet, when you click on a part of the little man, the corresponding part
of the brain above lights up. The half of the man on the left is scaled according to the
representation of the body in the primary motor cortex, and the half on the right is scaled to
represent the somatosensory cortex. If you click on a brain section or body part, you can toggle
shading and the display of the percentage of sensory or motor representation commanded by that
body part. The picture of the man is scaled, too, according to how much cortex each part
corresponds to. That's why the hands are so much larger than the torso.
Having seen this figure, you can see the relative amount of your own somatosensory cortex
devoted to each body part by measuring your touch resolution. To do this, you'll need a willing
friend to help you perform the two-point discrimination test.
Ask your friend to get two pointy objectstwo pencils will doand touch one of your palms with
both of the points, a couple of inches apart. Look away so you can't see him doing it. You'll be
able to tell there are two points there. Now get your friend to touch with only one pencilyou'll be
able to tell you're being touched with just one. The trick now is for him to continue touching
your palm with the pencils, sometimes with both and sometimes with just one, moving the tips
ever closer together each time. At a certain point, you won't be able to tell how many pencils he's
using. In the center of your palm, you should be able to discriminate between two points a
millimeter or so apart. At the base of your thumb, you've a few millimeters of resolution.
Now try the same on your backyour two-point discrimination will be about 4 or 5 centimeters.
To draw a homunculus from these measurements, divide the actual width of your body part by
the two-point discrimination to get the size of each part of the figure.
My back's about 35 centimeters across, so my homunculus should have a
back that's 9 units wide (35 divided by 4 centimeters, approximately). Then
the palms should be 45 units across (my palm is 9 centimeters across; divide
that by 2 millimeters to get 45 units). Calculating in units like this will give
you the correct scalesthe hand in my drawing will be five times as wide as
the back.
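The note's calculation generalizes to any set of measurements you collect. A small sketch, using the back and palm figures from the note (any other body parts and thresholds you add are your own measurements):

```python
def homunculus_units(width_cm, two_point_cm):
    """Size of a body part on your personal homunculus: the part's actual
    width divided by its two-point discrimination threshold (both in cm)."""
    return width_cm / two_point_cm

# Measurements from the note above: (width in cm, two-point threshold in cm).
measurements = {
    "back": (35.0, 4.0),  # 35 cm wide, ~4 cm two-point threshold
    "palm": (9.0, 0.2),   # 9 cm wide, ~2 mm threshold
}
for part, (width, threshold) in measurements.items():
    print(part, round(homunculus_units(width, threshold), 1))
# The palm comes out about five times wider than the back, as in the note.
```

Because every part is divided by its own threshold, the resulting units are directly comparable: draw each part that many units wide and the distorted figure emerges by itself.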

That's only two parts of your body. To make a homunculus like the one in Hakulinen's applet (or,
better, the London Natural History Museum's sensory homunculus model), you'll also need
measurements all over your face, your limbs, your feet, fingers, belly, and the rest. You'll need to
find a fairly close friend for this experiment, I'd imagine.
1.13.2. How It Works

The way the brain deals with different tactile sensations is the way it deals with many different
kinds of input. Within the region of the brain that deals with that kind of input is a surface over
which different values of that input are processeddifferent values correspond to different actual
locations in physical space. In the case of sensations, the body parts are represented in different
parts of the somatosensory cortex: the brain has a somatotopic (body-oriented) map. In hearing,
different tones activate different parts of the auditory cortex: it has a tonotopic map. The same
thing happens in the visual system, with much of the visual cortex being organized in terms of
feature maps comprised of neurons responsible for representing those features, ordered by where
the features are in visual space.
Maps mean that qualities of stimuli can be represented continuously. This becomes important
when you consider that the evidence for each qualityin other words, the rate at which the neurons
in that part of the map are firingis noisy, and it isn't the absolute value of neural firing that is
used to calculate which is the correct value but the relative value. (See [Hack #25] on the motion
aftereffect for an example of this in action.)
The more cells the brain dedicates to building the map representing a sense or motor skill, the
more sensitive we are in discriminating differences in that type of input or in controlling output.
With practice, changes in our representational maps can become permanent.
Brain scanning of musicians has shown that they have larger cortical representations of the body
parts they use to play their instruments in their sensory areas: more neurons devoted to finger
movements among guitarists, more neurons devoted to lips among trombonists. Musicians'
auditory maps of "tone-space" are larger, with neurons more finely tuned to detecting differences
in sounds,1 and orchestra conductors are better at detecting where a sound among a stream of
other sounds is coming from.
It's not surprising that musicians are good at these things, but the neuroimaging evidence shows
that practice alters the very maps our brains use to understand the world. This explains why
small differences are invisible to beginners, but stark to experts. It also offers a hopeful message
to the rest of us: all abilities are skills; if you practice them, your brain will get the message and
devote more resources to them.
1.13.3. End Note

1. Münte, T. F., Altenmüller, E., & Jäncke, L. (2002). The musician's brain as a model for
neuroplasticity. Nature Neuroscience Reviews, 3, 473-478. (This is a review paper rather
than an original research report.)

1.13.4. See Also

Pantev, C., Oostenveld, R., Engelien, A., Ross, B., Roberts, L. E., & Hoke, M. (1998).
Increased auditory cortical representation in musicians. Nature, 392, 811-814.
Pleger B., Dinse, H. R., Ragert, P., Schwenkreis, P., Malin, J. P., & Tegenthoff, M.
(2001). Shifts in cortical representations predict human discrimination improvement.
Proceedings of the National Academy of Sciences of the USA, 98, 12255-12260.

Chapter 2. Seeing
Section 2.1. Hacks 13-33
Hack 13. Understand Visual Processing
Hack 14. See the Limits of Your Vision
Hack 15. To See, Act
Hack 16. Map Your Blind Spot
Hack 17. Glimpse the Gaps in Your Vision
Hack 18. When Time Stands Still
Hack 19. Release Eye Fixations for Faster Reactions
Hack 20. Fool Yourself into Seeing 3D
Hack 21. Objects Move, Lighting Shouldn't
Hack 22. Depth Matters
Hack 23. See How Brightness Differs from Luminance: The Checker Shadow Illusion
Hack 24. Create Illusionary Depth with Sunglasses
Hack 25. See Movement When All Is Still
Hack 26. Get Adjusted
Hack 27. Show Motion Without Anything Moving
Hack 28. Motion Extrapolation: The "Flash-Lag Effect"
Hack 29. Turn Gliding Blocks into Stepping Feet
Hack 30. Understand the Rotating Snakes Illusion
Hack 31. Minimize Imaginary Distances
Hack 32. Explore Your Defense Hardware
Hack 33. Neural Noise Isn't a Bug; It's a Feature

2.1. Hacks 13-33

The puzzle that is vision lies in the chasm between the raw sensation gathered by the eye (light
landing on our retinas) and our rich perception of color, objects, motion, shape, entire 3D scenes.
In this chapter, we'll fiddle about with some of the ways the brain makes this possible.
We'll start with an overview of the visual system [Hack #13], the limits of your vision [Hack
#14], and the active nature of visual perception [Hack #15] .
There are constraints in vision we usually don't notice, like the blind spot [Hack #16] and the 90
minutes of blindness we experience every day as vision deactivates while our pupils jump around
[Hack #17] . We'll have a look at both these and also at some of the shortcuts and tricks visual
processing uses to make our lives easier: assuming the sun is overhead [Hack #20] and [Hack
#21], jumping out of the way of rapidly expanding dark shapes [Hack #32] (a handy shortcut for
faster processing if you need to dodge quickly), and tricks like the use of noisy neurons [Hack
#33] to extract signal out of visual noise.
Along the way, we'll take in how we perceive depth [Hack #22] and [Hack #24], and motion
[Hack #25] and [Hack #29]. (That's both the correct and false perception of motion, by the way.)
We'll finish off with a little optical illusion called the Rotating Snakes Illusion [Hack #30] that
has all of us fooled. After all, sometimes it's fun to be duped.

Hack 13. Understand Visual Processing

The visual system is a complex network of modules and pathways, all specializing in different
tasks to contribute to our eventual impression of the world.
When we talk about "visual processing," the natural mode of thinking is of a fairly self-contained
process. In this model, the eye would be like a video camera, capturing a sequence of
photographs of whatever the head happens to be looking at at the time and sending these to the
brain to be processed. After "processing" (whatever that might be), the brain would add the
photographs to the rest of the intelligence it has gathered about the world around it and decide
where to turn the head next. And so the routine would begin again. If the brain were a computer,
this neat encapsulation would be how the visual subsystem would probably work.
With that (admittedly, straw man) example in mind, we'll take a tour of vision that shows just
how nonsequential it all really is.
And one need go no further than the very idea of the eyes as passive receptors of photograph-like
images to find the first fault in the straw man. Vision starts with the entire body: we walk around,
and move our eyes and head, to capture depth information [Hack #22] like parallax and more.
Some of these decisions about how to move are made early in visual processing, often before any
object recognition or conscious understanding has come into play.
This pattern of vision as an interactive process, including many feedback loops before processing
has reached conscious perception, is a common one. It's true there's a progression from raw to
processed visual signal, but it's a mixed-up, messy kind of progression. Processing takes time,
and there's a definite incentive for the brain to make use of information as soon as it's been
extracted; there's no time to wait for processing to "complete" before using the extracted
information. All it takes is a rapidly growing dark patch in our visual field to make us flinch
involuntarily [Hack #32], as if something were looming over us. That's an example of an effect
that occurs early in visual processing.
But let's look not at the mechanisms of the early visual system, but how it's used. What are the
endpoints of all this processing? By the time perception reaches consciousness, another world
has been layered on top of it. Instead of seeing colors, shapes, and changes over time (all that's
really available to the eyes), we see whole objects. We see depth, and we have a sense of when
things are moving. Some objects seem to stand out as we pay attention to them, and others
recede into the background. Consciously, we see both the world and the assembled result of the
processing the brain has performed, in order to work around constraints (such as the eyes' blind
spot [Hack #16]) and to give us a head start in reacting with best-guess assumptions. The hacks
in this chapter run the whole production line of visual processing, using visual illusions and
anomalies to point out some detail of how vision works.

But before diving straight into all that, it's useful to have an overview of what's actually meant by
the visual system. We'll start at the eye, see how signals from there go almost directly to the
primary visual cortex on the back of the brain, and from there are distributed in two major
streams. After that, visual information distributes and merges with the general functions of the
cortex itself.
2.2.1. Start at the Retina

In a sense, light landing on the retina (the sensory surface at the back of the eye) is already inside
the brain. The whole central nervous system (the brain and spinal column [Hack #7]) is
contained within a number of membranes, the outermost of which is called the dura mater. The
white of your eye, the surface that protects the eye itself, is a continuation of this membrane,
meaning the eye is inside the same sac. It's as if two parts of your brain had decided to bulge out
of your head and become your eyes, but without becoming separate organs.
The retina is a surface of cells at the back of your eye, containing a layer of photoreceptors, cells
that detect light and convert it to electrical signals. For most of the eye, signals are aggregated: a
hundred photoreceptors will pass their signal on to a single cell further along in the chain. In the
center of the eye, a place called the fovea, there is no such signal compression. (The population
density of photoreceptors changes considerably across the retina [Hack #14] .) The resolution at
the fovea is as high as it can be, with cells packed in, and the uncompressed signal dispatched,
along with all the other information from other cells, down the optic nerve. The optic nerve is a
bundle of projections from the neurons that sit behind the photoreceptors in the retina, carrying
electrical information toward the brain, the path of information out of the eye. The size of the
optic nerve is such that it creates a hole in our field of vision, as photoreceptors can't sit over the
spot where it quits the eyeball (that's what's referred to as the blind spot [Hack #16] ).
2.2.2. Behind the Eyes

Just behind the eyes, in the middle, the optic nerves from each eye meet, split, and recombine in
a new fashion, at the optic chiasm. Both the right halves of the two retinas are dispatched to the
left of the brain and vice versa (from here on, the two hemispheres of the brain are mirror images
of each other). It seems a little odd to divide processing directly down the center of the visual
field, rather than by eye, but this allows a single side of the brain to compare the same scene as
observed by both eyes, which it needs to get access to depth information.
The route plan now is a dash from the optic chiasm right to the back of the brain, to reach the
visual cortex, which is where the real work starts happening. Along the way, there's a single pit
stop at a small region buried deep within the brain called the lateral geniculate nucleus, or LGN
(there's one of these in each hemisphere, of course).
Already, this is where it gets a little messy. Not every signal that passes
through the optic chiasm goes to the visual cortex. Some go to the superior
colliculus, which is like an emergency visual system. Sitting in the
midbrain, it helps with decisions on head and eye orienting. The midbrain is
an evolutionarily ancient part of the brain, involved with more basic

responses than the cortex and forebrain, which are both better developed in
humans. (See [Hack #7] for a quick tour.) So it looks as if this region is all
low-level functioning. But also, confusingly, the superior colliculus
influences high-level functions, as when it suddenly pushes urgent visual
signals into conscious awareness [Hack #37] .

Actually, the LGN isn't a simple relay station. It deals almost entirely with optical information,
all 1.5 million cells of it. But it also takes input from areas of the brain that deal with what you're
paying attention to, as well as from the cortex in general, and mixes that in too. Before visual
features have even been extracted from the raw visual information, sophisticated input from
elsewhere is being added; we're not really sure what's happening here.
There's another division of the visual signal here, too. The LGN has processing pathways for two
separate signals: coarse, low-resolution data (lacking in color) goes into the magnocellular
pathway. High-resolution information goes along the parvocellular pathway. Although there are
many subsequent crossovers, this division remains throughout the visual system.
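You can get a feel for the two-pathway split with a rough sketch in code. This is an illustration of the general idea only (the function name and the 2x2 averaging are my own simplifications, not how the LGN actually computes): the magnocellular side keeps a coarse, colorless luminance signal, while the parvocellular side keeps the full-resolution color detail.

```python
# Rough sketch (invented simplification) of the LGN's two-pathway split:
# magnocellular = coarse, colorless luminance; parvocellular = fine color.

def magno_parvo_split(image):
    """image: 2D list of (r, g, b) tuples with even dimensions."""
    # Parvocellular pathway: fine detail, color preserved -- pass through.
    parvo = image
    # Magnocellular pathway: average luminance over 2x2 blocks --
    # low resolution, color discarded.
    magno = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = [image[y + dy][x + dx] for dy in (0, 1) for dx in (0, 1)]
            luma = sum((r + g + b) / 3 for r, g, b in block) / 4
            row.append(round(luma))
        magno.append(row)
    return magno, parvo

img = [[(255, 0, 0), (255, 0, 0)],
       [(0, 0, 255), (0, 0, 255)]]
magno, parvo = magno_parvo_split(img)
print(magno)   # one coarse luminance value stands in for the whole block
```

Notice that the red and blue halves of the toy image become indistinguishable in the magno channel: color is exactly the information that pathway throws away.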
2.2.3. Enter the Visual Cortex

From the LGN, the signals are sent directly to the visual cortex. At the lower back of the
cerebrum (so about a third of the way up your brain, on the back of your head, and toward the
middle) is an area of the cortex called either the striate or primary visual cortex. It's called
"striate" simply because it contains a dark stripe when closely examined.
Why the stripes? The primary visual cortex is literally six layers of cells, with a thicker and
subdivided layer four where the two different pathways from the LGN land. These projections
from LGN create the dark band that gives the striate cortex its name. As visual information
moves through this region, cells in all six layers play a role in extracting different features. It's
way more complex than the LGN: the striate contains about 200 million cells.
The first batch of processing takes place in a module called V1. V1 holds a map of the retina as
source material, which looks more or less like the area of the eye it's dealing with, only distorted.
The part of the map that represents the fovea (the high-resolution center of the eye) is all out of
proportion because of the number of cells dedicated to it. It's as large as the rest of the map put together.
Physically standing on top of this map are what are called hypercolumns. A hypercolumn is a
stack of cells performing processing that sits on top of an individual location and extracts basic
information. So some neurons will become active when they see a particular color, others when
they see a line segment at a particular angle, and other more complex ones when they see lines at
certain angles moving in particular directions. This first map and its associated hypercolumns
constitute the area V1 (V for "vision"); it performs really simple feature extraction.
The subsequent visual processing areas named V2 and V3 (again, V for "vision," the number just
denotes order), also in the visual cortex, are similar. Information gets bumped from V1 to V2 by
dumping it into V2's own map, which acts as the center for its batch of processing. V3 follows
the same pattern: at the end of each stage, the map is recombined and passed on.
2.2.4. "What" and "Where" Processing Streams

So far visual processing has been mostly linear. There is feedback (the LGN gets information
from elsewhere in the cortex, for example) and there are crossovers, but mostly the coarse and fine visual
pathways have been processed separately and there's been a reasonably steady progression from
the eye to the primary visual cortex.
From V3, visual information is sent to dozens of areas all over the cortex. These modules send
information to one another and draw from and feed other areas. It stops being a production line
and turns into a big construction site, with many areas extracting and associating different
features, all simultaneously.
There's still a broad distinction between the two pathways though. The coarse visual information,
the magnocellular pathway, flows up to the top of the head. It's called the dorsal stream, or,
more memorably, the "where" stream. From here on, there are modules to spot motion and to
look for broad features.
The fine detail of vision from the parvocellular pathway comes out of the primary visual cortex
and flows down the ventral stream, the "what" stream. The destination for this stream is the
inferior temporal lobe, the underside of the cerebrum, above and behind the eyes.
As the name suggests, the "what" stream is all about object recognition. On the way to the
temporal lobe, there's a stop-off for a little further processing at a unit called the lateral occipital
complex (LOC). What happens here is key to what'll happen at the final destination points of the
"what" stream. The LOC looks for similarity in color and orientation and groups parts of the
visual map together into objects, separating them from the background.
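A drastically simplified sketch of that kind of grouping, invented here for illustration (the function name `group_objects` and the single-color grouping rule are my own, and the LOC's real criteria are much richer): pixels that share a value and touch each other get bound into one object, while the background is left ungrouped.

```python
# Very simplified sketch (invented illustration) of LOC-style grouping:
# touching pixels with the same value are bound into one object;
# the background value is left ungrouped.

def group_objects(grid, background=0):
    """Label connected regions of identical non-background values."""
    h, w = len(grid), len(grid[0])
    labels = [[None] * w for _ in range(h)]
    objects = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] == background or labels[y][x] is not None:
                continue
            # Flood fill outward from this seed pixel.
            color, stack, members = grid[y][x], [(y, x)], []
            labels[y][x] = len(objects)
            while stack:
                cy, cx = stack.pop()
                members.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and grid[ny][nx] == color and labels[ny][nx] is None):
                        labels[ny][nx] = len(objects)
                        stack.append((ny, nx))
            objects.append(members)
    return objects

scene = [[0, 1, 1, 0],
         [0, 1, 0, 2],
         [0, 0, 0, 2]]
print(len(group_objects(scene)))  # two objects pop out of the background
```

The output of a stage like this is no longer a map of pixels but a short list of candidate objects: exactly the kind of pre-digested material the later "what" stream areas need before they can ask "is this a face?"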
Later on, these objects will be recognized as faces or whatever else. This illustrates a common
method: the visual information is processed to look for features. When found, information about
those features is added to the pool of data, and the whole lot is sent on.
2.2.5. Processing with Built-in Assumptions

The wiring diagram for all the subsequent motion detection and object recognition modules is
enormously complex. After basic feature extraction, there's still number judgment, following
moving objects, and spotting biological motion [Hack #77] to be done. At a certain point, the