“Here’s a case where a very specific part of the brain was damaged,” explains Poeppel. “He changed from a churchgoing, very Republicanesque kind of guy to essentially a frat boy: a capricious, hypersexual person. Why is that result important? Because it’s about functional localization. It really said when you do something to the brain, you affect the mind in a particular way, not in an all-out way. And that remains one of the main things in neuroscience and neuropsychology. People want to know where stuff is.”
It was a little more than a decade after Gage’s accident that Pierre Paul Broca, a French physician who wanted to know where stuff was, or more precisely whether it mattered, met his own famous case study. The patient, whose name was Monsieur Leborgne but who was known as Tan, had progressively lost the ability to speak until the only word he could say was “tan.” Yet he could still comprehend language. Tan died shortly after Broca first examined him. After performing an autopsy and discovering a lesion on the left side of the brain, Broca concluded that the affected area was responsible for speech production, thereby becoming the first scientist to clearly label the function of a particular part of the brain. A little over a decade later, German physician Carl Wernicke found a lesion in a different area when he studied the brain of a man who could speak but made no sense. The conclusion was that this man couldn’t comprehend language because of his lesion and therefore that this area governed speech perception. From these two patients—one with an output disorder, the other with an input disorder—a model was born.
“The idea is intuitively very pleasing,” says Poeppel. “What do we know about communication? We say things and we hear things. So there must be a production chunk of the brain and a comprehension chunk of the brain.”
Poeppel turns to his computer screen.
“Here’s vision.”
He pulls up a map of a brain’s visual system—actually a macaque monkey’s visual system, which is very similar to that of a human. Multicolored, multilayered, a jumble of boxes and interconnecting lines, the map is nearly as complex as a wiring diagram for a silicon chip.
“Here’s hearing.”
Up pops a map of the auditory system. A little less complicated, it nonetheless has no fewer than a dozen stages and several layers and calls to mind a map of the New York City subway.
“And here’s language.”
Up comes the familiar image, the same one that was in so many of the offending textbooks he has pulled off the shelf, showing the left hemisphere of the brain with a circle toward the front marked as Broca’s area and a circle toward the back marked as Wernicke’s area, with some shading in between and above.
“Really?” says Poeppel. “You think language is reducible to just production and perception. That seems wildly optimistic. It’s not plausible.”
The old model persisted because those who thought about neurology didn’t usually talk to those who thought about the nooks and crannies of language and how it worked. A reason to have that conversation—to go to your colleague two buildings down, as Poeppel puts it—is that perhaps we can look to the brain to learn something about how language works, but it’s also possible that we can use language to learn something about how the brain works.
Poeppel’s goal is to create “an inventory of the mind.” His approach to studying language is summed up in a photograph he likes to show at conferences of a dismantled car with all its components neatly lined up on the floor. “What we really have to do is think like a bunch of guys in a garage,” he says. “Our job is to take the thing apart and figure out: What are the parts?”
He began by coming up with a new model. Poeppel and his colleague Greg Hickok folded together all that had been learned in the previous hundred years through old-fashioned studies of deficits and lesions and through newfangled imaging research. “I say to you ‘cat.’ What happens?” asks Poeppel. “We now know… You begin by analyzing the sound you heard, then you extract from that something about speech attributes, you translate it into a kind of code, you recognize the word by looking it up in the dictionary in your head, you do a little brain networking and put things together, you say the word.”
First published in 2000 and then refined, their model applies an existing idea about brain organization to language. It is now widely accepted that in both vision and hearing, the brain divides the labor required to make sense of incoming information into two streams that flow through separate networks. Because those networks flow along the back and belly of the brain, they are called dorsal and ventral streams. Imagine routing the electrical current in a house through both the basement and the attic, with one circuit powering the lights and the other the appliances. In vision, the lower, ventral stream handles the details of shape and color necessary for object recognition and is therefore known as the “what” pathway. The upper, dorsal stream is the “where/how” pathway, which helps us find objects in space and guides our movement. In hearing, there’s less agreement on the role of the two streams, but one argument is that they concern identifying sounds versus locating them.
Hickok and Poeppel’s model suggests that this same basic dual-stream principle also governs neuronal responses for language and may even be a basic rule of brain physiology. “There are two things the brain needs to do to represent speech information in the brain,” Hickok explains. “It has to understand the words it’s hearing and has to be able to reproduce them. Those are two different computational paths.” In their view, the ventral stream is where sound is mapped onto meaning. The dorsal stream, running along the top of the brain, handles articulation or motor processes. Graphically, they represent their model as seven boxes connected by arrows moving in both directions, since information feeds forward and backward. Labeling each box a network or interface—articulatory, sensorimotor, phonological, sound analysis, lexical, combinatorial, and conceptual—emphasizes the interconnectedness of the processing, the fact that these are circuits we’re talking about, not single locations with single functions. Some change jobs over time. The spot where sensory information combines with motor processing helps children learn language but is also the area where adults maintain the ability to learn new vocabulary. That same area requires constant stimulation to do its job: It’s not only children who use hearing to develop language; adults who lose their hearing eventually suffer a decline in the clarity of their speech if they do not use hearing aids or cochlear implants.
The model also challenges the conventional belief that language processing is concentrated on the left side of the brain. “You start reading and thinking about this more and you think: Can that be right?” says Poeppel. When he and Hickok began looking at images from PET and fMRI, they saw a very different pattern from lesion data alone. “The textbook is telling me I’m supposed to find this blob over here, but every time I look I find the blobs on both sides. What’s up with that?” says Poeppel. “Now it’s actually the textbook model that the comprehension system is absolutely bilateral. It’s the production system that seems more lateralized. You have to look more fine-grained at which computations, which operations, what exactly about the language system is lateralized.”
It was a bold step for Poeppel and Hickok to take on such ingrained thinking. Although it initially came in for attack, their model has gained stature as pretty much the best idea going. “At least it’s the most cited,” says Poeppel. “It may be wrong… No, it’s absolutely for sure wrong, because how could it possibly not be way more complicated than a bunch of colored boxes?” But at the moment, the model has achieved the enviable position of being the one up-and-coming neuroscientists learn and then try to take apart. I even found it in a new textbook.
• • •
Notice that this model does not just locate “language,” which, despite how we often talk about it, is not one monolithic thing. Nor does it limit itself to perception and production. Instead, the model’s organization parallels the organization of language, which consists of elements like sound, meaning, and grammar that work together to create what we know as English or Chinese. Each of those linguistic tasks, or subsystems, turns out to involve a different network of neurons.
Phonology is the sound structure of language, though ASL is now thought to have a phonology, too, one that consists of handshapes, movements, and orientation. “During the first year of life, what you acquire is your phonetic repertoire,” says Poeppel, referring to the forty phonemes often identified in English. “If I’m English, I have this many vowels; if I’m Swedish, I have nearly twice as many; that is my inventory to work with—no more, no less. That’s purely experience-dependent, because you don’t know what language you’re going to grow up with.” One thing Poeppel and Hickok observed was that the ability to comprehend words and the ability to perform more basic tasks like identifying phonemes or recognizing rhyming patterns seemed to be separate. This was an important observation because it meant that phonemic discrimination might not be required to understand the meaning of words. Their model puts the phonological network roughly where Wernicke’s area is, in the posterior portion of the superior temporal gyrus. As for Broca’s area, which is part of the inferior frontal gyrus, they and many others now think of it as a region that handles as many as twelve different language-related processes.
Morphology refers to the fact that words have internal structure—that “unbelievable” is made up of “un-” and “believe” and “-able,” and maybe “believe” has some parts as well. It turns out there’s a heated academic debate between psychologists and linguists about whether morphology even exists—“excruciatingly boring for nonexperts, but interesting for those of us who nerd out about these things,” says Poeppel. Morphology is likely processed, says Poeppel, “in the interplay between the temporal and inferior frontal regions.”
The same is true of syntax, the grammar of language—structure at the level of a sentence. Here there is no argument: You have to have sentence structure for language to make sense. It’s what tells us who did what to whom, whether man bites dog or dog bites man.
Semantics concerns the meaning of words, both individually and in the context of a sentence. The two are not the same. Even if you know the different meanings of “easy” and “eager,” the brain has to do some extra work to make sense of the difference between the following two sentences, whose structure is seemingly identical:
John is easy to please.
John is eager to please.
They differ by exactly one word, by one tiny sound sequence, but they mean something entirely different for John. He’s either the pleaser or the pleasee. A fluent user of language understands that effortlessly. Lexical ability, too, is located in the ventral stream, specifically in the posterior middle temporal gyrus and the posterior inferior temporal sulcus.
Prosody is the rhythm of speech, the contour of our intonations, and to Poeppel its importance is clear. “How is it that you can distinguish subtle inflections at the end of the sentence and know that it’s a question?” says Poeppel. “Syntax doesn’t tell you that. This small change in acoustics changes the interpretation completely.” It is also a cue for prominence, according to Janet Werker, a developmental psychologist at the University of British Columbia and colleague of Athena Vouloumanos. In seven-month-old bilingual babies, Werker found that the infants were using prosody (pitch, duration, and loudness) to solve the challenging task of learning both English and Japanese, languages that do not follow the same word order (“eat an apple” in English versus “apple eat” in Japanese). Babies cleverly listen for where the stress is placed in a sentence. Prosody, it seems, plays a part in how you break the sound stream into usable units. That process begins in utero, where a baby can’t hear all the specific sounds of his mother’s voice but can follow the rhythm of her language. Its location in the brain is similar to that of phonology.
Discourse refers to units of language larger than a sentence. A favorite of the deconstructionists of the 1980s, it’s a relatively new field of study that looks at writing, conversation, and other extended stretches of language. It is discourse, I realize, that is the aim of a literate, educated person. “The assumption is that you get that for free,” says Poeppel, meaning that if you learn all the more basic parts of language, you will achieve discourse as well. I’m not so sure about that, since one of the emerging areas of concern for children with cochlear implants is how their language develops beyond elementary school, when it needs to take on sophistication and subtlety as they read to learn and begin to write essays. In the brain, discourse would fall into what Poeppel and Hickok call the widely distributed “conceptual network,” meaning that we bring a large share of our neural resources to bear on it.
For all of these language systems, the scientific focus has been on mapping, on figuring out “where stuff is.” But that is not enough, says Poeppel. “We have more blobs, better blobs. But our yearning should be bigger. Namely, what’s an explanation for what’s going on there? What is the mechanism? A mechanism is some kind of account or explanation for how a set of elements interacts to generate something. That’s not what we have.”
In the search for a mechanism, linguists and neuroscientists are asking if there is a hierarchy to these subsystems of language. They are searching for what Poeppel calls the “primitives,” the primary colors of language with which everything else is constructed. For my purposes, thinking about language, and thinking about Alex, in terms of this new list of parts, I see how each of them contributes to the next. Phonology is an obvious problem for a child who doesn’t hear well, but morphology figured in many of Alex’s progress reports. If he couldn’t hear or say parts of words that had meaning, like the “-ed” on a past-tense verb or the “s” of a plural, he wouldn’t always understand or be understood. Semantics showed up in the constant effort to expand his vocabulary. Prosody had previously been drowned out for me by the drumbeat of concern over perception and production, but I could now see how helpful it is, integral as it is to making sense of the stream of language. “One of the absolutely elemental things you have to do for comprehending language is you have to segment it,” says Poeppel. “If you can’t segment it, you can’t actually look up the words at all. If you can’t look up the words, your syntax isn’t going to be all that great, either.”
“So how can we use this information to help a child?” I want to know, even though I recognize that Poeppel spends his time in the lab, not the classroom.
“The syllable has practical ramifications,” he says. “From the get-go, what we would want to give a kid are these cues in the signal. The syllable is a segmentation cue that provides rhythmic information and that’s easy to remember. That’s super-useful. The information comes prepackaged.” And the brain apparently makes use of that fact. Various studies have shown that the brain gives a bit of extra effort to syllables that occur at the beginning of words, something researchers call the “word onset effect.” Poeppel has also shown that the brain resets itself to track the syllable rate of a speaker.
How can we help children train that ability and lay down those circuits? One way is to let them explore the rhythms of language. Happily, that is something that comes naturally or that was instinctively built into many children’s stories and songs. Usha Goswami, an educational neuroscientist at Cambridge University, studies whether dyslexia may be related to difficulties in sound processing. Her research suggests that a return to some old-fashioned wordplay would be useful long before children learn to read. “Nursery rhymes are perfect little metrical poems,” she says. “We know children love them, and they enjoy things like singing in time to music.” Engaging in such activities seems to help develop the necessary phonological and prosodic networks. Her laboratory is also studying the benefits of having five-year-olds learn poetry by heart.
Goswami even got me thinking about the neurobiology of The Cat in the Hat. “Repetition matters, too,” she says. “That’s true of all learning by the brain, the more repetition the better. Dr. Seuss hasn’t just got rhyme, he’s also got repeating phrases. The child who’s good with syllables will be quicker to get onset rhymes and will be quicker to acquire phonemes once they acquire abstract units like letters. That’s why developmentally you want to start with amplifying the syllable level by bringing in all this rhyme. That should put the child in a better position to begin reading.”
As soon as I got home, of course, I read to Alex about the cat who came to play on that cold, cold, wet day.
20
A ROAD MAP OF PLASTICITY
In one corner of a classroom at a Head Start facility in Eugene, Oregon, three preschool children sit at a small table trying to color a frog green without going outside of the lines. It’s not easy for them and they have to concentrate. In the opposite corner, another group of children are playing with balloons, batting them back and forth and trying to keep them aloft. These three, with the balloons, have been assigned the role of Dr. Distractor, and their job is to try to get the children at the table to stop paying attention to their work. Over the next eight weeks, every Tuesday night, the two groups will swap roles regularly, and whoever is in the distracting group will move closer and closer to the table until, by the last session, they are practically on top of those doing the drawing. To maintain concentration, the poor four-year-olds at the table have to marshal all their resources, looking only at their papers, thinking hard about their work. At one point, a little girl raises her hand to physically hide the other children and their balloons from her view.
The entire process is an exercise in applied neuroplasticity, and it represents a very modern way of thinking about how to help children.
The old way can be summed up in a story that Mike Merzenich, who made his name studying plasticity, told me. In the 1960s, around the time that David Hubel and Torsten Wiesel were conducting the research on kittens that would establish the concept of the critical period, one of Merzenich’s relatives, who taught elementary school in Wisconsin, was honored as the national teacher of the year. The family gathered to celebrate her accomplishment and she gave a small speech. “She stands up and says, ‘My secret was that I figured out, on the basis of testing, what children were really worth my attention and I gave them everything.’” Merzenich was so bothered by that statement that he’s never forgotten it. “What do you think the kids that weren’t worth her attention got?” he asks. “Nothing.”