
  This seminal work in vision established the idea of “critical periods” in brain development, called sensitive periods today. Synaptic connections in the brain, it seemed, are best created within a certain window of time, but if an area of the brain is deprived of stimulation beyond that sensitive period, it will reorganize and get used for something else.

  • • •

  Naturally, I wondered what happens in deafness. Until the advent of cochlear implants, it wasn’t possible to take away and then restore hearing as Hubel and Wiesel did with vision in cats. Even with earplugs, there is some hearing, as anyone who has tried to nap near a construction site knows. Thanks in large part to Hubel and Wiesel, the visual cortex, located at the back of the brain, remained the best understood part of the brain for years—it still is, really. The secrets of the auditory system, which is concentrated in the temporal lobe, have taken more time to unlock.

  Helen Neville was one of the first neuroscientists to demonstrate how the brain changes in deafness. Neville grew up in Canada and didn’t start out wanting to be a scientist. A product of the 1960s, she wanted to be a revolutionary and bring about social justice. “I was a rabble-rouser. I didn’t want to go to university,” she told me when we met in her office at the University of Oregon. She was in the middle of finishing several important grant applications, so we were pressed for time, and she spoke in rapid-fire sentences; I suspected she might talk the same way on a lazy Sunday afternoon or over a glass of wine. Her passionate, salty personality—four-letter words and phrases like “crazy-ass religion” were sprinkled through her conversation—was evident. “I just wanted to change the world,” she said. She wasn’t enough of a rebel to buck her family’s insistence that she get an education, however, so she stayed in school. Naturally curious, she took a wide range of courses, including one in something called neurophysiology, an exploration of the workings of the brain. “I found out that ideas were chemical cascades in the brain,” she says. “That’s when I had this epiphany. What?! That’s what ideas are? I need to know about that.”

  After college, she was still a rabble-rouser. “I went to Montreal to try to start the revolution. But, you know, I couldn’t start the revolution. Or get a job.” When she found out she could get paid to go to graduate school, she did that instead. “I realized that if I really did think that I could change ideas and ultimately the world, then I had to know how changeable the brain was. Ideas are your brain, your mind is your brain. I needed to know as much as I could about the changeability of your mind and your brain.” She would bring the revolution to the laboratory.

  At the time, “everybody thought the brain was determined and fixed and organized at or before birth,” says Neville. Even Hubel and Wiesel, who had shown such dramatic alterations in the brains of kittens, believed that they had found the limits of change. “Of course it wouldn’t change with experience” is how Neville recounts the thinking. “How could you know the brain if it was changing every time you turned around?” She put her mind to the problem of how to show that the reigning dogma was wrong. Her solution: to look at the brains of deaf people. Says Neville: “They’d had such extremely different experiences that if their brains weren’t different, maybe what all those guys were saying was right.”

  Although it was popularly imagined that deaf people’s sight improved and blind people’s hearing was sharper, scientists had been disappointed in their search for evidence of any actual compensatory changes. Studies of the blind had found that they could not necessarily hear softer sounds than sighted people could. And the deaf had not been shown to be any better at perceiving contrast, seeing in dimmer light, or perceiving subtle motion in slow-moving objects. Neville wondered if her predecessors had been looking in the wrong places, measuring the wrong skills. It was 1983 by the time she started on this work. The technological innovations such as fMRI that would transform neuroscience—and Neville’s own work—were still another decade away. So she used the most advanced technique available: electroencephalography, EEG, a measure of evoked potentials on the scalp. She calls it “eavesdropping on the brain’s electrical conversation.” A potential—the metaphorical significance of the term is appealing—is a measurement of the electrical activity in the brain or roughly how many neurons fire and when. An EEG generates a line that moves up and down representing brain wave activity. Electrodes are glued all over a subject’s head and scalp (today there are complicated net caps with all the electrodes sewn in to make the job easier), and the subject is asked to look or listen. The waveforms that are generated indicate how quickly the brain responded to whatever the subject heard or saw.

  Using a group of subjects who had been deaf from birth, and a group of hearing controls, Neville set up an experiment in which she told the participants to look straight ahead, and then she repeatedly flashed a light off to the side, where it could be seen only in the peripheral vision. Electrical responses to the flashes of light were two to three times higher in deaf brains than in hearing brains. Even more intriguing was the location of the response. It was not primarily over the visual cortex, “which is where any well-behaved brain should be registering flashes of light,” wrote Sharon Begley when she described the experiment in her book, Train Your Mind, Change Your Brain. Instead, the responses were over the auditory cortex, which conventional wisdom said should be sitting dormant in a deaf person who heard no sound.

  Captivated, Neville launched a series of studies trying to pin down what brain functions might have been enhanced in both deaf and blind subjects in examples of what researchers call “compensatory plasticity.” Those studies, which some of her postdoctoral fellows have gone on to pursue in their own laboratories, showed that for the deaf, enhancement came in two areas: peripheral vision and motion processing. In one study, for example, Neville had subjects watch a white square on a computer screen and try to detect which way it was moving. If the square was in the central field of vision, there was no difference between the deaf and the hearing. But in the periphery, the deaf were faster and more accurate than the hearing subjects. “All of the early studies had just studied the center,” says Neville.

  In addition, those previous studies included people who had lost their hearing to meningitis or encephalitis. “They were dealing already with an altered brain,” she says dismissively. To avoid confounding variables like that, Neville tested groups of subjects, all of whom had deaf parents. Some subjects were deaf from birth. Others were their hearing siblings, who learned ASL as their first language but had no auditory deprivation. “You want to know what’s due to auditory deprivation and what’s due to visual and spatial language,” she explains. The brains of those hearing native signers did not show the same changes as their deaf brothers’ and sisters’. “That’s how we sorted it out,” says Neville, beaming. “It’s beautiful, so beautiful!”

  She also wanted to explore in more detail the question of what parts of the brain deaf and blind people were using, a question that got easier to answer with the advent of functional MRI in the early 1990s. While EEGs are useful for indicating the timing of a response, fMRI does a better job of pinpointing its location. MRI, without the “functional” prefix, uses a magnetic field to take pictures inside the body and is used diagnostically by clinicians for all manner of injuries and illnesses. fMRI is different. It produces what is essentially an afterimage of neural activity by showing contrasting areas of blood flow in the brain. By watching how blood moves around the brain—or, more precisely, how the level of oxygen in the blood rises or falls—researchers can tell which areas of the brain are active during particular activities. When Neville looked at the brains of deaf and blind subjects with fMRI, it suddenly looked very much as if structure did not necessarily determine function. She found that signing deaf adults were using parts of the left temporal lobe, usually given over to spoken language in hearing people, for visual processing. They were still using that area of the brain for language, but it was a manual, visual language. In blind subjects, the reverse was true: They had enhanced auditory attention processing in certain areas of the visual cortex.

  For a signing deaf person, capitalizing on unused portions of the brain makes sense. It seems likely that we start out with redundant connections between auditory and visual areas, and that they can be tweaked by experience, but that the overlap gradually decreases. If you want to use a cochlear implant, however, you need to maximize the auditory cortex’s original purpose: hearing. What were the critical periods for the auditory cortex? After what point was it forever reorganized? Or was it? These were the questions Helen Neville proposed and other researchers set out to answer in the late 1990s, once they had enough subjects with cochlear implants to study the issue.

  • • •

  In the Department of Speech, Language and Hearing Sciences at the University of Colorado at Boulder, auditory neuroscientist Anu Sharma studies brain development in children with cochlear implants. It’s work she began at Arizona State University with speech and hearing scientist Michael Dorman, who started working on cochlear implants in the 1980s. In order to assess how well a brain is making use of the sensory input it receives, Sharma focuses on how long it takes the brain to react to sound. The speed of that reaction is a measurement of synaptic development. Sharma, too, uses EEG, looking particularly at what is known as the cortical auditory evoked potential (CAEP for short), the response to sound beyond the brain stem in the auditory cortex.

  Studying the waveform that is generated by the test, Sharma looks for the first big rise and fall of the line: The first peak is known as P1 (first positivity), and the adjacent trough is N1 (first negativity). The slope of the line connecting those two points roughly indicates the speed at which two parts of the brain—primary cortex and higher-order cortex—are communicating. Sitting in her office in Boulder with a view of the Rocky Mountains behind her, Sharma demonstrates by holding out her fist to represent the primary cortex, then wrapping her other hand over the top of her fist to show the higher-order cortex (the outer layer of the brain). “They need to connect; they need to talk,” she says.

  In typically developing hearing children, that connection in the brain starts slowly and then speeds up. Latency is the time it takes in milliseconds for the brain to react to a sound. In a newborn the latency of the P1 response might be three hundred milliseconds; in a three-year-old, it has dropped to one hundred twenty-five milliseconds. As a child grows, that time continues to shorten. With age and experience, the brain becomes more efficient. By adulthood, P1 latency is down to sixty milliseconds.
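
  As an illustration of how such a latency might be read off an averaged waveform, here is a small sketch (not Sharma’s actual pipeline: the synthetic waveform, sampling rate, and p1_latency_ms helper are invented for the example, and the age-typical values are simply the rough figures quoted above). It locates the first prominent positive peak in a response and reports its latency in milliseconds.

```python
# Illustrative sketch: find P1 as the first prominent positive peak in an
# averaged cortical response and compare its latency to rough age-typical values.
import numpy as np
from scipy.signal import find_peaks

# Approximate P1 latencies mentioned in the text, in milliseconds (hypothetical lookup).
TYPICAL_P1_MS = {"newborn": 300, "three-year-old": 125, "adult": 60}

def p1_latency_ms(waveform: np.ndarray, fs: int) -> float:
    """Return the latency of the first prominent positive peak, in milliseconds."""
    peaks, _ = find_peaks(waveform, prominence=waveform.std())
    if peaks.size == 0:
        raise ValueError("no prominent positive peak found")
    return peaks[0] / fs * 1000

# Synthetic example: a positive bump peaking near 120 ms, plus a little noise.
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(1)
waveform = np.exp(-((t - 0.120) ** 2) / (2 * 0.015 ** 2)) + rng.normal(scale=0.02, size=t.size)

latency = p1_latency_ms(waveform, fs)
print(f"P1 latency ~{latency:.0f} ms "
      f"(typical three-year-old ~{TYPICAL_P1_MS['three-year-old']} ms)")
```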

  In children with cochlear implants, Sharma found that being implanted early makes all the difference in terms of how much help the child will get from the implant. The peak of synaptic activity in the auditory cortex comes when a child is three and a half, says Sharma. Experience prunes and refines synapses, allowing learning to occur. Those that are getting used stay; those that are not are pruned away. “Hearing helps to prune them,” she explains. “If a child gets the implant under the age of three and a half, that part of the brain looks quite similar to that of a normal hearing child. If they wait until they are seven or older [or had been deaf for more than seven years], the hearing part of the brain of the implanted child never looks the same.” Between the ages of three and seven, the studies showed a range of responses, but earlier was almost always better. Sharma and her colleagues had identified sensitive periods for hearing and brain development.

  “We looked more closely and found that what happens after the sensitive period closes is the brain gets reorganized,” she says. “That’s important real estate. If sound is not going in, it’s not going to sit there waiting forever.” In some of the secondary areas, for instance, vision and touch take over. “That’s how the brain changes in deafness.”

  No wonder Dr. Parisier was in a hurry.

  12

  CRITICAL BANDWIDTHS

  On a sunny spring day in 1966, Graeme Clark, a young Australian ear, nose, and throat surgeon, had a little extra time for lunch and decided to eat outside on a park bench. He was shuttling between jobs at various hospitals in Melbourne, so he carried a lot with him, including a backlog of scientific and medical journals. He pulled one out and found a report by Blair Simmons on achieving some hearing sensation, though no speech, from the six-channel device his team had implanted in Anthony Vierra. Just like that, Clark knew what he was going to do with the rest of his life. “That lit that research fire in the belly,” he says. “It all became clear.”

  We were sitting around a table at the Royal Victorian Eye and Ear Hospital in Melbourne. For some thirty-five years until his retirement in 2004, this had been Clark’s office. It was here, about as far from the medical centers of Europe and America as it is physically possible to be, that Clark assembled a team of young researchers to build a viable multichannel cochlear implant. Research groups at the University of California, San Francisco (UCSF), and at the University of Utah as well as in Paris and Vienna were pursuing the same goal. (Bill House was still using his single-channel device and Blair Simmons, who died in 1998, got less involved over time.) At the start, the idea that Clark’s group could achieve such an ambitious goal was almost laughable. They had no money, little prestige, and, at times, no patients. Today, many of those same audiologists, engineers, and scientists are still in Melbourne, running the various spin-off clinical and research organizations that grew out of Clark’s efforts.

  A thoughtful, deeply religious man, Clark shares the formality of his generation. In a country where men have been wearing shorts to work for decades, he is often in a jacket and tie, as he was when we met at the hospital. Earlier in his career, at a dinner in his honor, a colleague joked that he had wanted to entertain the crowd with tales of Clark’s youthful indiscretions but hadn’t been able to find any. From a young age, Graeme Clark was apparently remarkably single-minded.

  He was born in 1935 in Camden, then a small country town some forty miles from Sydney. His father, Colin, was a pharmacist and ran a chemist shop in town. Around the age of twenty, Colin Clark began to lose his hearing, and it got progressively worse throughout the rest of his life. There could be uncomfortable consequences from the combination of hearing loss and a pharmacy. “People had to speak up to say what they wanted,” remembers Graeme Clark. “It was embarrassing when someone came in to ask for women’s products or men’s products, contraceptives or the like.” As a boy of about ten helping out in the shop, Graeme would have to find the requested item. “I didn’t know much about the facts of life, but I knew where [things were in the shop],” he says with a laugh. Later in life, Clark asked his father about his experience of hearing loss. “He said it was terrible,” Clark remembers. “It was a struggle to hear people at work and social environments with background noise. He would be tired out through having to listen so hard and make conversation.” Clark’s childhood sense of lost opportunity was strong. “[My father] was quite a sociable person. He could have been president of the [local social] club, but he couldn’t manage those things. He’d sit in the corner sometimes of the lounge room when we’d have visitors, and they would think he was dumb.”

  Life in his father’s chemist shop exposed Clark not just to deafness but also to medicine. He got to know the local doctors, and the idea of helping people appealed to him. As early as kindergarten, he told his teacher he wanted “to fix ears” when he grew up. He repeated that to the family’s minister when he was ten. By sixteen, he was studying medicine at the University of Sydney. By twenty-nine, he was a consultant surgeon, still so boyish in appearance a nurse once refused to let him in to visit his own patient.

  But medicine alone wasn’t enough. Clark always nursed an interest in research, believing that a man with both clinical and laboratory experience could be most effective and most aware of what was needed. Blair Simmons’s report gave Clark the research direction he’d been missing, one that aligned perfectly with his childhood mission of “fixing ears.” He knew Bill House had already operated on a few patients as well, but Clark hewed closer to Simmons’s methodical approach. In order to see “what the science would show,” he left his flourishing surgical practice to pursue a PhD in auditory physiology, studying electrical stimulation in the auditory system of cats. “It was a complete gamble,” he says. “Ninety-nine percent of people said it wouldn’t work.” Clark and his wife, Margaret, had two children at the time and would go on to have three more. As a doctoral student, he made so much less than he had as a surgeon that when their old car broke down, they couldn’t replace it. “But those were some of the happiest times in our lives because I wasn’t in the rat race,” he remembers. “And because research was exciting.”

  With his doctorate in hand, Clark got lucky. A position as chair of the Department of Otolaryngology opened up at the University of Melbourne and the Royal Victorian Eye and Ear Hospital. As Clark worked to assemble a team there, he was raising money with one hand—literally shaking a can on street corners at times as part of fund-raising efforts—and doling out the cash carefully to his young staff with the other. Everyone worked from one three-month contract to the next. “I was young and they were younger,” he says. Like Simmons, Clark believed he would need a multidisciplinary team to provide expertise on the considerable variety of jobs required to create this piece of technology. Over the years, he employed engineers, animal researchers, audiologists, computer programmers, speech scientists, and surgeons.