- Lydia Denworth
- I Can Hear You Whisper, Page 16
In another study, one of Merzenich’s collaborators, William Jenkins, taught owl monkeys to reach through their cages and run their fingers along a grooved spinning disk with just the right amount of pressure to keep the disk spinning and their fingers gliding along it. When Jenkins and Merzenich studied the maps of those monkeys’ brains, they found that even a task such as running a finger along a disk—much less dramatic than a severed nerve—had altered the map. The brain area responding to that particular finger was now four times larger than it had been.
Finally, Merzenich understood what he was seeing: that the brain is capable of changing with experience throughout life. It was a finding that changed his career. It also brought considerable controversy. “Initially, the mainstream saw the arguments we made for adult plasticity in monkeys as almost certainly wrong,” he recalls. Torsten Wiesel, whose Nobel Prize had been awarded in part for establishing the critical period, was among the major scientists who said outright that the findings could not be true. It was years before the work was widely accepted.
Right away, however, the work on plasticity prodded Merzenich’s thinking about the development of cochlear implants. Robin Michelson had been persistent. “For a year, this very nice man pestered me and pestered me and pestered me,” remembers Merzenich. “He would come to see me at least once or twice a week and tell me more about the wondrous things he’d seen in his patients and bug me to come help him.” For a year, Merzenich resisted. “Finally, in part to get him off my back, I said okay.” Merzenich agreed to do some basic psychophysical experiments—create some stimuli and apply them—and see what Michelson’s best patient could really hear.
The patient was a woman named Ellen Bories from California’s Central Valley. “A great lady,” says Merzenich fondly. “We started trying to define what she could and couldn’t hear with her cochlear implant, and I was blown away by what she could hear with this bullshit device.” Michelson’s implant, at that point, was “a railroad track electrode,” as Merzenich had taken to calling it dismissively, that delivered only a single channel of information. While Bories could not recognize speech, she could do some impressive things. In the low-frequency range, up to about 1,000 Hz, she could distinguish between the pitches of different vowels—she could hear and repeat “ah, ee, ii, oh, uh.” She could tell whether a speaking voice was male or female. When they played her some music, says Merzenich, “she could actually identify whether it was an oboe or a bassoon. I was just amazed what she could get with one channel.”
Not surprisingly, he now saw the effort in a whole new light. For one thing, “I realized how much it could mean even to have these simple, relatively crude devices,” he says. “She found it very helpful for lipreading and basic communication and was tickled pink about having this.” Secondly, and significantly, Merzenich saw a way forward. At Bell Labs, researchers had never stopped working on the combination of sound and electricity. In the 1960s, they had explored the limits of minimizing the acoustic signal. “They took the complex signal of voice and reduced it and reduced it and reduced it,” explains Merzenich. “They said, How simply can I represent the different sound components and still represent speech as intelligible?”
Even earlier, in the 1940s, Harvey Fletcher had introduced the concept of critical bandwidth, a way of describing and putting boundaries on the natural filtering performed by the cochlea. Although there are thousands of frequencies, if those that are close together sound at the same time, one tone can “mask” the other or interfere with its perception, like a blind spot in a rearview mirror. Roughly speaking, critical bandwidth defined the areas within which such masking occurred. How many separate critical bandwidths were required for intelligible speech? The answer was eleven, and Merzenich realized it could apply to cochlear implants, too. The implanted electrodes would be exciting individual sites, each of which represented specific sounds. “The question was how many sites would I need to excite to make that speech perfectly intelligible. Eleven is sort of the magical number.”
He was able to simplify this a little further, because Bell Labs had also built a voice coder (or vocoder) in which you allowed the base band, the lowest-frequency bandwidth, to do more than its fair share of the work and represent information up to about 1,000 Hz. With this one wider channel doing the heavy lifting on the bottom, Merzenich calculated that he needed a minimum of five narrower bands above it, for a total of six channels. Modern cochlear implants still use only up to twenty-two channels—really a child’s toy piano compared to natural hearing, which has the range of a Steinway grand. As Merzenich aptly puts it, listening through a cochlear implant is “like playing Chopin with your fist.” It was not going to match the elegance that comes from natural hearing; “it’s another class of thing,” he says. But on the other hand, he was beginning to understand that perhaps it didn’t need to be the same.
It didn’t need to be the same, that is, if the adult brain was capable of change. Merzenich’s cochlear implant work had begun to synthesize with his studies of brain plasticity. “It occurred to me about this time, when we began to see these patients that seemed to understand all kinds of stuff in their cochlear implant, that the recovery in speech understanding was a real challenge to how we thought about how the brain represented information to start with,” he says.
• • •
Unlike Merzenich, Donald Eddington was far from an established scientist when he got involved with cochlear implants at the University of Utah. As an undergraduate studying electrical engineering, he had a job taking care of the monkey colony in an artificial organ laboratory founded by pioneering researchers William Dobelle and Willem Kolff. Since he was hanging around the lab, Eddington began to help with computer programming and decided to do his graduate work there. Dobelle and Kolff were collaborating with the House Ear Institute in Los Angeles. While Bill House was continuing to work on single-channel implants, another Institute surgeon, Derald Brackmann, wanted to develop multichannel cochlear implants. So that’s what Don Eddington did for his PhD dissertation.
On the question of how to arrange the three basic elements—microphone, electronics, and electrodes—needed to make an implant work like an artificial cochlea, the groups in Australia and San Francisco had arrived at similar solutions. They implanted the electrodes, kept the microphone outside the head, and divided the necessary electronics in two. The external part of the package included the speech processor. Internally, there was a small electronic unit that served as a relay station, receiving the signal from outside by radio transmission through the skin and then generating an electrical signal to pass along the electrodes to the auditory nerve.
In Utah, they did it differently. Eddington’s device, ultimately known as the Ineraid, implanted only the electrodes and brought them out to a dime-size plug behind the ear, just as Bill House’s prototypes had done. The downside to this approach was that most people found the visible plug rather Frankenstein-like, and the subjects who worked with Eddington could hear only when they were plugged into the computer in the laboratory. From a scientific point of view, however, the strategy had much to recommend it, because it gave the researchers complete control and flexibility. “The wire lead came out and [attached] to a connector that poked through the skin,” says Eddington, who moved to the Massachusetts Institute of Technology in the 1980s. “You could measure any of the electrode characteristics, and you could send any signal you wanted to the electrode.” Rather than extract specific cues from the speech in the processor, as the Australians had, Eddington and his team split the speech signal according to frequency and divided it—all of it—across six channels arranged from low to high.
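Eddington’s approach of dividing the whole signal by frequency is, in modern terms, a filterbank. The sketch below is only a rough illustration of that idea, not the Ineraid’s actual circuitry: it uses an FFT mask in place of analog filters, and the band count, edge frequencies, and function name are illustrative assumptions, not the device’s real parameters.

```python
import numpy as np

def split_into_bands(signal, sample_rate, n_bands=6, f_lo=100.0, f_hi=8000.0):
    """Split a signal into n_bands frequency bands with log-spaced edges.

    A toy stand-in for a six-channel low-to-high frequency division;
    all numbers here are illustrative, not a real implant's parameters.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)  # keep only this band's bins
        bands.append(np.fft.irfft(spectrum * mask, n=len(signal)))
    return bands
```

Because the bands partition the spectrum between `f_lo` and `f_hi`, summing them reconstructs any in-range signal, which is one way to see that the division passes along “all of it” rather than extracting selected cues.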
• • •
Over the next dozen or so years, these various researchers—sometimes in collaboration but mostly in competition—worked to overcome all the fundamental issues of electronics, biocompatibility, tissue tolerance, and speech processing that had to be solved before a cochlear implant could reliably go on the market.
“To make a long story short, we solved all of these problems,” Merzenich says. “We resolved issues of safety, we saw that we could implant things, we resolved issues of how we could excite the inner ear locally, we created electrode arrays and so forth. Pretty much in parallel, we all created these models by which we could control stimulation in patterned forms that would lead to the representation of speech.”
As researchers began to see success toward the end of the 1970s and into the 1980s, Merzenich was struck by the fact that initially at least, the coding strategies for each group were very different. “There was no way in hell you could say that they were delivering information in the same form to the brain,” he says. “And yet, people basically were resolving it. How the hell can you account for that? I began to understand that the brain was in there, that there was a miracle in play here. Cochlear implants weren’t working so well because the engineering was so fabulous, although it’s good—it’s still the most impressive device implanted in humans in an engineering sense probably. But fundamentally, it worked so marvelously because God or Mother Nature did it, the brain did it, not because the engineers did it.”
Of course, the engineers like to take credit. “Success has many fathers,” Merzenich jokes. “Failure is an orphan.” Indeed, there are quite a few people who can claim—and do—the “invention” of the cochlear implant. The group in Vienna, led by the husband-and-wife team of Ingeborg and Erwin Hochmair, successfully implanted the first multichannel implant the year before Graeme Clark, although they then pursued a single-channel device for a time. Most agree the modern cochlear implant represents a joint effort.
In the 1980s, corporations got involved, providing necessary cash and the ability to manufacture workable devices at scale. After the Food and Drug Administration approved Bill House’s single-electrode device for use in adults in 1984, the Australian cochlear implant, by then manufactured by Cochlear Corporation, followed a year later, again with FDA approval only for adults. The San Francisco device was sold to Advanced Bionics and went on the market a few years later. The Utah device was sold to a venture capital firm but was never made available commercially. The plug poking through the skin, acknowledges Eddington, was “a huge barrier to overcome,” and the company never did the work necessary to make a fully implantable device. The third cochlear implant available today is sold by a European company called MED-EL, an outgrowth of the Hochmairs’ research group.
The inventors had proved the principle. After it okayed the first implant, the FDA noted that “for the first time a device can, to a degree, replace an organ of the human senses.” Many shared Mike Merzenich’s view that the achievement was nearly miraculous.
But there was a group who did not. It was one thing to provide a signal for adults who had once had hearing. A child born deaf was different—scientifically, ethically, and perhaps even culturally. In 1990, the FDA approved the Australian implant for use in children as young as two, and a new battle began. If the inventors of cochlear implants thought they had a fight on their hands getting this far within the scientific community, it was nothing compared to what awaited in the Deaf community when attention turned to implanting children. The very group that all the surgeons and scientists had intended to benefit—the deaf—now began to argue that the cochlear implant was not a miracle but a menace.
13
SURGERY
In the predawn darkness of a December morning, I watched Alex sleeping in his crib for a moment and gently ran my fingers along the side of his sweet head just above his ear. In a few hours, that spot would be forever changed by a piece of hardware. Glancing out the window, I saw that Mark had the car waiting, so I scooped Alex up, wrapped him in a blanket against the cold, and carried him outside.
The testing was done. Our decision had been made and it had not been a difficult one for us. As I once told a cochlear implant surgeon, “You had me at hello.” Everything about Alex suggested he had a good chance at success. The surgery had been scheduled quickly, just a week or so after our meeting with Dr. Parisier. There was nothing left to do but drive to the hospital down the hushed city streets, where the holiday lights and a blow-up snowman on the corner struck an incongruously cheerful note. Alex pointed to the snowman and smiled sleepily.
Although I knew the risks, surgery didn’t scare me unduly. In my family, doctors had always been the good guys. After a traumatic brain injury, my brother’s life was saved by a very talented neurosurgeon. Another neurosurgeon had released my mother from years of pain caused by a facial nerve disorder called trigeminal neuralgia. There’d been other incidents—serious and less serious—all of them adding up to a view of medicine as a force for good.
That didn’t make it easy to surrender my child to the team of green-gowned strangers waiting under the cold fluorescent light of the operating room. I was allowed to hold him until he was unconscious. As if he could make himself disappear, Alex buried his small face in my chest and clung to me like one of those clip-on koala toys I had as a child. I had to pry him off and cradle him so that they could put the anesthesia mask on him. When he was out, I was escorted back to where Mark waited in an anteroom. For a time, the two of us sat there, dressed in surgical gowns, booties, and caps, tensely holding hands.
“You’ll have to wait upstairs,” said a nurse finally, kindly but firmly. “It’ll be a few hours.”
Though I wasn’t there, I know generally how the surgery went. After a little bit of Alex’s brown hair was shaved away, Dr. Parisier made an incision like an upside-down question mark around his right ear, then peeled back the skin. He drilled a hole in the mastoid bone to make a seat for the receiver/stimulator (the internal electronics of the implant) and to be able to reach the inner ear. We had chosen the Australian company, Cochlear, that grew out of Graeme Clark’s work. Parisier threaded the electrodes of Cochlear’s Nucleus Freedom device into Alex’s cochlea and tested them during surgery to be sure they were working. So as to avoid an extra surgery, Parisier also replaced the tubes that Alex had received nearly a year earlier, which added some time to the operation. Then he sutured Alex’s skin back over the mastoid bone, leaving the implant in place.
The internal implant looks surprisingly simple. But, of course, the simpler it is, the less there is to go wrong. It has no exposed hard edges. There’s a flat round magnet that looks like a lithium battery in a camera. It is laminated between two round pieces of silicone and attached to the receiver/stimulator, which is the size of a dollar coin. A pair of wires banded with electrodes extends from the plastic circle like a thin, wispy, curly tail.
More than four hours after the surgery began, Alex was returned to us in the recovery room with a big white bandage blooming from the right side of his head. As he had been all through the previous year of testing and travail, he was a trooper. He didn’t cry much. He didn’t even get sick from the anesthesia, as if somehow he had decided to do this with as little fuss as possible. When he was fully awake, we took him upstairs to the bed he’d been assigned, where I could lie down with him. It was a few more hours before they let us go home, but finally I wrapped him in his blanket and carried him back into the car, which once again Mark had waiting at the curb.
It was done.
PART TWO
SOUND
14
FLIPPING THE SWITCH
"What did you do today?" I asked.
It was late on a March afternoon, one month before Alex’s third birthday. Matthew, Alex, and I were sitting at the table having a snack. Immediately, Matthew launched into a detailed recitation of a day in the life of a four-year-old preschooler.
“We played tag in the yard and I was it and I chased Miles and we had Goldfish for snack and I made a letter ‘B’ with beans and …” Matthew loved to talk. Words poured out of him like water. When he stopped for a breath, I turned to Alex, sitting to my left.
“Alex, what about you?”
I didn’t expect an answer, but I always included him. This time, he surprised me.
"Pay car," he said.
“Pay car. Pay car,” I murmured to myself, trying to figure out what he was saying. “You played cars?”
“Pay car,” he repeated and added, “fren.”
“With friends?” I asked. “Which friends?”
“Ma. A-den.”
“Max? Aidan? You played cars with Max and Aidan,” I exclaimed. “That’s great.”
Then Alex began to sing.
“Sun, Sun … down me.”
As he sang, he held his hands to form a circle and arced it over his head like the sun moving across the sky.
It was a very shaky but recognizable version of the “Mr. Sun” song, complete with hand movements, which his teachers often sang on rainy days.
I sang it back to him.
“Mr. Sun, Sun, Mr. Golden Sun, please shine down on me!”
His face brightened as he made the sun rise and set one more time.
We’re talking! I realized. Alex and I are having a conversation!
• • •
Two months earlier, in the middle of January, four weeks after Alex’s surgery—enough time for the swelling to go down and his wound to heal—we had visited the audiologist to have his implant turned on, a process called activation. It means adding external components and flipping the switch.