For most of the 1970s, Clark’s group labored over a workable prototype of a multichannel device he called a “bionic ear.” The solution to one particularly stubborn surgical question—how to insert the device safely and fully into the snail-shaped cochlea—came to Clark during a vacation. While his children played on the beach, he collected a series of spiral-shaped shells and a variety of grasses and twigs, and tried to stuff the grasses through the shells. He discovered that materials that were stiff at one end and flexible at the other worked perfectly. In the lab, his team created an electrode bundle that mimicked the grasses he’d found—stiff at the base but increasingly flexible toward the tip. Other challenges included shrinking the necessary circuitry, whose original diagram was wider and taller than a grown man’s torso, onto a tiny silicon chip.
The resulting device looked as homemade as it was. Bigger, lumpier, and less refined than the corporate versions that followed, the implanted piece consisted of a gold box a few centimeters square containing the electronics to receive signals and stimulate the internal electrodes. The electrodes were attached to the stimulator by a connector so that in the event of a failure, only the gold box would have to be replaced, not the electrodes in the inner ear. The whole package was encased in silicone to protect against corrosive bodily fluids.
But they still had to figure out what signals to send through this new device. “The way sound stimulates the inner ear is different from the way in which electrical currents stimulate the nerves,” explains Clark. “When you put electrical current into the nerves, it tends to stimulate them all at one time.” The answers wouldn’t really come until after they had implanted their first patient.
That was another problem. They needed patients, and they didn’t have any. Because of the fund-raising campaign, which included telethons, the public profile of their work was high. But doctors who saw patients refused to refer anyone. They thought Clark had overstated what was achievable. “I think I said there would be about five thousand people in Australia who could benefit from a cochlear implant,” says Clark with a laugh. In fact, he understated the eventual demand many times over—an estimated 320,000 people use cochlear implants worldwide today. But at the time, in Australia as in the United States, the response in medical and scientific circles was skeptical.
Clark was getting desperate until he met a forty-eight-year-old man named Rod Saunders who had been visiting the deafness unit in Clark’s own hospital. Saunders had lost his hearing in a car accident twelve months earlier, when lumber he was carrying in his car—precariously jammed between the seats—smashed into his head as he collided with a light pole. His injuries left him completely deaf in both ears. He had seen Clark’s project in the news. When he came in for an appointment one day, he asked at the front desk whether he could see Professor Clark. “No,” he was told. “We don’t recommend it.” Standing within earshot, however, was an audiologist named Angela Marshall, who was spending some of her time in Clark’s laboratory. She pulled Saunders and his wife aside and said she could help. As Clark puts it, “Angela smuggled Rod up.”
The second patient, George Watson, managed to get to Clark directly. He was a World War II veteran and had been profoundly deaf for thirteen years after losing his hearing progressively following a bomb blast. Both Saunders and Watson felt they had little to lose, and both found their deafness debilitating. Saunders, who couldn’t speechread well at all, called it a “nightmare.” He told Clark what he missed most was hearing the voices of his family. “I miss music,” he said. “I even miss the sound of the dog barking.” Watson described his feeling of isolation. “You feel completely alone,” he said. “I mean you go to a football match, people cheering and so forth, or you go to a race meeting, but it is all so very even all the time…. Actually, it is very boring. Everything is the same, nothing seems to alter.” Even these conversations with Clark were awkward, requiring a combination of speechreading and written questions.
“We were really fortunate with the first two guys,” says Richard Dowell, who today heads the University of Melbourne’s Department of Audiology and Speech Pathology but in the late 1970s was a twenty-two-year-old audiologist working for Clark on perception testing with Saunders and Watson. “They were both sort of laconic characters, not too fussed about anything. They had to go through a lot of boring stuff and things didn’t [always] go right. They were basically guys who’d put up with anything and continue to keep coming in and support the work. They didn’t necessarily want anything out of it for themselves.”
Clark decided to implant Rod Saunders first and scheduled surgery for August 1, 1978. He and the surgeon who would assist him, Brian Pyman, had repeatedly rehearsed the steps of the operation on human temporal bones from the morgue. When he could practice no more, Clark went away with Margaret for a prayer weekend. The operation lasted more than eight hours. As Saunders recovered that night in the ward, Clark called the night nurse every few hours to check on his condition, but all was well. Saunders went home after a week.
Three weeks later—enough time for the surgical wound to heal—Saunders returned to the hospital to have the prosthesis turned on. They put the external coil in position and turned on the electrical current, gradually increasing its strength.
It didn’t work.
“Rod, do you hear any sound?” Clark asked.
Saunders was listening intently, but he answered dejectedly. “I’m sorry. I can only hear the hissing noises in my head.”
Depressed and worried, Clark and his colleagues had to send Saunders home. When he returned a few days later, results were no better. There were several sleepless nights for Clark. Finally, before Saunders’s third appointment, engineer Jim Patrick discovered a fault in the test equipment and repaired it—to everyone’s relief. “We approached the next session with great anticipation,” remembered Clark.
This time Saunders heard sounds. Each of the electrodes was tested and they were all working. The sounds were limited, but a cause for celebration nonetheless. Clark and Pyman took the surgical nurses and their spouses out for a Chinese dinner to mark the occasion.
At the next session, they wanted to know whether Saunders could recognize voicing and the rhythm of speech. They used the computer to play songs through the implant, beginning with the Australian national anthem, “God Save the Queen.” Immediately, Saunders stood to attention, pulling out the wires connecting him to the computer as he did. Everyone laughed with relief. Then someone suggested they try “Waltzing Matilda.” Saunders recognized that as well. It was solid progress, but Saunders still couldn’t understand speech and he didn’t seem to be recognizing different pitches. The design of this device was predicated on the idea that multiple electrodes laid out along the cochlea would deliver variations in frequency that would enable users to hear the sounds of words. Yet Saunders described the signals he was hearing as “sharp” at high frequencies and “dull” at the low end, not as higher or lower than one another. “The sensations were changing in timbre, not pitch,” explains Clark. He couldn’t help but wonder: “Have we gone to all the trouble to produce a multichannel system … and it didn’t work?”
An engineer named Jo Tong was one of Clark’s closest collaborators in those early days, and Tong had taken the lead in designing the speech processing program that determined the instructions sent into the new device. He and Clark began with the belief that the cochlear implant had to try to reproduce nearly everything that goes on in a normal cochlea. Like a glass prism breaking up light into all the colors of the rainbow, the cochlea takes a complicated sound such as speech and breaks it into its component frequencies. The internal electrodes of the implant were designed to run along the cochlea, mimicking the natural sequence of frequencies. Clark’s team was betting that the place that was stimulated was critical to delivering pitch to the user, and pitch was critical to understanding speech. Initially, Tong designed a processing program that stimulated every electrode, on the theory that all parts of your basilar membrane are perceiving sound at once in normal hearing and that the appropriate areas would react more strongly to the appropriate frequencies. But since electrical stimulation is far less subtle than the workings of a normal cochlea, the result had proved incomprehensible.
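In modern terms, that first strategy amounted, roughly, to a filterbank: split the incoming sound into frequency bands, one per electrode, and drive each electrode according to the energy in its band. Below is a minimal Python sketch of the idea; the electrode count, band edges, and function names are illustrative assumptions, not Clark and Tong’s actual design.

```python
import numpy as np

def filterbank_levels(signal, sample_rate, band_edges):
    """Measure the sound energy falling in each frequency band.

    One band per hypothetical electrode: the louder a band, the harder
    its electrode would be driven. All electrodes fire at once, which
    is the approach that proved incomprehensible in practice.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    levels = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        levels.append(spectrum[(freqs >= lo) & (freqs < hi)].sum())
    return levels  # one stimulation level per electrode

# Ten hypothetical electrodes spanning 100 Hz to 4 kHz, log-spaced
# the way frequencies are laid out along the cochlea.
edges = np.logspace(np.log10(100), np.log10(4000), 11)

rate = 16_000                      # samples per second
t = np.arange(rate) / rate         # one second of time points
vowel_like = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
levels = filterbank_levels(vowel_like, rate, edges)
```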
So Tong and Clark hit on plan B, which was to try to extract the elements that convey the most information in speech and send only those through the implant. The new version was almost as simple as the first one had been complex. It made use of formants, the bands of dominant energy first described at Bell Labs that vary from one sound to another. If we produced sounds only with the larynx, we wouldn’t get anything but low frequencies. However, those low frequencies contain harmonics up to ten or twenty times higher. If the larynx is vibrating a hundred times per second—at 100 Hz—the harmonics are at 200 and 300 Hz, up to 2,000 Hz or higher. “As you move your lips and tongue and open and close the flap that’s behind your nose to create speech sounds, all those things modify the amplitude of all the different harmonics,” says Hugh McDermott, an acoustic engineer who joined Clark’s team in 1990. For each sound, the first region where the frequencies are emphasized—where the energy is strongest—is called the first formant, the next one going up is the second formant, and so on. The new speech processing program was known as F0F2 (or “F naught, F two” when Australians speak of it). That meant that the program extracted only two pieces of information from each speech sound: its fundamental frequency (F0) and the second formant (F2). “It’s the first two formants that contain nearly all of the information that you need to understand speech,” says McDermott. The second formant is also difficult to see on the lips, so it was a particularly useful extra piece of information. “If you have to nail down just one parameter,” says McDermott, “that’s the one to choose.”
F0F2, in other words, was lean and mean. Recognizing speech on the basis of a few formants is like identifying an entire mountain range from the outline of only its two most distinctive peaks. It worked by having one electrode present a rate of electric pulses that matched the vibration—the fundamental frequency—of the larynx. Then the second formant, which might be as high as 1.5 kHz (kilohertz), was represented on a second electrode. That second formant moved around from electrode to electrode according to the speech sounds created. F0F2 sounded even more mechanical and synthetic than implants do today, and it took a lot of getting used to, but it worked because only one electrode was on at a time, eliminating the problem of overstimulation.
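To make that recipe concrete, here is a minimal Python sketch of a single processing step, on the reading that the F2-selected electrode is pulsed at the F0 rate so that only one electrode is active at a time. The electrode layout, the numbers, and the function name are assumptions for illustration, not the team’s implementation.

```python
import numpy as np

def f0f2_frame(f0_hz, f2_hz, electrode_centers_hz):
    """One frame of an F0F2-style strategy (illustrative sketch).

    Given a fundamental frequency (F0) and second formant (F2) already
    extracted from a slice of speech, choose the single electrode whose
    place along the array best matches F2, and pulse it at the F0 rate.
    Only one electrode is on at a time, avoiding overstimulation.
    """
    electrode = int(np.argmin(np.abs(electrode_centers_hz - f2_hz)))
    pulses_per_second = f0_hz  # pulse rate carries voicing and rhythm
    return electrode, pulses_per_second

# Ten hypothetical electrode places from 300 Hz to 4 kHz.
centers = np.logspace(np.log10(300), np.log10(4000), 10)

# A voiced sound with a 120 Hz larynx vibration and F2 at 1.5 kHz:
electrode, pulse_rate = f0f2_frame(120.0, 1500.0, centers)
```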
With this new processing system in place, Saunders began to understand some limited speech. He could be 60 to 70 percent correct on tests that asked him to identify the vowels embedded in words: “heed,” “hard,” “hood,” “had,” and so on. As the end of 1978 neared and money was running short again, Clark insisted they try Saunders on a harder test: what’s known as open-set speech recognition. Until then, they had done only closed sets—reciting words that were part of familiar categories, such as types of fruit. Speech in real life, of course, isn’t so predictable; open-set testing throws wide the possibilities. Angela Marshall was hesitant, fearing it wouldn’t work. “I said, if we fail, we fail,” says Clark. As the group stood watching with bated breath, Marshall presented one unrelated word at a time.
“Ship,” she said.
“Chat,” replied Saunders. Completely wrong.
“Goat.”
“Boat,” said Saunders. Closer.
“Rich.”
“Rich,” said Saunders. He had gotten one right!
By the end of the tests, Saunders had gotten 10 to 20 percent of the open-set words correct. That’s not the least bit impressive by today’s standards, but it was hugely significant at the time for a man who was profoundly deaf. Clark was overcome. “I knew that had really shown that this was effective,” he says. He pointed down the hallway of the hospital where we sat. “I was so moved, I simply went into the lab there and burst into tears of joy.” The only other time he cried in his adult life, he told me, was when he and Margaret worried over the health of one of their five children.
George Watson became the second Australian to be implanted, in July 1979. His early audiological results were promising enough that Lois Martin, who had taken over for Angela Marshall, “decided to go for broke,” says Clark, and read Watson some lines from the daily newspaper, then asked him to repeat what she’d said. “He’d nearly got it all right,” says Clark, who was sitting in his office up the hall working at the time. “There was great excitement and they came rushing up the corridor to tell me,” he remembers. They had wondered what Watson’s brain would remember of sound. “George was showing us that he could remember the actual sound that he’d heard thirteen years before.” Clark had only two patients at this point, “but the two of them together told us a lot about what was possible.”
Not that there weren’t some disasters. Until that time, both Saunders and Watson had been able to use their implants only in the laboratory, hooked up to the computer. Clark instructed an engineer named Peter Seligman to develop a portable device that they could take home. “It was as big as a handbag,” remembers Dowell. “We thought it was fantastic.” They called a press conference to announce that two patients were successfully using a portable cochlear implant.
“That day, George’s implant failed,” says Dowell. “I was preparing him to go out there to talk to the press, and he told me that it had stopped working.” Watson reckoned he could wing it using his speechreading skills. And he did. Watson told the assembled reporters how wonderful his implant was. “None of those guys who were there that day would have had a clue that he wasn’t actually hearing anything,” says Dowell. Rod Saunders, on the other hand, although his implant was working, appeared to be struggling more because he had such a hard time reading lips. “It was reported as a great breakthrough,” Dowell says, laughing. “It’s so long ago I can tell the story.” Clark, too, is willing to tell the story today. “If they’d asked, I would have to have said the implants failed. I just sort of held my breath and they didn’t ask.”
• • •
Meanwhile, in San Francisco, another team had been pursuing a similar path. Auditory neuroscientist Michael Merzenich had been recruited to the University of California, San Francisco, in part because a doctor there, Robin Michelson, was pursuing a cochlear implant. Two of Michelson’s patients participated in the review performed at the University of Pittsburgh by Robert Bilger. The head of UCSF’s otolaryngology department, Francis Sooy, thought the idea of cochlear implants had merit, but that it needed a more scientific approach.
“When I showed up at UCSF, I was intrigued by the idea, but when I talked to Michelson, I realized that he understood almost nothing about the inner ear and nothing about auditory coding issues,” Merzenich told me. “He had the idea if we just inject the sound, it will sort it out. He talked about what he was seeing in his patients in a very grandiose way.” As an expert in neurophysiology, Merzenich found such talk hard to take, especially since he was engaged in a series of studies that revealed new details about the workings of the central auditory system. He discovered that information wasn’t just passed along from level to level—from the cochlear nuclei in the brain stem to the superior olive and so on. Along the way, the system both pulled out and put back information, always keeping it sorted by frequency but otherwise dispersing information broadly. “Our system extracted information in a dozen ways; combined it all, then extracted it again; then combined and extracted again; then again—just to get [to the primary auditory cortex],” Merzenich later wrote. “Even this country boy could understand the potential combinative selectivity and power of such an information processing strategy!” On the other hand, he could also see how hard it would be to replicate.
Robin Michelson, it must be said, was a smart man. He originally trained as a physicist, two of his children became physicists, and his uncle Albert Michelson won the Nobel Prize in Physics. Several people described him to me as a passionate tinkerer, with grand ideas but not always the ability to see them through. Merzenich couldn’t really see how such a device as Michelson described would ever be usable clinically as anything more than a crude aid to lipreading. Unresolved safety issues worried him as well. “The inner ear is a very fragile organ,” he says, “and I thought that surely introducing these electrodes directly into the inner ear must be pretty devastating to it and must carry significant risk.”
Besides, Merzenich was soon busy conducting what would turn out to be revolutionary studies on brain plasticity. That was one of the reasons I had been so eager to talk to him. For someone like me, who wanted to know about both cochlear implants and how the brain changed with experience, there was no other researcher in the world who had played such a major role in both arenas. By the time I wrote to him, Merzenich was serving as chief scientific officer (and cofounder) of Posit Science, whose flagship product is BrainHQ, a brain-training platform. We met in their offices in downtown San Francisco. “If you think about it,” he told me, “[the cochlear implant] is the grandest brain plasticity experiment that you can imagine.”
As a postdoctoral fellow at the University of Wisconsin, Merzenich had participated in an intriguing study showing changes in the cortex of macaque monkeys after nerves in their hands were surgically repaired. He hadn’t quite tumbled to the significance of these findings, but shortly after he arrived at UCSF in 1970, he decided to pursue this line of research as well as his auditory work. Together with Jon Kaas, a friend and colleague at Vanderbilt University, Merzenich set up a study with adult owl monkeys. By laboriously touching different parts of the hand and recording where each touch registered in the brain, they created a picture that correlated what happened on the hand to what happened in the brain. With the “before” maps complete, the researchers severed the median nerve in the monkeys’ right hands, leaving the animals unable to feel anything near the thumb and nearby fingers. (Here, too, the requirements of science are discomfiting, to say the least.) According to standard thinking on the brain at the time, that should have left a dead spot in the area that had previously been receiving messages from the nerve. A few months later, after the monkeys had lived with their new condition for a time, Merzenich and Kaas studied the animals’ brains. Completely contrary to the dogma of the day, they found that those areas of the brain were not dead at all. Instead, they were alive with signals from other parts of the hand.