I Can Hear You Whisper
The difference was plain to see from the start. “There are only a few times in a career in science when you get goose bumps,” Dorman once wrote. That’s what happened to him when his patient Max Kennedy, who had been participating in clinical trials for the Ineraid, traveled with Dorman to North Carolina to try out the new program for Wilson and Finley. Kennedy was being run through the usual set of word and sentence recognition tests. “Max’s responses [kept] coming up correct,” remembered Dorman. “Near the end of the test, everyone in the room was staring at the monitor, wondering if Max was going to get 100 percent correct on a difficult test of consonant identification. He came close, and at the end of the test, Max sat back, slapped the table in front of him, and said loudly, ‘Hot damn, I want to take this one home with me.’”
Ultimately, he did. Although each company has developed its own particulars, all cochlear implants today use processing programs based in large part on CIS.
• • •
Even with better speech processing, however, the sound implants provide isn’t much like natural hearing. “It’s a fascinating plasticity question,” says Elissa Newport, the Georgetown professor who studies language acquisition and miniature languages. “The information you get out of a cochlear implant is very different than the information you get from hearing, and so the brain either has to really adapt in some way that we don’t understand, or it doesn’t totally but it does somewhat. It’s really a very big adjustment that the brain needs to make in order to acquire language from the kind of input signals that a cochlear implant produces.”
For Newport, the question is of more than just professional interest, since her husband, Ted Supalla, is deaf and a prominent ASL scholar. She worries about deaf children who don’t succeed with implants—“the ones who don’t show up on Good Morning America”—and, as a scientist, is skeptical of talk of miracles. “I don’t believe in miracles; I want to know what the mechanism is.”
One lesson from Newport’s own research, however, is just how resilient children are when it comes to learning language. She first brought this up while we were discussing sign language and the poor ASL skills of most hearing parents. Her studies have found that parents didn’t have to be all that good at signing for the child to benefit—results later echoed in her work on miniature languages. “When you’re a kid,” says Newport, “your input doesn’t have to be perfect, perfectly regular, or perfectly fluent in order for you to become very fluent in your language. Inconsistencies, imperfections in the input, don’t seem to deter kids when they’re young.”
“Isn’t that also an argument for cochlear implants?” I ask.
Yes and no, she says. “That doesn’t mean you can throw anything at the brain and it’ll figure out how the patterns work.”
Don Eddington was concerned about this very thing in the early stages of his work. “One worried a lot about making kids worse off in a sense,” he says. “The impact of those very weird signals on brain organization was a big question.”
To find out what that impact really was, I went to see Mario Svirsky, an electrical engineer at New York University. His interest in the problem originated at the opposite end of the spectrum from Newport—he was fascinated with the technology. As a teenager in Uruguay, Svirsky was captivated by a science-fiction article that imagined artificial vision, and he decided then that he wanted to help make such a project a reality. Upon arriving in the United States for graduate school, he found no one was really working on artificial vision (though they are now). After a brief flirtation with neural prostheses for paraplegics, Svirsky ended up studying cochlear implants. Despite his engineering bent, he has thought philosophically about questions like Newport’s. A slight man with a trim beard and an earring, Svirsky has a gentle, considered air about him, which served him well as he waded into the thorny controversies surrounding implants. He has collaborated with the outspoken anti-implant psychologist Harlan Lane, and Svirsky’s copy of Lane’s book When the Mind Hears bears the inscription “To Mario, Whose Mind Hears.” Says Svirsky, “We prodded each other’s thinking.”
Among other things, Svirsky studies the brain’s adaptation process to the degraded signal provided by implants. This work does not answer Newport’s question about children directly—by necessity, the work on adaptation is done with adults who were deafened after they had language, because they have something with which to compare the implant signal—but it provides useful information. “The signal is degraded in specific ways,” Svirsky explains. “One is spectral degradation, meaning you have fewer channels of independent information.” This is essentially the problem presented by having fewer keys on the piano with which to make the same music. “The other [problem] is frequency shift, meaning that the whole signal can be shifted to higher or lower frequencies or distorted in different ways.” In this case, the middle C on your piano might sound like A or like high C. The cochlear implant was designed to mimic and thereby take advantage of the organization of the human cochlea, with different pitches lined up like piano keys. Electrodes that are closer to the apex of the cochlea are associated with lower frequencies, those near the base with higher frequencies. Or that’s the idea. “Other than the fact that both maps are tonotopic, there are no guarantees that they are a perfect match or even a good match,” says Svirsky.
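The two degradations Svirsky describes can be sketched in code. The following is a minimal illustration, not the actual CIS algorithm or any clinical vocoder: it collapses a sound's spectrum into a handful of bands (spectral degradation, like playing on fewer piano keys) and then slides each band's center frequency by a factor (frequency shift, middle C coming out as A or high C). The function name, band edges, and channel count are all illustrative choices, not values from Svirsky's work.

```python
import numpy as np

def spectrally_degrade(signal, sr, n_channels=8, shift=1.0):
    """Toy simulation of the two implant degradations described above:
    keep only n_channels bands of spectral detail, then multiply each
    band's center frequency by `shift`. Parameters are illustrative."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    out = np.zeros_like(spectrum)
    # Split 100 Hz - 8 kHz into log-spaced bands, as the cochlea roughly does.
    edges = np.geomspace(100, 8000, n_channels + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        # Collapse the band to one energy value: a single "electrode" of info.
        energy = np.sqrt(np.mean(np.abs(spectrum[band]) ** 2)) if band.any() else 0.0
        center = np.sqrt(lo * hi) * shift        # frequency-shifted band center
        idx = np.argmin(np.abs(freqs - center))  # nearest FFT bin to that center
        out[idx] = energy
    return np.fft.irfft(out, len(signal))

sr = 16000
t = np.arange(sr) / sr
# A stand-in for speech: two harmonically related tones.
speech_like = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)
degraded = spectrally_degrade(speech_like, sr, n_channels=8, shift=1.3)
```

With `n_channels` lowered, detail disappears the way it does in the pixelated Lincoln portrait; with `shift` above 1.0, everything migrates upward in pitch, the Donald Duck effect.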
Fortunately, most people’s brains do seem able to adapt with practice, the way I did to the distortions Poeppel played for me. There are cases, though, where the listener simply cannot make sense of the new signal. For them, Svirsky is developing an approach to fitting implants in which the recipient can fiddle with dials in real time until the signal is as intelligible as possible for that particular listener. Turning the dial changes the frequencies assigned to the electrodes. The results are not always logical. One woman understood speech better when her electrodes began at higher frequencies, essentially leaving out the lower levels. “If you talk to a phonetician, they will tell you that by getting rid of the frequencies below seven hundred hertz, you’re eliminating a lot of very useful information for speech perception,” says Svirsky. “However, she did better with this because [it] required a lower level of adaptation.”
Like Poeppel, Svirsky has some favorite examples of degraded signals that are particularly related to cochlear implants. The best one includes a visual representation of the problem. In a set of four portraits of Abraham Lincoln, the first is untouched and represents “normal.” The second is a heavily pixelated image that is degraded spectrally, with fewer channels of information provided. “You can take an image, get rid of most of the information—this has maybe a thousandth of the original—and yet it’s recognizable to any American.” Then Svirsky plays me the paired audio recording, which sounds like the computer-altered voice of a kidnapper demanding a ransom. Even so, I can hear that the kidnapper is reciting the Gettysburg Address: “Four score and seven years ago our fathers …” The next picture of Lincoln has been frequency-shifted and it has a slightly swirly, hallucinatory aspect but is still clearly Honest Abe. Now it sounds like Donald Duck is doing the reciting: “… Brought forth upon this nation …” In fact, Donald Duck comes up a lot when new implant users try to describe what they’re hearing. The fourth and final image has been manipulated both ways; it is degraded spectrally and its frequencies have been shifted. The picture no longer looks much like Lincoln at all but like a series of black and gray smudges. The accompanying recording is unintelligible, at least the first time through. “You get the sense that this may be something that you might be able to learn, given enough time,” says Svirsky. “But the learning process would not be trivial.”
“Are you suggesting that some cochlear implant users are hearing the world like that?” I ask.
“Like that, perhaps worse,” he says. Then, seeing my dismay, he adds, “It gets easier.”
I have to remind myself that even though the kidnapper and Donald Duck delivered very poor renditions of the Gettysburg Address, I heard and understood it.
• • •
By the time I started looking for them, the small group of children who had been in the first clinical trials of cochlear implants were adults in their twenties. I found some obvious success stories. Caitlin Parton graduated from the University of Chicago and went on to law school with the hope of working on disability rights issues. Matt Fiedor, who received a single-channel implant from Bill House, did remarkably well with that more limited device. He even majored in Japanese! Mark Leekoff, who had been jeered at the Jeopardy! taping, told me he struggled socially when he was young, but at Tufts University he found both a group of friends and academic success. I met him at West Virginia School of Medicine, where he is the first deaf student ever enrolled. The school has been creative about solving some of the issues that have cropped up. They let Leekoff use an iPad app with his stethoscope, since listening to a heartbeat is still beyond the capabilities of his implant. For his surgical rotation, they experiment with clear surgical masks, since Leekoff understands best by combining his implant with reading lips.
But there are stories on the other side as well. Implants didn’t work for everyone. I met Peter Hauser at Rochester Institute of Technology, where he heads the Deaf Studies Laboratory, to talk about his work as a neuropsychologist. He was willing to tell me his own story, too. Now in his forties, with an animated face and an easy laugh, Hauser lost his hearing at five from meningitis. As a teenager, he participated in not one but two clinical trials. His prospects were mixed, since he’d had hearing for several years but had also been deaf for quite a while. In 1983, he got a single-channel implant. “Every syllable sounded like a beep,” he tells me through an interpreter. “So baseball was two beeps, but the beeps could be loud, quiet, high, low. In the soundproof booth at the audiologist’s office, when only one sound was given at a time, I could identify the beep fairly well, the rhythm of the beeps. But when they opened the door, it was all over. I couldn’t discriminate the background sounds from the beeps or anything.” In the end, that implant was too distracting and Hauser gave up after a year. A few years later, he had the first device replaced with a multichannel implant. “With the twenty-two channel, it was like twenty-two different beeps,” he says. He used it for about nine months and, again, gave up. Instead, Hauser, like many others, found a home and a career in Deaf culture when he went to Gallaudet for graduate school.
Individual experiences can’t be relied on to tell the whole story, of course, or even much of it. If you know one child with a cochlear implant, I’ve heard it said, you know one child with a cochlear implant. Even though children have been receiving implants for more than twenty years now, there is still enormous variability in outcomes. On the other hand, as children get implants at younger and younger ages and get better support from audiologists and speech pathologists, there are far fewer really poor performers than there once were. “The difference is for the little ones,” says Don Goldberg, an expert in auditory rehabilitation at the Cleveland Clinic and president of the AG Bell Association. “We’ve seen an impressive change.”
The hope was always that implants would improve access to education. The consensus among researchers is clear: Most deaf children benefit from a cochlear implant. “Kids with implants are doing better on average than kids without implants,” wrote Marc Marschark, an expert on deaf education at the National Technical Institute for the Deaf, one of nine colleges at Rochester Institute of Technology. However, he adds, implants do not transform deaf children into hearing children. “[They] still generally perform behind their hearing peers.”
In an important study published in 2000, Mario Svirsky and his colleagues at Indiana University, where he worked in the well-respected cochlear implant program before moving to NYU, showed exactly that. The researchers examined not just whether a child’s language development improved after receiving a cochlear implant but also whether it improved more than it would have without an implant. “Language development” is an all-encompassing measure that goes beyond speech production (or intelligibility) and speech perception to include more sophisticated skills such as reading comprehension and grammar. In a normally developing child, chronological age and “language age” rise in lockstep, so that at one year of age a child has a language age of one year, at three years old he has a language age of three, and so on. A graph depicting the maturation of language will show a straight diagonal line from the bottom left corner stretching to the top right. Historically, profoundly deaf children developed language at half the rate of hearing children. “Without an implant, these children would be expected [in English] to speak like a two-year-old when they are four and so on,” explains Svirsky. So their language development sheared off from their hearing peers from the start, and the gap only widened over time. “What I found was once they received an implant,” he says, “their development started proceeding at a normal rate on average.” In other words, the gap stopped widening and the line of language development for a child with a cochlear implant now paralleled the diagonal for a typically hearing child.
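Svirsky's finding can be restated as simple arithmetic. In this toy model (the rates are illustrative round numbers drawn from the passage, not figures from the paper itself), language age grows at roughly half the chronological rate before implantation and at the normal rate afterward, so the gap freezes at whatever size it reached by the date of surgery.

```python
def language_age(implant_age, chron_age, pre_rate=0.5, post_rate=1.0):
    """Toy model of the 2000 result described above (rates illustrative):
    language develops at about half speed before implantation and at a
    normal rate afterward, so the gap stops widening but does not close."""
    pre = min(chron_age, implant_age) * pre_rate          # years before implant
    post = max(0.0, chron_age - implant_age) * post_rate  # years after implant
    return pre + post

# A child implanted at two has a language age of one at implantation...
assert language_age(2, 2) == 1.0
# ...and at six, a language age of five: the one-year gap holds constant.
assert language_age(2, 6) == 5.0
```

Without the implant the same model gives a language age of three at six, a gap that has tripled; the implant does not erase the deficit, it stops it from compounding.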
In another study, Svirsky showed that the vast majority of children with cochlear implants eventually reach 80 percent or better on open-set tests of speech production. He tested this by taping kids saying a set of standard sentences and playing those recordings to a panel of “naive listeners” who had no experience listening to the speech of the deaf. In such a test, a typically hearing child will score 80 percent at the age of four, 90 percent at five, and then go higher. In Svirsky’s study, children implanted in the first two years of life all achieved a score of 80 percent or better by the age of eight. Those who didn’t get their implants until the age of three did not do as well.
It was a logical conclusion that age mattered and that the earlier a child received an implant, the less ground she would have to make up. A 2010 study headed by Dr. John Niparko, who directed the cochlear implant program at Johns Hopkins University at the time (he is now at the University of Southern California), showed why earlier was better. It measured spoken language development in 188 children at six centers around the country. As with Svirsky’s study, the children did better on spoken language development than they otherwise would have but did not catch up to their hearing peers. Those who got implants under eighteen months had “significantly higher rates of comprehension and expression” than those who were implanted between eighteen and thirty-six months, though not everyone succeeded to the same degree, and that second group did better than those implanted after thirty-six months. But Niparko found that several other things also helped kids do well, particularly strong parental involvement, higher socioeconomic status, and greater residual hearing.
Back in 2003, some of those factors showed up in an influential study by Ann Geers and colleagues at the Central Institute for the Deaf in St. Louis. Geers studied 181 eight- and nine-year-old children who received implants between ages two and five. After testing comprehension, verbal reasoning, narrative ability, and spontaneous language production, either in speech or sign depending on the child’s preferred language mode, Geers found that some of the same factors that predict success in hearing children were helping deaf children, too—greater nonverbal intelligence, smaller family size, higher socioeconomic status, and female gender all boosted performance. She also found that those who were educated in oral classrooms had better language development than those who weren’t.
In 2008, Geers released a study of 112 of the same students, who were then in high school. It is one of very few studies to track students over such a long time, because researchers are still waiting for most implanted children to grow up. “Performance of these students far exceeds expectations for children in previous generations,” wrote Geers and colleagues. But academic difficulties remained. Speech recognition and intelligibility got worse for most students in noisy conditions like those of a classroom. Gaps in IQ scores—both verbal and nonverbal—persisted between deaf and hearing children. Most implanted children made consistent progress in reading that paralleled hearing peers, but 20 percent made minimal progress in the eight years between studies, staying at the “fourth-grade barrier.” And writing, including spelling and expository skills, was difficult for most of the implanted children; only 38 percent scored even close to the hearing students. Early implantation, parental involvement, higher socioeconomic status, and oral education all helped those who’d done best.
• • •
Who your parents are, when you get your implant, how you’re educated—these are clearly determining factors of success. Yet there is still surprising variability in performance even for kids who start out in similar situations. According to David Pisoni, a cognitive neuroscientist at Indiana University who is particularly interested in why that variability persists, roughly 15 to 20 percent of cochlear implant recipients will not do well with the device. Nor have cochlear implants erased the problems of deaf education.
Despite a concerted effort to come up with educational solutions over the past few decades and reams of academic papers on a wide variety of classroom-related subjects, there has been distressingly little measurable progress, as if deaf academic achievement is a brick wall with no path over, around, or through. “The only thing we do know is that the median reading level of deaf eighteen-year-olds in the US has not changed in forty years,” Marc Marschark told me in 2012. As I had found back when I was searching the Internet, that median reading level, the middle ground, is stuck in the fourth grade.