
  Fernandes, who had grown up deaf but hadn’t learned to sign until she was twenty-three, responded by saying that the protesters were locked in a narrow view of deafness and that in their eyes, she was “not deaf enough.” A flyer pasted around campus, however, said: “It’s not that she’s not deaf enough. She’s not enough of a leader.” Looking at the small tents protesters had pitched on the lawn, Fernandes told a New York Times reporter that her vision for the university was as “one big tent for all.” Recognizing that the deaf community was facing change from cochlear implants and mainstream education, she argued that Gallaudet had to embrace “all kinds of deaf people.” ASL would always be central to Gallaudet, but the university had to welcome people who hadn’t grown up signing as well. “We’re in a fight for the survival of Gallaudet University,” she said.

  The conflict carried on for months, quieting in the summer and then exploding again in the fall, and I continued to follow it closely. In October, more than one hundred students were arrested for taking over a campus building, and a few began hunger strikes. As in 1988, the protests finally shut down the university for several days. The clash, noted one reporter, “is illuminating differences over the future of deaf culture writ large, and focusing attention on a politically charged debate about what it means to be deaf in the 21st century.”

  That was certainly a question I was asking myself.

  At the end of October, the protesters won again—at least on the surface—when the board revoked Fernandes’s appointment. Students jubilantly celebrated on the university’s football field, home of the team credited with inventing the huddle so that their opponents couldn’t see their signs.

  I was left unsure as to what had really happened. The victory seemed much less clear than in 1988. I had no way to judge Fernandes other than through the news reports, but the demonstrations themselves seemed as problematic as the process they were protesting. Were students going to shut down the university every time they didn’t like a board decision? Pragmatically speaking, Fernandes had a point. The pace of technological innovation was unrelenting. It was forcing change on the deaf community. Would a child like mine ever want to go to Gallaudet or another school for the deaf? Would he have to reject his cochlear implant to do it? The devices were so unpopular on campus that students (and professors) were said to feel they couldn’t wear them there.

  At the culmination of the protests, I read a comment from Lawrence Fleischer, then-chairman of the Deaf Studies Department at California State University, Northridge, another important center of deaf higher education. “More parents are choosing cochlear implants for their children,” he said. “We call it the false hope. We call it the magical consciousness, meaning that their consciousness is way below average, but they’re pretending to have consciousness they don’t really have.” I wanted to like Deaf culture, but his comment left me cold. Granted it had been less than a year since Alex had begun using his implant, but “false hope” was so contrary to our experience that it was hard to take it seriously. Alex was making steady progress. I was willing to accept that success was variable, but I knew Alex was not alone. To dismiss anyone who did well seemed like willful ignorance.

  At least one truth behind the protests emerged later. Gallaudet was a university in trouble and not just because of the cochlear implant. King Jordan was leaving a mess. Though he had once been a hero, the faculty voted no confidence in him as well that fall. In November 2006, one month after Fernandes’s appointment was revoked, the Middle States Commission on Higher Education, the body responsible for accrediting degree-granting colleges and universities, expressed “serious concern” about the state of affairs at Gallaudet and postponed a decision on reaccreditation. For the next two years, the school was on probation. Concerns included not just the recent presidential search process but also the need to nurture a climate of respect among students, faculty, and administrators, the need for a strategic plan, evidence of academic rigor, and more rigorous reporting to the commission. The numbers spoke for themselves. Fewer than 30 percent of students were graduating within six years. The following year, 2007, only slightly more than half even made it to sophomore year. Most students needed remedial help in English and math. As a result of the protests, the probation, and a subsequent effort to tighten admission standards, undergraduate enrollment dropped from 1,600 in the mid-1990s to 1,080 in 2007. Fernandes had been right about one thing: Gallaudet, and perhaps by extension Deaf culture, really was fighting for its survival.

  Discouraged, I wondered anew about the reasons for such pervasive academic struggles. For Alex, for the time being, I resolved to stay focused on sound.

  16

  A CASCADE OF RESPONSES

  Dressed in a cotton smock, stripped of my earrings, watch, belt, and all other sources of metal, I am lying on a blue-sheeted bed with my head cradled in what looks alarmingly like a heavy-duty toilet seat. Above me, through an opening in the thick, white, fiberglass C that surrounds my head, I can see a computer screen booting up. It’s actually the projected image of a monitor mounted horizontally so it can be seen from the bed. A real computer is electrical and can’t be in here with me.

  “Are you okay in there?” a man calls to me through a speaker.

  “Just great,” I answer.

  A pause. “We are watching.” A small laugh.

  Deep inside the second floor of New York University’s Department of Psychology, I have become a research subject. I came to see David Poeppel, whose interests as a researcher align almost perfectly with mine as Alex’s mother, since Poeppel studies both sound and language. Lanky and boyish except for the sprinkling of gray in his hair, and usually dressed in jeans and black-framed glasses, Poeppel is two parts thoughtful MIT-educated linguist and neuroscientist and one part enthusiastic teenager who happens to have some really cool gadgets. He loves to wrestle with big ideas and their backstories. Happily for me, he loves to talk about ideas, too, in the kind of wide-ranging conversation that puts “blob” and “interdigitate” in the same sentence. Within twenty minutes of meeting me, Poeppel decides that the best way for me to understand what happens to sound when it reaches the brain is to see it for myself. “We can record your own hearing,” he says. And then he puts me inside the MEG machine.

  MEG, or magnetoencephalography, “one of the new boutique brain-imaging approaches” as Poeppel puts it, has him and his colleagues as thrilled as technophiles with their first iPads. At a recent conference, Poeppel exhorted the audience to “run, don’t walk, to the nearest MEG scanner.” The technology provides something approaching the spatial resolution of MRI—which tells us, says Poeppel, “what is here versus here versus here, but it’s not a theory or mechanism of how hearing and speech and language work, because they happen incredibly fast”—and it gives the temporal resolution (timing) of EEG, which “gives you millisecond by millisecond of what’s happening in your head, but you can’t figure out where it’s coming from.” With MEG, neuroscientists don’t have to choose between timing and location; they can have both.

  Wherever there is an electrical current, there is also a magnetic field curling around it. You can use the “right-hand rule” to determine its direction: Stick out your right thumb as if to hitchhike and curl your fingers toward your palm, and you have created a model of the electrical current—which moves in the direction your thumb is pointing—and the magnetic field, which follows your fingers to spin around the current. Magnetoencephalography measures the direction and intensity of the magnetic field surrounding the electrical one, thereby locating the current. In your brain, that is a feat roughly equivalent to recording a pin dropping while an orchestra is playing.
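  To put a number on that (a textbook idealization of my own, not anything from Poeppel’s lab), the field a distance r from a long, straight current I circles the wire just as the curled fingers suggest, with a strength of

\[
B = \frac{\mu_0 I}{2\pi r}, \qquad \mu_0 = 4\pi \times 10^{-7}\ \mathrm{T\,m/A}.
\]

If you treat a patch of active cortex as if it were a tiny wire carrying a few tens of billionths of an ampere, then a few centimeters away, at the scalp, that formula gives a field on the order of tens to hundreds of femtotesla, a unit that comes up again shortly.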

  Like an MRI machine, an MEG scanner is kept in a shielded room, but for opposite reasons. MRI artificially generates a large magnetic field that is potentially dangerous. Unleashed, it would erase credit cards and stop pacemakers. I once saw a technician demonstrate its power by holding a sheet of metal at the door of the MRI room. The square foot of metal flipped up instantly and would have flown across the room like the Millennium Falcon in the grip of the Death Star’s tractor beam had the technician not held on with two strong hands. So the shielded walls are there to keep the magnetic field in. (Alex’s implant means he cannot undergo MRI.)

  At NYU, the MEG machine is inside a room whose walls, floor, and ceiling are about ten inches thick, with three layers of copper and a layer of a special alloy called mu-metal—all there because “the signal that we’re measuring in your head is so small, so unbelievably small, that we need to be keeping all the electricity—the elevator, the subway, the traffic—in our world out,” says Poeppel. It’s not just subways and elevators that obscure the signal; even the electricity in your body interferes. “Your heart generates a huge signal like the music at the beginning of Law & Order: Dunh, dunh. That’s a gigantic electrical spike. Of course, since it’s an electrical spike, it’s a magnetic spike. We don’t want that signal, since we’re studying the head. Even the heart is really big compared to what we’re trying to get from the head.”

  He is warming to his subject.

  “So here’s what’s cool cocktail party conversation … well, depends on the party,” he admits. “The range that we’re recording this is femtotesla.” A tesla (T), named after engineer and physicist Nikola Tesla, is the unit of measurement that describes the strength of a magnetic field. The superconducting magnet built around the CMS detector at CERN on the French-Swiss border to search for the Higgs boson particle is 4 T. Most MRIs range from 1.5 to 3 T, and a refrigerator magnet is about five millitesla. A femtotesla is a quadrillionth of a tesla. “Ten to the minus fifteen,” explains Poeppel. “Formally speaking, wicked small.”
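  Side by side (the hundred-femtotesla figure below is a typical value from the MEG literature for an evoked brain response, not a measurement from my session), the comparison looks like this:

\[
\frac{5 \times 10^{-3}\ \mathrm{T}\ \text{(refrigerator magnet)}}{1 \times 10^{-13}\ \mathrm{T}\ \text{(about 100 fT, a brain response)}} = 5 \times 10^{10}.
\]

The field the detectors have to pick out is some ten to eleven orders of magnitude weaker than the magnet holding up the grocery list, and the CMS magnet, at 4 T, is roughly another three orders of magnitude beyond the refrigerator.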

  Inside the toilet bowl–shaped head cradle of this particular MEG machine, there are 160 detectors called SQUIDs, which stands for superconducting quantum interference devices. (There are also seated versions, with a helmet like an upside-down toilet bowl, that contain even more SQUIDs.) Each SQUID is the size of my pinkie and consists of a coil of a special wire suspended in liquid helium.

  Lying on the flatbed of the MEG, I don’t have to do anything at all. “If your brain is in there and it can hear, we’ll see it,” says Poeppel. He and his MEG technician, Jeff Walker, are sitting outside the shielded room at a bank of computers. Walker has inserted earbuds in my ear canals.

  “Do I have to lie still?”

  “If you squiggle your head around, the data suck.”

  “I’ll lie still.”

  “Ready?”

  “Ready.”

  I hear beeps. The same tone repeats. Beep, beep, beep. This goes on for a while. Then a second, higher-pitched tone begins. Beep, beep, beep.

  Poeppel describes this two-tone test as one of the “most boring tests there is.”

  It’s pretty boring. I am very purposefully doing nothing, not even concentrating on the beeps.

  But when we’re done and I join them at the computers, I can see from the results that my brain was plenty busy.

  “There is your brain on sound, actually,” says Poeppel. “We gave you five minutes’ worth of tones—high tones and low tones. I know you were fascinated.”

  For each tone, there are two graphs on the screen showing my auditory response. The first is a waveform map, with a tangle of sinewy black lines, one for each detector, undulating from left to right. The second graph shows the movement of the magnetic field. It’s called an isofield contour map and features a stick-figure outline of a head, with a triangular nose tacked on top and half-circle ears on the sides to show we’re looking down from above. Here, small blue diamonds dotting the cartoon head represent the detectors. With some quick adjustments, Walker cleans up the data and separates it so we can begin to look for patterns.

  “This is just the high tone,” says Walker, once he has created a neater graph with slightly fewer lines. The tone was at 1,000 Hz and I heard it one hundred times, but a few of those times the signal “had some sort of wonky weirdness, like maybe a channel went crazy or you coughed.” Clearing out the wonky weirdness left us with eighty-four lines. The undulations are more individual now, their separate characteristics clearer in places, but still a mass of black in the middle. The x-axis of the graph shows milliseconds, and the y-axis shows the wicked small femtotesla. Beginning at the start of the beep, which scientists call the onset, the lines roll along gently until about eighty milliseconds, when they suddenly begin to loop above and below the center line, as if bursting into a mass game of double Dutch, with the top of the jump rope arcing around 120 milliseconds. Then the lines slacken, stop their game, and amble along again with just enough wiggle to suggest that someone is still lightly flicking a wrist at the end of the rope.
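  For readers who want to see the trick that turns a hundred noisy repetitions of a beep into a single visible peak, here is a minimal sketch of that signal averaging in Python. The numbers, the names, and the single simulated channel are my own inventions for illustration; this is not the lab’s software or my actual data.

```python
import numpy as np

# Simulated experiment (illustrative values only):
# one channel, 100 repetitions of a tone, one sample per millisecond
# from tone onset to 300 ms after it.
rng = np.random.default_rng(0)
n_trials, n_samples = 100, 300
t = np.arange(n_samples)  # time in milliseconds after onset

# A made-up evoked response: a bump peaking near 100 ms (the M100/N1m),
# about 100 femtotesla at its highest point.
evoked_true = 100e-15 * np.exp(-((t - 100) ** 2) / (2 * 20**2))

# Each single trial buries that bump in background noise bigger than
# the response itself.
noise = 200e-15 * rng.standard_normal((n_trials, n_samples))
trials = evoked_true + noise

# Drop the noisiest repetitions (a crude stand-in for cleaning out the
# "wonky weirdness").
peak_to_peak = trials.max(axis=1) - trials.min(axis=1)
good = peak_to_peak < np.percentile(peak_to_peak, 90)
clean = trials[good]

# Averaging across repetitions cancels noise that is not time-locked to
# the beep and leaves the response that is.
average = clean.mean(axis=0)
peak_latency_ms = t[np.argmax(np.abs(average))]
print(f"kept {good.sum()} of {n_trials} trials; "
      f"peak near {peak_latency_ms} ms after onset")
```

  The real analysis juggles 160 channels, field maps, and far more careful artifact rejection, but the core idea is the same: whatever isn’t time-locked to the beep averages away, and the response that is time-locked survives.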

  I remember seeing a similar peak in the electrical waveforms that Anu Sharma showed me at her laboratory in Colorado, where she studies the first peak (the P1) and, in older children, the first valley (the N1) in EEG recordings.

  “That is the N1, and you have one,” Poeppel confirms. “So good news.”

  In MEG terms, he adds, the response is called the M100 or the N1m. Whatever the name, it represents the same thing: confirmation that the auditory cortex is processing sound. The sound doesn’t have to be natural. It can be conveyed by hearing aids or cochlear implants. The N1 takes some time to develop and is seen only in older children and adults. But the P1 shows up in very young children. That is precisely why it is useful to Sharma. As she had explained, this particular early response matures as a child ages, becoming faster and more efficient. By measuring how quickly the P1 appears, Sharma can tell whether a child is getting enough sound to develop spoken language long before that child says her first word. The technique can save precious time if hearing aids aren’t going to be effective.

  Researchers like Poeppel are trying to define what happens not in young children but in adults who are experienced listeners. They spend a lot of time on the N1 or its magnetic equivalent because, as Poeppel puts it, “it’s big ticket”—easy to see and packed with information. By glancing at an N1 response, for instance, someone who’s practiced at reading such data can tell whether the subject heard an “ah” or an “ee,” or whether she was paying attention. “You can unpack the nature of hearing and perception from this very elementary response,” says Poeppel.

  The two graphs from my test are linked on Walker’s computer screen. By moving the cursor along the lines of the waveform map, he can replay the movement of the magnetic field in my head on the contour map. As he does, concentric splotches of red and blue, depicted like a topographic map with the deepest colors at the center, change shape and intensity as they shift across the landscape of my brain. “This is the flat map of the thing,” says Poeppel. “You have to imagine it curved around your head.” The magnetic field in the brain is always present. Walker describes it as “a constant fluctuating ocean of activity.” And it’s large enough that it passes in and out of the brain as it moves. I imagine a ghostly halo revolving around and through my skull. On the map, blue represents the “sink,” the magnetic flux coming into the head, and red is the “source,” the magnetic activity coming out of the head. When a beep sounds, it’s like throwing a rock in that ocean of activity. Things change. The colors get deeper; the splotches concentrate and then swap places. Blue moves from front to back and red from back to front. Although there’s color all over, the activity seems most intense on the left, which is as it should be.

  “These are the temporal lobe channels for a sound. If we don’t see those, we’re not so happy,” says Poeppel. “You’re generating, happily, a very stereotypical boring pattern.”

  “What does the movement around the head represent?” I ask. “What are we seeing?”

  “Well, it changed direction. The current was shooting in one direction, presumably communicating information: ‘I was a tone of the following type.’ The next one is already shooting information a different direction.”

  “Okay, so the electrical current is zinging around my brain,” I say. “But what does it mean from one shift to the next? What does it mean when it changes direction?”

  “The contour map is like a mountain range. Is the magnetic field going up the mountain or down into the ocean? It tells you the underlying brain activity is changing its direction.”

  But on a deeper level, he says, we still don’t know what that means.

  “This is what we’re working towards,” says Poeppel. “If I knew the answer to what this means, I’d be an exceptionally famous person.”

  • • •

  When I compare my MEG recordings with Alexander Graham Bell intoning vowels into tuning forks and Harvey Fletcher at Bell Labs having M.A. (male, low-pitched) and F.D. (female, high-pitched) speak the word “farmers” again and again, it’s obvious our understanding of sound and hearing has come a tremendous distance. Why, I wonder, do so many of the researchers I’ve talked to emphasize how much we don’t know? It’s such a common refrain that while writing this chapter, I heard a radio interviewer ask esteemed neuroscientist Eric Kandel of Columbia University what mysteries remain about the brain. “Almost everything,” answered Kandel.

  “What’s going on?” I ask Poeppel. “Surely we do know more.”

  “Look, I work on problems that were all defined in the nineteenth century. Most of what we know is footnotes to Helmholtz,” he says, referring to the German physicist Hermann von Helmholtz, who inspired Bell.

  Then he pauses.

  “No, that’s not quite right. We have made a lot of progress, but it’s in part because the finer-grain you look, the more new things come up.” As an example, he reminds me that more than one hundred years ago, Camillo Golgi and Santiago Ramón y Cajal, who shared a Nobel Prize for their foundational work in neuroscience, engaged in a great debate over whether the neuron was really the unit of operation in brain activity.