At least we’ve moved past the restrictions of the oral-only era. As Andrew Solomon put it in his New York Times article, “The insistence on teaching English only … served not to raise deaf literacy, but to lower it. Forbidding sign turned children not toward spoken English, but away from language.” In that article, Solomon quoted a deaf woman named Jackie Roth: “We felt retarded,” Roth said. “Everything depended on one completely boring skill, and we were all bad at it. Some bright kids who didn’t have that talent just became dropouts…. We spent two weeks learning to say ‘guillotine’ and that was what we learned about the French Revolution. Then you go out and say ‘guillotine’ to someone with your deaf voice, and they haven’t the slightest idea what you’re talking about—usually they can’t tell what you’re trying to pronounce when you say ‘Coke’ at McDonald’s.”
Such frustrations led students of Jackie Roth’s all-oral era—who are in their fifties or older today—to become leaders of the Deaf culture movement. But the younger generation that protested at Gallaudet in 2006 was educated differently, because the philosophy underlying deaf education had changed. Even before the Americans with Disabilities Act became law, separate was no longer deemed equal for any child with special needs, and a push for mainstreaming began in 1975, when Congress passed Public Law 94-142, a landmark piece of legislation that guaranteed free, appropriate public education for all children with disabilities. That law was amended in 1986 and again in 1990, when it was renamed the Individuals with Disabilities Education Act (IDEA). By 1986, only three out of ten deaf children still attended specialized schools like the residential ones that predominated in earlier times. Most of the rest were in public school classrooms of one sort or another, either in separate classes for deaf students or in mainstream classes with interpreters or resource teachers. But they still weren’t succeeding. A federal commission assigned to investigate deaf education declared in its 1988 report: “The present status of education for persons who are deaf in the United States is unsatisfactory. Unacceptably so. This is [our] primary and inescapable conclusion.”
By then, many students were getting at least some of their education in ASL. Beginning in the 1970s, deaf educators adopted “total communication.” The idea was that deaf students should learn through whatever means possible—spoken English, ASL, fingerspelling, writing, anything that worked. It was a wary compromise that seemed like a good idea in theory, but in practice this anything-goes approach left students fluent in neither English nor ASL and satisfied no one. The signing of many teachers in such classrooms was often what the deaf call “shouting”—throwing in a sign for a prominent noun or verb here or there. Or it was SimCom, short for “simultaneous communication,” which requires teachers to speak and sign at the same time—a difficult trick to pull off because ASL and English do not share the same syntax. Or, worst of all to proponents of ASL, there was Signed Exact English, which is not ASL at all. On the other hand, from the point of view of parents interested in oral education, some total communication classrooms seemed like very quiet places without enough speech, where good oral role models were in short supply.
For a time, a system called Cued Speech seemed promising. A set of handshapes flashed near the face to indicate phonemes, it was expressly designed to help with literacy, but it has worked less well with English than with other languages.
A new approach, known as bilingual-bicultural, emerged in the 1990s. In such schools, face-to-face interaction is in ASL; English is taught for the purposes of reading and writing. “Bi-bi” is popular in the Deaf world and Gallaudet has recently declared itself a bilingual-bicultural university. Here, too, success depends at least in part on the quality of the signing. The question is no longer “about whether deaf children can be appropriately educated in sign (at least within educated circles),” wrote Marc Marschark and Peter Hauser, but about “the subtle and not-so-subtle implications of varying degrees of sign language fluency.”
• • •
When I first read Marschark’s 2007 book, Raising and Educating a Deaf Child, it was a vast relief. It seemed remarkably fair and balanced, mostly by being willing to speak bluntly to both sides of the deaf education debate. In the first chapter, Marschark tackled two common beliefs that hung around like persistent storm clouds. “First, there has never been any real evidence that learning to sign interferes with deaf children’s learning to speak,” he wrote. “Second, there is no evidence that deaf children with cochlear implants will find themselves, as some had warned, ‘stuck between two worlds’ (hearing and deaf) and not fully a member of either.” His message was that deaf children need full exposure to a language—any language—from their earliest years and that parents need to be the ones providing it. “Effective parent-child communication early on is easily the best single predictor of success in virtually all areas of deaf children’s development.”
Marschark, who is hearing, began his career as a cognitive psychologist interested in language and metaphor. As a side project, he got interested in the way deaf kids used figurative language. Intrigued by the results, he retrained himself in child development and the issues of deaf children. Part of that effort involved reading all of the existing research literature, which he compiled into his first book, Psychological Development of Deaf Children, published in 1993.
On the strength of that book, Marschark was recruited by the National Technical Institute for the Deaf. NTID was a daunting prospect for someone who didn’t really know any deaf people or any sign language, but Marschark has been there for two decades and is now proficient in ASL. And yet, he has written, “as a hearing person I can never truly understand what it means to be Deaf or to grow up (deaf or hearing) in the Deaf community. I may be welcomed, and I may know more about deafness than many other people, but I still have to understand Deaf people and the Deaf community from my hearing perspective.”
That was just one of the things we talked about when I spent the afternoon in his office in Rochester. Bearded and bespectacled, Marschark looks like an academic. But he’s an academic on a mission. He speaks regularly around the country. Lately, he’s been giving a lot of presentations that all have the same theme: What We Know, What We Don’t Know, and What We Only Think We Know About ——. Fill in the blank with whatever subject Marschark’s been asked to tackle: cochlear implants, ASL, literacy, school placement, you name it. He’s managed to offend both sides, including his colleagues. “I warn everybody at the outset: Each of you is going to be upset. But I say, Don’t leave! Don’t get too upset, because in five minutes you’ll be happy and the person next to you will be upset.”
This series of talks is based on an epiphany Marschark had after a major research project in 2009. “I discovered that a lot of what we do in deaf education doesn’t have any evidence to support it, or people are cherry-picking,” he said. Since then, he has been on a crusade to force people to question their “religious” convictions about what they think they know to be true. Take, for example, the question of whether sign language gets in the way of learning English. People regularly send him e-mails with lists of research citations ostensibly proving him wrong when he says there’s no evidence that sign language interferes. But if you read the studies, he says, “what the research says is that if you want a child to be oral, if you want them to speak, they need to be exposed to spoken language.” That is not the same thing as saying that sign language poses a problem, nor is there any established threshold of how much oral experience is enough. “The data’s not wrong, but the conclusion is drawn from one side of the coin,” he says. Yes, it’s true that children in oral programs tend to speak better than children who are not in oral programs, but, Marschark points out, “they’re not randomly assigned.” Children who are likely to do well in oral programs—for instance, those like Alex with some residual hearing—go into those programs. Children who are not likely to do well tend not to go into those programs or to drop out of them along the way.
On the other hand, Marschark, who has long been a proponent of ASL for deaf kids, has dared to say lately that much of the argument for the benefits of bilingualism is based on emotion more than evidence. When we met, he was about to publish a paper that “was going to make people mad.” It argued that, so far, no one had shown convincingly that bilingual programs were working. “There is zero published evidence, no matter what anybody tells you, that it makes anybody fluent in either language.”
In the earlier edition of Raising and Educating a Deaf Child, published in 1997, Marschark was circumspect about cochlear implants. In the 2007 version, he supports them. What changed? “There wasn’t any evidence and now there is,” he told me. “There are things I believed three years ago that I don’t believe. It doesn’t mean anybody’s wrong; it just means we’re learning more. That’s what science is about.” One of Marschark’s favorite lines is: “Don’t believe everything you read, even if I wrote it.”
“So,” I ask him, “what do we know?”
What we know, he says, is that it’s all about the brain. “Deaf children are not hearing children who can’t hear; there are subtle cognitive differences between the two groups,” he says, differences that develop based on experience. Deaf and hard-of-hearing babies quickly learn to pay attention to the visual world—the facial movements of their caregivers, their gestures, and the direction of their gaze. “It’s unclear how that visual learning proceeds, but their visual processing skills develop differently than for hearing babies, for whom sights and sounds are connected.”
Marschark and his NTID colleague Peter Hauser have compiled two books on the subject, one called How Deaf Children Learn and a more academic collection, Deaf Cognition. The newly recognized differences affect areas like visual attention, memory, and executive function, the umbrella term for the cognitive control we exert on our own brains. Marschark and Hauser stress these are differences, not deficiencies. Understanding those differences, they argue, just may tell us what we need to do to finally improve the results of deaf education. “We have to consider the interactions between experience, language, and learning,” they write.
Such differences would seem to be obviously true of deaf children with deaf parents—and they are. Neuroscientist Daphne Bavelier worked for years with Helen Neville before heading her own lab at the University of Rochester. Bavelier, together with Peter Hauser, who was a postdoctoral fellow in her lab, has continued Neville’s work on visual attention. Among other things, she has shown that deaf children have a better memory for visuospatial information than hearing children do, that they attend better to the periphery, and that they can shift attention more quickly. The ability is probably adaptive, as it helps them notice possible sources of danger, other individuals who seek their attention, and images or events in the environment that lead to incidental learning. This skill may have a downside, however, because it also makes them more distractible.
Bavelier and others study deaf children of deaf parents, since those kids receive little or no sound stimulation and they learn ASL from birth. Such a “pure” population makes for cleaner research as it limits the variables that could affect results, and it could be said to show what’s possible in an optimal sign language environment. But it also limits the relevance of such work. “Deaf of deaf” are a minority within a minority. The other 95 percent of deaf and hard-of-hearing children are more like Alex—born to hearing parents and falling somewhere along a continuum in terms of sound and language exposure, depending on when a hearing loss is identified, what technology (if any) is used, and how their parents choose to communicate. One thing that no one can control is who your parents are.
Children with cochlear implants are another minority within a minority—though they may not be a minority for long. Research into their cognitive differences has barely begun, but what there is sits squarely at the intersection of technology, neuroscience, and education. “No part of the brain, even for sensory systems like vision and hearing, ever functions in isolation without multiple connections and linkages to other parts of the brain and nervous system,” notes David Pisoni, a contributor to Deaf Cognition. “Deaf kids with cochlear implants are a unique population that allows us to study brain plasticity and reorganization after a period of auditory deprivation and language delay.” He believes differences in cognitive processing may offer new explanations for why some children do so much better with implants than others.
To that end, Pisoni and his colleagues designed a series of studies not to show how accurately a child hears or what percentage of sentences he can repeat correctly, but to try to assess the underlying processes he used to get there. In one set of tests, researchers gave kids lists of digits. The ability to repeat a set of numbers in the same order it’s heard—1, 2, 3, 4 or 3, 7, 13, 17, for example—relates to phonological processing ability and what psychologists call rehearsal mechanisms. Doing it backward is thought to reflect executive function skills. Even when they succeeded, which they usually did, children with cochlear implants were three times slower than hearing children in recalling the numbers. Looking for the source of the differences in performance, Pisoni zeroed in on the speed of the verbal process in working memory, essentially the inner voice making notes on the brain’s scratch pad. From these and other studies, he concluded that some brain reorganization has already taken place before implantation and that basic information processing skills account for who does well and who doesn’t with an implant. Those with automated phonological processing and strong cognitive control are more likely to do better.
For his part, Marschark is particularly interested these days in “language comprehension,” a student’s ability to understand what he hears or reads and to recognize when he doesn’t get it. Marschark tested this by asking deaf college students to repeat one-sentence questions to one another from Trivial Pursuit cards. Those who communicated orally understood just under half of the time. Those who signed got only 63 percent right. All were encouraged to ask for clarification if necessary, but they rarely did so. Whether they were overly confident or overly shy, the consequences are the same. These kids are missing one-third to one-half of what is said. You can’t fill in gaps or ask for clarification unless you know the gaps and misunderstandings exist; and you have to be brave enough to speak up if you do know you missed something.
Hauser has been exploring the connection between language fluency and the development of executive function skills. “Executive function is responsible for the coordination of one’s linguistic, cognitive, and social skills,” he says. The fact that many deaf children show delays in age-appropriate language means they may also be delayed in executive function. Too much structure or overprotectiveness—something parents of deaf kids are prone to—compounds the problem by further stifling the development of executive function skills. So far in the study, deaf of deaf are on par with hearing children, “so being deaf itself isn’t causing the delay,” Hauser says. When we met, he was starting to collect data on deaf children with hearing parents.
If researchers can continue to pin down such cognitive differences, says Marschark, they might be able to pass that useful knowledge along to teachers. “Bottle them and teach them in teacher training programs,” he suggests. “Here’s how you can offset the weaknesses, here’s how you can build on the strengths.”
“That would be great,” I acknowledge.
I’d also been struck, however, by how much keeping up with research on deaf education feels like following a breaking news story on the Internet. It leaps in seemingly conflicting directions simultaneously. Furthermore, not all of it feels relevant to Alex.
“Absolutely,” Marschark agrees. “Kids today are not the kids of twenty years ago or ten years ago or even five years ago. Science changes, education changes, and—most important—the kids change. And one of the problems is that we as teachers do not change to keep up with them.”
The one constant, he points out, is that they are all still deaf, and no one—not deaf of deaf or children with cochlear implants or anyone in between—should be considered immune from these cognitive differences. Those with implants, he says, will miss some information, misunderstand some, and depend on vision to a greater extent than hearing children. (The latter is a good thing because using vision and hearing together has been shown to consistently improve performance.) As Marschark and Hauser asked rhetorically in Deaf Cognition: “Are there any deaf children for whom language is not an issue?”
19
A PARTS LIST OF THE MIND
David Poeppel is pulling books off the groaning shelves in his office at New York University. I’ve come back to see him to talk about language, the other half of his work.
“Open any of my textbooks,” he says, holding one up. “Why is it that any chapter or image says there are two blobs in the brain and that’s supposed to be our neurological understanding of a very complex neurological, cognitive, and perceptual function?”
What he is objecting to is the stubborn persistence of a particular model depicting how language works in the brain. “We’re talking about Broca’s area and Wernicke’s area,” he says. “It was a very, very influential model, one of the most influential ever. But notice … It’s from 1885.” He laughs ruefully. “It’s just embarrassing.”
In the 1800s, the major debate among those who thought about the brain was whether functions were localized or whether the brain was “essentially one big mush,” as Poeppel puts it. The evidence for the mush theory came from a French physiologist named Jean-Pierre Flourens. “He kept slicing off little pieces of chicken brain and the chicken still did chicken-type stuff, so he said it doesn’t matter,” says Poeppel.
Then along came poor Phineas Gage. In 1848, Gage was supervising a crew cutting a railroad bed in Vermont. An explosion caused a tamping iron to burst through his left cheek, into his brain, and out through the top of his skull. Amazingly, Gage survived and became one of the most famous case studies in neuroscience. He was blinded in the left eye, but something else happened, too.