Cutting Through the Noise
“I don’t understand, Dad,” rages the narrator of Raymond Luczak’s poem “Practice,” smashing his hearing aid with the telephone receiver. The poet himself, who lost most of his hearing after being stricken with double pneumonia as an infant, has passed a similar verdict on audio technology, declaring that it renders sound “noisy and meaningless.”
Michael Sagum, a first-year student in TC’s Deaf and Hard of Hearing (D/HH) program, takes a somewhat different view. Born profoundly deaf in both ears, Sagum relied on hearing aids and FM listening systems as a mainstreamed student in Seattle. He received auditory and speech therapy from the time his hearing loss was diagnosed, at the age of 10 months.
At 15, Sagum elected to receive a cochlear implant (CI). He still gets emotional when he remembers the moment the implant was activated and he heard birds chirping. Sagum says that hearing his own speech is a “source of pride” but is careful to emphasize that he respects all modes of communication for the deaf. He believes his years of auditory and speech therapy have been critical to the success of his cochlear implant. He has also chosen to learn American Sign Language (ASL), which he calls “a very important part of my life.”
To hear or not to hear?
As technology improves, that’s increasingly the question for many people who are deaf or hard of hearing, and it speaks to the very nature of identity and cognition.
“Does language map out what you already know, does language dictate thought or does language add to the cognitive map?” Robert Kretschmer, Associate Professor of Education and Psychology, asks students in his first-year course, Language Development and Rehabilitation. “As educators and researchers, we are obligated to ask how children process the world if they do so without the sense of hearing.”
Kretschmer, who assigns readings on the language development theories of linguists such as Benjamin Lee Whorf and psychologists such as Jean Piaget and Lev Vygotsky, raises a host of issues related to the cognition and education of D/HH children. Is signing an absolute equivalent of spoken language? Will a child whose deafness goes undiagnosed past the age of three experience lifelong learning and thinking deficits? With the use of assistive technologies, will a deaf child identify more with the hearing or the signing world? How long does it take to become a fluent signer? What is the impact on the child of the parents’ approach to language?
The deaf and hearing worlds have been debating these questions, in one form or another, since the prototypes for modern assistive technology emerged late in the 19th century.
In one camp, many deaf individuals and organizations that speak for them have described deafness as a unique culture, with its own language and modes of social interaction, rather than as something to be cured. The National Association of the Deaf has yet to fully endorse cochlear implants, a technology introduced in the 1970s. The Association says CIs are not appropriate for all deaf and hard of hearing children and adults and are not a cure for deafness, and that success stories with implants should not be overgeneralized. In the other camp, most of the hearing world, as well as some schools for the deaf, did not recognize ASL as a legitimate language until the mid-1990s. Under the federal Individuals with Disabilities Education Act, deaf children – like all children with disabilities, and no differently from their typically developing counterparts – are now accorded a “free and appropriate education” at schools in their own neighborhoods. And, according to a directive that proponents of deaf culture view as comparable to policy regarding English Language Learners, public schools must “consider the communications needs” of D/HH children and provide “opportunities for direct instruction in the child’s language and communication mode.”
But which mode?
Hearing aids, which evolved in part from the work of Alexander Graham Bell and were first marketed in behind-the-ear form in the 1950s, amplify sound but do not separate out speech from ambient noise. Nor can they adequately amplify high pitches, particularly high female voices.
Personal FM systems have proved somewhat more effective, especially in the classroom. The teacher speaks into a microphone, usually a lavalier. The student receives the signal, via radio waves, through an FM receiver that may or may not include a hearing aid interface. FM systems improve the speech-to-noise ratio – the level of the voice speaking into the microphone relative to background noise such as scraping chairs, air conditioners and other students. The focus on the microphone wearer’s voice can be further heightened by a cochlear implant. But there are drawbacks: The FM signal may be degraded by radio interference, and the system may be too complex for young children to use without help.

And then there are cochlear implants themselves, which have stirred the greatest hopes – and controversies. Unlike hearing aids, which rely on inner ear hair cells to convert vibrations into nerve signals, cochlear implants stimulate the auditory nerve directly, which then sends information to the brain. Some implant recipients, such as TC’s Michael Sagum, praise the device for enabling them to learn to speak and interact with hearing people. Others who are deaf or hard of hearing argue that, at best, implants can only approximate an ability they will never fully have.
“What’s the point of using a CI if it does not do anything for me except make me aware of environmental noises?” says Russell (“Rusty”) Rosen, a lecturer in the D/HH program at TC who is deaf. “Hearing is not the only means of obtaining information and communicating with people.”
Graduates of TC’s D/HH program must accommodate all these views when they work in schools and other settings.
“The children I work with are cochlear implant and hearing aid users, and most rely on FM systems in school,” says Dana Selznick (M.A., M.Ed ’10), who is a hearing education teacher for the New York City Department of Education. “What you learn right away is that you have to integrate the technology based on knowing the child’s unique needs. Each child responds to new assistive technology differently, which is why it is so important to understand the learner as a whole. For example, teachers have to train the kids to understand the difference between voice qualities when using the FM system.”
Dale Atkins (M.A. ’72), a TV and radio personality who graduated from the College’s deaf education program and has worked extensively with D/HH children, their families and professionals in the United States and abroad, says that assistive technologies have not changed the basic equation facing children with hearing issues. “The cognitive and learning issues that existed before the era of cochlear implants haven’t really disappeared,” Atkins says. “Certainly, the earlier a child is implanted, the earlier he’ll have access to language and good speech patterns, and the better off he’ll be. The problem, though, is that hearing people have very high expectations for the cochlear implants and tend to place too much faith in technology. I worry about the kids whose family members, teachers, coaches and friends assume that the kids are doing very well when, in fact, deaf children miss some pretty significant elements of classroom instruction and after-school life.”
Atkins says that in mainstream classrooms, “most kids who are deaf or hard of hearing sit in front of the class so that they can see the board and teacher. But what happens when a student behind them says something? The deaf child may miss all or part of what was said, whether she has a cochlear implant, a hearing aid or use of an FM system.” Atkins says she has addressed this issue by asking students to repeat what they said – a benefit for all concerned. But in a fast-moving classroom, lunchroom or social situation, such instant replay isn’t always possible.
Will technology ever elevate deaf or hard-of-hearing students to the status of equal players in a hearing world?
Rusty Rosen is skeptical.
“Every generation has a ‘true believer’ faith in a particular technology,” he says. “In the late 1960s, when I was a student at the Lexington School for the Deaf in Queens, people thought hearing aids were a cure-all. Chairs were arranged in a semicircle. The students wore headphones and the teacher spoke into a microphone. But what I heard through my hearing aids were mechanical sounds, not organic human voices. So, even though the technology was focused on auditory intake, the classroom was really only set up for visual learning.” More recently, Rosen recalls a school board member – a physician, no less – who rejected tenure for an ASL teacher on the grounds that cochlear implants had made deafness obsolete. “We’re just not there,” he says.
Atkins believes technology plays a vital role, particularly for those who have some hearing. “Signing is an important part of deaf communication for a large segment of the D/HH population, but it doesn’t do anything for people who are basically auditory learners.” Still, she says, the ultimate role of deaf-education specialists and other experts is to help the deaf and hard of hearing cut through the noise around the technology-versus-signing debate and find the best individual solutions for themselves and their families.
“People are starting to understand that the conversation about deafness can’t exclusively be about technology,” Atkins says. “It’s about celebrating children for the unique and precious people they are. Realizing that is a human advance, not a technological one.”