Walls with Ears | Teachers College Columbia University


Walls with Ears

TC's newly renovated Neurocognition of Language lab ensures that everyone gets heard
By Joe Levine

In the conversation in Karen Froud’s lab, even lemurs have a voice.

Blue-eyed black Madagascar lemurs, to be precise. Recordings of their raucous burble frequently emanate from the Neurocognition of Language Lab on the 11th floor of Thorndike Hall, a fitting reminder that brain research demands a meeting of diverse minds.

“Neuroscience isn’t something you can or should do by yourself,” says Froud, Associate Professor of Neuroscience & Education and Speech & Language Pathology. “TC is unique in providing the opportunity to collaborate with so many different people, to study such a range of phenomena and behaviors, and especially, to think about it all in the framework of what’s actually going on in society.”  

When Froud renovated her lab last summer, her goal was to create an environment that would better support such work. Partly because money was tight, but also because Froud’s students possess an astonishing array of relevant talents, they did much of the work, tailoring the space for highly specialized uses.

“When you come here, you become a jack of all trades: part speech pathologist, part physicist, part electrical engineer,” says Trey Avery, a Ph.D. student who, as Froud’s lab manager, planned much of the renovation in collaboration with staff from around the College. 

What makes TC's Neurocognition of Language Lab truly special is a diverse group of students, with a wide range of perspectives and experience.


The reconfigured space, which is twice as big as before, gives Froud room to expand research activities and work towards the launch of a new Ph.D. program in Neuroscience & Education. (Until now, her doctoral students have received degrees from either TC's Speech & Language Pathology program or other departments across the College, such as Human Development or Clinical Psychology.) The lab also has added a second 128-channel electroencephalography (EEG) system – the primary methodology Froud and her team use to pinpoint the brain's real-time responses to specific stimuli – so that the collection of publishable data can proceed uninterrupted while students train nearby. And with the construction of additional sound-proofed EEG testing booths, the conversation never has to stop.

“This is our concept of a classroom of the future, combining lab with teaching space,” Froud says.

The model of student apprenticeship that Froud is developing may be unique in a field that tends to be hierarchical and narrowly structured.

“In our lab, I’m surrounded by people from all over the world, with all kinds of expertise that I don’t have – in education, in the clinic, in the policymaking arena,” says Froud, who is British and began her career as a speech pathologist before turning to linguistics and then neuroscience. “They come to me and say, ‘I’m interested in how children learn to read,’ or, ‘How does poverty affect education?’ or ‘How does what we do in a speech/language clinic change the way children are able to talk or to perceive sound?’ – questions that have the potential to inform a clinical intervention for some population in need. And what we’re able to do, I hope, is bring these huge questions down to an actual experimental manipulation.” 

The lemur recordings, for example, are part of the research that Froud and a former student, Reem Khamis-Dakwar, are conducting on a speech disorder called apraxia. The condition, which manifests primarily as difficulty producing complex sounds, is widely thought to be one of motor coordination. However, linguistic analyses have found that the speech of adult stroke patients with apraxia lacks evidence of co-articulation – the ability to physically shape upcoming sounds while still producing the current one, so that speech unfolds in a continuous sequence. As linguists themselves, Froud and Khamis-Dakwar know that co-articulation depends on another ability called underspecification, a perceptual skill the brain employs to screen out unimportant variations in speech sounds and attend to those that are most relevant.

“For example, if you grew up speaking Hindi, you recognize subtle shadings in the pronunciation of sounds beginning with the letter p, because there are many variations of p in that language that affect meaning,” says Froud. “But if you grew up speaking English, you don’t notice those differences, because subtle changes in the p sound have no importance. So as we master a language, we naturally begin to underspecify.”

Perhaps children with apraxia don't underspecify, Froud and Khamis-Dakwar reasoned. That would mean apraxia is at least partly related to problems in the brain's sound-processing systems or sub-systems, and it would explain why children with the disorder don't respond to traditional speech therapy approaches that focus on movements of the tongue and palate.

Funded by the Childhood Apraxia of Speech Association of North America, the two researchers have since brought a steady parade of children to TC to listen to various sounds while electrodes on their scalps record their brains' responses to contrasts among those sounds. From spikes on a readout graph, Froud and Khamis-Dakwar can tell when a child recognizes differences in speech sounds as distinctive and meaningful rather than unimportant. The work is painstaking, requiring patience with children and tight control of testing conditions to ensure that the brain activity recorded during EEG monitoring is in fact triggered by the stimulus being tested, rather than by environmental effects such as conversation, extraneous noise, light or electrical activity.

“Children are really unpredictable, and you end up throwing out half your data. So it’s really important to have good sound baffling and lower-intensity lighting,” Froud says. “When you work with really challenging populations, the rule is: ‘Control all the factors you can.’”   

Many researchers steer clear of these kinds of experiments because they’re so difficult to control, Froud says. But the payoff from such work is the possibility of helping to improve lives right now.

“We’ve found so far that kids with apraxia do indeed specify differently from other children, though the logic of when and why is not yet clear,” says Khamis-Dakwar, who now teaches at Adelphi University. “If we’re right, a condition that’s been thought to be very complex could turn out to hinge on something relatively simple. Although that may not make it any simpler to remediate.”

While their work holds much promise for significant breakthroughs, Froud and her colleagues have found that grant-making organizations, particularly amid increased competition for limited funds, are still developing mechanisms to support a multidisciplinary approach. The Institute of Education Sciences (IES), the research arm of the U.S. Department of Education, tends to fund efforts that compare the effectiveness of one school-based intervention to another using tests and behavioral measures, while the National Institutes of Health has directed funding towards more basic brain science research rather than the applied educational and clinical areas the Froud lab is exploring.

“You see a lot of studies focused on behaviors and a lot of other studies focused on genes, but there’s a real need to fill the gap in between, and that’s where I think the kind of EEG studies we’re doing can really make a difference,” says Khamis-Dakwar.

“We’re asking not only: ‘What’s better, program A or B?’ but also: ‘Who does each program work for best and why?’” says Chaille Maddox, a post-doctoral student who is exploring different reasoning processes in tasks that do not draw on language.

Froud takes a longer view. “A wise man once told me, ‘If you do research and end up only with answers, you’re doing it wrong. You should be ending up with more questions than answers.’ And so far, by that measure, we’re doing it right.”

Visit the Facebook page of TC's Neurocognition of Language Lab.

Published Wednesday, Mar. 12, 2014