As the world approaches a critical inflection point for artificial intelligence’s integration into everyday life, experts must fully interrogate the “racial hauntings” embedded in the technology itself. Such was the focus of this year’s Annual Sachs Lecture — hosted in conjunction with the 9th Annual Lecture from the Edmund W. Gordon Institute for Urban and Minority Education — and delivered by TC’s Ezekiel Dixon-Román.

“Computational methods have the potential to radically change public policy, human and educational services and city governance,” explained Dixon-Román, Professor of Critical Race, Media, and Educational Studies and Director of the Gordon Institute. “Yet, cybernetic systems do have sociopolitical implications. Although these automated systems of technology are purported to be more efficient, precise, and objective than the human, it has now become widely known that the technologies are masking the reproduction of inequalities and social histories of sociopolitical violence, further indicating that maybe the human bar is not enough.”

Find key takeaways from Dixon-Román’s lecture — a preview of his forthcoming book, Haunting Algorithms — and watch the discussion below.

Representation is not enough to create ethically fair and politically innocuous artificial intelligence.

Captivating an audience in Milbank Chapel and online, Dixon-Román suggests that the development of structures — AI or otherwise — inherently encompasses the “instrumentalizing of power” and the endurance of racial logics.

“The question I'm wrestling with is: if all we need are better representations, then I must ask, why do racial violence and subjugation persist, such as [in the case of] Tyre Nichols?” asked Dixon-Román a month after the death of the 29-year-old Black man at the hands of Black police officers.

“If representation in training data, data scientists or designers is what's needed, then why does algorithmic bias continue even when these forms of representation are met?”

AI tools and their mathematics are built on racial logics from colonialism and the Enlightenment.

Dixon-Román suggests that post-Enlightenment ideals such as universal reason and hierarchies of humanity have informed the desires of the creators of technology to reproduce their interiority in the exteriority of the machine.

“Through cybernetics, man sought to instrumentalize interiority in the development of self-regulating, self-generating recursive systems of artificial intelligence,” explained Dixon-Román, who joined TC in January from the University of Pennsylvania. “Each phase of instrumentalizing interiority in exteriority was not simply with the desire of objective and impartial observations for reason, but with the interest of efficiency, reliability, speed, and maximizing of capital accumulation.”

It is precisely these principles that fuel fallacies in algorithmic governance that can promote colonial ideals, Dixon-Román outlines.

Associate Professor Limarys Caraballo and Tisch Lecturer Ezekiel Dixon-Román sit down for a Q&A session with the audience.

Measurement and quantification can be reimagined to build better artificial intelligence tools.

Dixon-Román and TC’s Lalitha Vasudevan, Vice Dean for Digital Innovation and Director of the Digital Futures Institute, will together explore alternative possibilities for AI in education. The two plan to develop “a [Gordon Institute & DFI] collaboration focused on technology, ethics and justice — a necessary program of research that Teachers College is already speaking to and well-situated to lead the scholarly discourse toward the shaping of alternative and justice-oriented technosocial futures.”

Watch Dixon-Román’s full remarks above.