This article is the last in a series about the risks and rewards of generative AI, incorporating perspectives from faculty affiliated with the Digital Futures Institute (DFI) and the Edmund W. Gordon Institute for Urban and Minority Education. We’re finishing off with Ezekiel Dixon-Román, Professor of Critical Race, Media, and Educational Studies and Director of the Edmund W. Gordon Institute for Urban and Minority Education.

Fourteen months after the explosive launch of ChatGPT, the future of generative artificial intelligence remains murky. The technology has the potential to be a powerful tool for teachers, but misinformation is a looming threat, especially without comprehensive oversight policies. For TC’s Ezekiel Dixon-Román, however, the more interesting avenue of exploration is how generative AI connects to technological systems of oppression throughout history, and how this evolving technology might become a tool for liberation.

As Director of the Gordon Institute, Dixon-Román continues the Institute’s work of improving educational outcomes for marginalized populations. His independent scholarship explores how “sociotechnical systems of quantification” promote racialization and modes of othering, such as how the eugenics movement used measurement and statistics to justify the subjugation of millions, if not billions, of people.

Ezekiel Dixon-Román, Professor of Critical Race, Media, and Educational Studies and Director of the Edmund W. Gordon Institute for Urban and Minority Education. (Photo: Bruce Gilbert) 

Dixon-Román spoke with TC about the colonial roots of AI and how, by embracing the alienness of machinery, we can use technology to uplift the marginalized.

Using Generative AI Could Change How We Construct Meaning

Generative AI tools, and large language models (LLMs) more broadly, have the potential to upend the way we understand texts because of how the technology works. Dixon-Román argues that “what makes something meaningful — and what gives something knowledge or understanding — has every bit to do with the reference, the intentions.” Because the tools are non-conscious and therefore incapable of intent, AI-produced content carries no intention of its own; we are stepping into a world where users “backward form” meaning onto the outputs.

Dixon-Román sees a future where the source of knowledge, rather than knowledge itself, becomes the valued commodity. That emphasis on sourcing is already at work amid the preponderance of fake news since 2016, but it becomes even more crucial when there is little to no information about what data was used to train an LLM or how that data was labeled.

“We all should wonder about what was the framework of social, cultural, ethical, and political norms and concerns that was used in the labeling process, because what might be violent to me may not be violent to you and vice versa,” says Dixon-Román.

(Photo: iStock)

Why Bias Is Inevitable in Generative AI

Because generative AI tools like ChatGPT and Midjourney are trained on massive corpora of material absorbed wholesale into the underlying model, they can easily reproduce harmful rhetoric or reinforce bias, and a lack of regulation makes oversight challenging. For scholars like Dixon-Román, however, the bias of generative AI is embedded in its very foundational technologies.

“Part of what one can begin to trace and unpack is how the very mathematics of the models themselves [are grounded in a colonial logic],” says Dixon-Román. That is, the core technology is intertwined with systems of oppression. As just one example, statistical concepts like standard deviation and the correlation coefficient were developed by the same thinker who coined the term “eugenics,” originated the movement, and used statistics and psychometrics to bolster race science.

This long history of bigotry in the quantitative social sciences, fields often assumed to be immune from human bias, creates a key concern for Dixon-Román when it comes to generative AI: “What do these models then do with that which becomes the deviant? How does it handle, manage, or discipline them?” he asks.

How Biased Technologies Cause Unintentional Harm

This concern is arguably best encapsulated in how generative AI handles, or fails to handle, inputs from non-English speakers. Though ChatGPT is claimed to support more than 90 languages, 93 percent of its training data in 2020 was in English. As a result, non-English-speaking users of AI tools are often unable to use them properly, instead getting poorly translated outputs or receiving responses in English.

Beyond a subpar user experience, this English-language bias can produce inequities. Dixon-Román calls attention to an AI-powered learning analytics platform used in schools across the U.S. that, due to the Anglocentrism of the technology, does not function well for English language learners, if it functions at all. The language bias therefore produces a “techno-social system in classrooms...that is producing inequities as a result,” explains Dixon-Román. That bias is just one of several limitations of the platform, which ultimately reinforces a very narrow style of writing while punishing more creative styles.

As such, Dixon-Román suggests that this technology, which is marketed as aligned with state assessment standards, may violate the Every Student Succeeds Act, a federal law enacted in 2015 that, among other things, ensures equitable education and assessment for English language learners.

(Photo: iStock)

How AI’s Bias Creeps In

While the limitations of artificial intelligence are clear to experts like Dixon-Román, another part of the user experience plays a critical role in proliferating bias: our instinct to personify machines. When we argue with Siri, when Joaquin Phoenix falls in love with a virtual assistant in Her, or even when we feel lost without our phones, the line between person and machine blurs. This kind of personification is an innately human response, and studies suggest that anthropomorphizing objects can alleviate loneliness. Human beings also tend to recognize faces and ascribe personality to objects; think of how common it is to treat a Roomba robot vacuum like a pet, or how much empathy “Wall-E” elicits from viewers with its cute robot protagonist.

Personification is mostly harmless, but when it comes to AI, that instinct can prime users to absorb misleading or inaccurate responses, especially if the tool is emulating a real person. According to Dixon-Román, giving a name to a non-conscious, unfeeling tool builds trust and makes it easier for users to fall prey to misinformation.

When asking an AI tool to respond like a celebrity or to answer a question from the perspective of a historical figure, users have to trust that the response comes from a reliable source (an exercise in faith, given the opacity around training data), that it is relevant to the topic, and that the answer is in line with the perspective of the person being simulated. Dixon-Román draws comparisons to data visualization, noting how framing devices “are doing performative work…in order to produce a particular form of narrative.”

In humanizing AI, users may lose sight of the truth: that the tool is non-conscious, without intent, and incapable of knowing or understanding.


How Can We Use Technology as a Tool of Resistance?

Both in his own work and with his students, Dixon-Román encourages people to think deeply and expansively about what technology is. At its core, technology is a tool or set of practices that solves a problem, and it can encompass far more than hardware and software. Once we expand how we conceptualize technology, it becomes apparent that “there's not just discovery and brilliance, but also the processes of resistance, of subversion [in the development of technology],” says Dixon-Román.

For Dixon-Román, Harriet Tubman’s work is itself a form of technology: she liberated herself from slavery, mapped a path to safety along the Underground Railroad, and then expanded that process to free dozens of enslaved people over 13 trips without losing a single person. In accepting that, we can then imagine how technology “can do political subversive resistance work or even affirming work,” according to Dixon-Román.

The key is to recognize that “technology is an instrument we need not place a human behind,” says Dixon-Román. “The minute we place a human face on it, we’re already arguably doing colonial work.”