ChatGPT, a newly popular artificial intelligence program, may help teachers plan lessons and grade papers. But it can also help students cheat, and it introduces numerous other challenges, leading districts across the country, including the New York City Department of Education, to ban the software altogether.

“These are hard decisions schools need to make, but they should not be made out of fear,” said TC’s Lalitha Vasudevan, Vice Dean for Digital Innovation and head of the Digital Futures Institute, in an interview with the Washington Post. “They should be made within the scope of improving student learning.”

We followed up with Vasudevan and other education technology experts from across the TC community to weigh in.

Is ChatGPT Untenable for Schools?

For some, perhaps. But the complex advances of ChatGPT may simply provoke new, and much-needed, questions and answers about the future of teaching.

“The danger is that [educators] start relying on those tools before they have the in-depth knowledge about teaching and classroom management and lesson planning design,” TC’s Paulo Blikstein, Associate Professor of Communication, Media and Learning Technologies Design (CMLTD), told Education Week. “The danger there is that the technology will drive the teaching and not the other way around.”

But education has often been wary of technological advancements, Vasudevan notes, pointing to the graphing calculator, which has since become foundational to high school math classes around the world. Educators may not need to work against ChatGPT, but rather learn how to minimize the risks it introduces and welcome the opportunity to think critically about its use in education.

“Exploring the question of what it means to be a good collaborator with machines reintroduces the idea of what it means to teach, learn, the relationships we have, as well as where and how we focus our efforts,” says TC’s Jin Kuwata, who also teaches in CMLTD. “I see these kinds of tools as not just things that complete tasks, but entities we learn to work with in partnership to enrich our understandings of the world.”

But What About Cheating Concerns?

OpenAI, the company behind ChatGPT, is already working on accompanying software that can help educators detect text generated by the program.

Such detection tools, along with building trust between students and teachers, could be key. For Blikstein, the unspoken social contract between an educator and a student is built upon mutual “human” investment in working together. AI tools complicate that.

“If we start using AI for both sides — student and teacher — these kinds of ethical contracts need to be reconsidered and made transparent to both parties,” explained Blikstein. “It becomes this weird learning environment [where] people don’t trust each other anymore.”

Additional tools and trust for the future, sure. But for now, can school districts rely on bans to limit academic dishonesty?

Many districts have simply blocked the platform from school-issued devices and networks, but, as with any banned entity, demand and usage likely persist, especially if students can still access the platform from other internet networks and devices.

“The ban is a realistic policy reaction in the short-term, when we are uncertain about the tool’s value and risks, but it is not a longer-term solution,” says TC’s Renzhe Yu, Assistant Professor of Learning Analytics/Educational Data Mining, who is encouraging his students to try ChatGPT themselves to explore potential uses.

“It’s researchers who are responsible at this point to work with teachers to understand the best way to incorporate these AI tools to leverage its power, to better help students in mastery of knowledge, AI literacy and other kinds of things.”

What Other Implications Does AI Have for Education?

Several, in short. A few top concerns include:

  • AI automating more professions and altering career prep in schools: “What does the future of coding and computer science education become when generative AI like ChatGPT can write and execute code?” says TC’s Ezekiel Dixon-Román, Professor of Critical Race, Media, and Educational Studies and Director of the Gordon Institute for Urban and Minority Education. “The social and educational implications are significant. These forms of technological affordances will not only have an impact on the processes of teaching and learning but also on schooling’s reproduction of the social by way of who’s trained to use in contrast to who’s trained to design the systems, ultimately framing the discourse of knowledge.”
  • AI upholding and promoting racist structures: Racial biases and colonial structures are upheld by algorithms across the internet, including in AI programs, explains Dixon-Román. “We need to all take a step back and think about what such mediums can do,” says Dixon-Román, noting that AI programs like ChatGPT gather and process information from across the internet with weak and limited safeguards. “Not only is ChatGPT limited to internet data prior to 2021, it inherits much of the existing colonial logics of racial hierarchies that permeate the internet and social media in insidious and explicit ways. This is not so much by way of identity and representation, but more so in its inability to engage in reflection and critique of given data, leaving it open to racialized framings, characterizations, and interpretations.”
  • AI promoting misinformation: “What [generative AI tools] produce as a response doesn’t guarantee anything about whether the information is true or not,” explains Yu. “And because [these programs] learn a lot about how people express themselves, most of their output will follow how human beings communicate in a very confident tone, even if it is wrong.” This phenomenon, Yu says, reinforces long-standing calls for students to develop media literacy to help them navigate misinformation across the internet.

What Happens Next?

“We cannot put the genie back in the bottle. ChatGPT is an opportunity to engage more of our teachers, more of our educators in understanding the power and potential of technology,” says TC’s Ellen B. Meier, Professor of Computing and Educational Practice. “Tools such as ChatGPT can present teachers with new pedagogical opportunities to move away from transmission teaching and begin to design more active, culturally relevant learning environments. They challenge us to think creatively about the very process of education.”

In that sense, changing how students are taught, even with tools as innovative as AI, is just another chapter in the long-running work of improving education.

“If the things that we used to put so much effort into in teaching can be automated, then maybe we should rethink what the actual goals and experiences are that we should work toward in the classroom,” explains Vasudevan.

So what does that process look like? “We have a real chance to say, ‘Let’s center student and teacher voices and experiences about their concerns and their interests,’ and that’s how TC can play a role,” says Vasudevan, who is focused on that work with her colleagues at TC’s Digital Futures Institute, which just released further insights from TC faculty on ChatGPT and will soon create “open spaces for discussion” to demystify and engage with AI tools.

“We’re still in the relatively early days of this consumer-accessible AI, but our everyday lives have been saturated with this kind of AI for a while…Now, we have an opportunity to upend that a bit to support teachers to be inquirers the same way they would inquire a text for a subject,” says Vasudevan. “The key question now is how can we improve the context in which people are brought in — so students and teachers are partners in the decision-making about how to use these tools rather than just users.”