This article is a part of a series about the risks and rewards of generative AI, incorporating perspectives from faculty affiliated with the Digital Futures Institute (DFI) and Edmund W. Gordon Institute for Urban and Minority Education. We’re starting things off with Lalitha Vasudevan, Professor of Technology and Education, Director of the Media and Social Change Lab, and Vice Dean for Digital Innovation and Managing Director of DFI.
At the beginning of this year, New York City’s Department of Education announced a ban on ChatGPT because of concerns about students using the chatbot to, essentially, cheat. Then, not even six months later, the ban was reversed. The landscape around generative AI is in near-constant, almost unprecedented flux, but one thing is clear: generative AI is here to stay, whether we like it or not, and teachers will undoubtedly encounter it in the classroom.
Vasudevan, while acknowledging the risks entailed in generative AI, sees this as an opportunity for teachers to give their students the ability to wade through the muck of misinformation.
How Did We Get Here and What Can Teachers Do About It?
The current information landscape, where the risk of mis- and disinformation seems to lurk around every corner, has been in the making for many years, but the prevalence of misinformation boomed during the 2016 election cycle. Fake news certainly played a role, prompting a robust mistrust of both traditional media and social media, but Vasudevan also points to a 2017 deepfake (a video that uses deep learning tools to manipulate a person’s likeness and voice) of former President Barack Obama. The video was an experiment by University of Washington researchers, but more broadly, the rise of deepfakes threatened to permanently alter people’s relationships to media.
Over time, the technology advanced further and at this point “there is nothing that's taken for granted. There is no stable ground because not only can everything be questioned, everything can be fabricated and made up,” says Vasudevan. “One of the things that schools of education like Teachers College can do is really try to raise a critical eye to the intersection of production and consumption of media.”
A key part of developing that critical eye revolves around fostering media literacy in young people. But to truly encourage media literacy that is useful, new technologies need to be allowed into schools. “Where are the opportunities for young people who are in K-12 to not only be consumers of media, of texts, of apps, but to also be critical producers?” asks Vasudevan. “School still plays a role in creating opportunities for that kind of critical dialogue…with other people.” With that in mind, institutions like TC should focus on empowering educators to bring developing technologies into their classrooms and become “co-investigators” with their students.
What it Means to Co-Investigate Generative AI
“That's what we've tried to do at DFI: create lots of play-based opportunities for people to play, engage, tinker,” says Vasudevan. Part of that support comes from the ongoing Demystifying AI in Education seminars, another from a series of videos about the risks and rewards of generative AI, but Vasudevan also stresses the importance of teachers experimenting alongside their students in the classroom.
“...Be co-investigators, try some experiments, and then be open to the fact that the things you think we've figured out will continue to change,” she says.
By experimenting in the classroom — whether by giving assignments that use ChatGPT or by comparing outputs in real time in class — instructors can build a strong foundation with and for students and give them opportunities to cultivate the skills to critically engage with the ever-growing mass of information available online.
The fear that students will use generative AI to cut corners in their schoolwork will always be there, and there will certainly be students who do just that. Rather than responding by punishing students or banning the tools outright, Vasudevan encourages teachers to ask, “what do you gain and lose? What do you gain and lose as an artist if you have Midjourney [an AI program] produce your images? What do you gain and lose as a high school English student if you're having the sections of your paper drafted by Bing's AI chat?”
Because of how quickly the landscape has changed and will continue to change, for Vasudevan the emergence of generative AI is “forcing the structure of schooling to really take a look at itself.”
Generative AI produces new material based on existing work, meaning there is a significant risk of reproducing harmful and biased rhetoric. That makes it critical for TC to “bring educators further into the ‘guts’ of AI by testing, questioning,” says Vasudevan. In this sense, experimentation is not an endorsement of the technologies but rather a pedagogical stance that “can serve as a foundation on which teachers can not merely adopt new tools but consider how to engage new technologies to expand their capacities as educators.”
When thinking about the role that institutions like TC play in this evolving field she says, “I hope that as schools of education, as educators, as researchers of education, we are able to support this moment of sea change. We can either go kicking and screaming, or we can engage this [technological advancement] critically.” By engaging critically with these game-changing technologies, teachers can prepare their students to ask deep questions about the texts, materials, and experiences they encounter that are driven by generative AI.
As far as the role of Teachers College in all this, Vasudevan knows TC is in a unique position to help: “We are in the ‘business’ of preparation — preparing to ask questions and engage in inquiry, preparing to teach. We're in the business of support. But we're also in the business of inviting imagination to stay active so that we can meet the needs that are there and help guide the path towards the [unaddressed] needs.”