This article is the second in a series about the risks and rewards of generative AI, incorporating perspectives from faculty affiliated with the Digital Futures Institute (DFI) and the Edmund W. Gordon Institute for Urban and Minority Education. We’re continuing the conversation with Ioana Literat, Associate Professor of Communication, Media and Learning Technologies Design.

Ioana Literat (Photo: TC Archives)

TC’s Ioana Literat has complicated feelings about artificial intelligence (AI). More than anything, she is an optimist who sees the positives that generative AI can bring to society, even if only by pushing educators to think deeply about their goals and the skills they want to cultivate in their students. However, as a leading scholar exploring the growth and evolution of political expression on social media platforms like TikTok, Literat has mounting concerns as the 2024 U.S. presidential election approaches, particularly about the way AI-generated visual misinformation can destabilize societies and sway political choices.

[For more of Literat’s perspective on generative AI and media literacy, consider her working paper written for DFI.] 

Here’s what Literat had to say…

On the Usefulness of Generative AI for Educators

Since the emergence of ChatGPT in late 2022, educators have been debating what role generative AI should or shouldn’t play in the classroom. Literat feels that openness is the best path forward, not only because AI is here to stay, but also because it can push instructors to reevaluate their approach, as she did with her own courses.

  • “Teachers will have to [embrace AI] because these are necessary skills in today's world, in tomorrow's world. The attitude should not be one of shunning it.”
  • For Literat, this kind of reflection is critical to ensure her own curriculum aligns with the challenges and opportunities presented by generative AI. “It was a welcome impetus…It made me think about what my goals are for each part of the course, about the kind of skills I want to cultivate in the students, and the kind of assignments that work best in order to reach those goals.”

On the Unique Risks of Visual Misinformation

Visual misinformation, which consists mainly of AI-generated images and deepfakes (videos that use deep learning tools to manipulate a person’s likeness so they appear or sound like someone else), is particularly tricky to combat because of the widespread belief that visual media is more trustworthy than written content.

  • “I'm especially concerned about visual misinformation because of Americans' relationship with news at this moment. We know that trust in mainstream news is really low. Trust in journalists is low. We also know, and I've found this in my research on TikTok as well, that people on both sides of the political divide find that news, and especially visual footage, shared on social media is more authentic.
    “They frame TikTok, and social media more broadly, as a key site to get your news. So you could imagine that if deepfake or AI-generated political content that is meant to scandalize goes viral on social media within this context, that could have serious consequences.”
  • “I am also concerned about [how misinformation will influence the 2024 election] because the use of evidence…and critical thinking has been politicized. It’s easy to find evidence that [something] is AI-generated, but it’s hard to use that evidence to convince others in a highly politicized and polarized context…The hyper-politicization and polarization happening around the world make it difficult to effectively debunk misinformation, because of the tendency to dismiss the debunking as a political attack.”

[Learn from Teachers College students and faculty how “artificial intelligence” tools have intersected with their teaching and research, in the classroom and beyond, in this video series from the Digital Futures Institute.]

On Social Media Regulations and Institutional Responsibilities

Given the feverish coverage of the risks AI poses to creative industries and the looming deluge of misinformation, it’s critical to establish robust yet flexible policies around the technology.

  • “I worry that [generative AI] is going to be a technological advancement that will be weaponized along political lines, so social media platforms and technology companies will need to figure out their policies around this, and they’re going to have to be fluid, dynamic, evolving policies. They will have to think about how to signal AI-generated content, and how to design content moderation and reporting procedures around AI-generated content.”
  • “When I talk about institutional responsibility, it’s not just the tech companies, but also public institutions [and governments]. Europe, for instance, has made impressive headway when it comes to AI policy, whereas the U.S. is kind of taking a wait-and-see approach. I’m a little concerned that the instinct will be this moral panic approach: to shut down, to ban, to silence rather than engage with [AI tools].”

Even Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has urged U.S. lawmakers to regulate AI technologies. But that’s far easier said than done, especially given the lack of regulation of social media in general.

  • “I'm expecting to see a lot more discourse and action when it comes to official policies around AI. We know that protections, when it comes to social media data and social media regulation in general, are very unimpressive here compared to other places like the E.U. I’m hoping this will be the beginning of a larger conversation.”