Virtual Reality (VR), renowned for its immersive and interactive capabilities, pairs naturally with Artificial Intelligence (AI). Together, the two create learning environments that are highly personalized, contextually authentic, and dynamically experiential, qualities essential for learning a new language and for developing soft skills such as communication, public speaking, and presentation.

“VR meets AI: Unleashing Potentials in Language Acquisition and Communication Skills Development,” a demo station hosted by the Digital Futures Institute (DFI) as part of the Artificial Intelligence Research in Applied Linguistics (AIRiAL) Conference, offered scholars and educators a glimpse into the exhilarating possibilities at the intersection of VR and AI in applied linguistics.

Participants try out Bodyswaps at the demo station

The station attracted a keen group of participants, including not only TC students and faculty but also educators and scholars from across the country, eager to immerse themselves in the virtual world enabled by VR and AI. Recognizing participants' varying comfort levels with VR, the station offered two modes of participation. Those comfortable wearing VR headsets could engage in a fully immersive experience. For those who preferred not to wear a headset, whether for personal reasons or due to medical conditions, a live-cast option let them observe what was happening in VR on a big screen.

The station exhibited the educational potential of applications like Bodyswaps and Mondly in transforming language learning and communication training. Bodyswaps, inspired by a body of research on embodiment in VR, creates a safe environment where learners can practice soft skills in high-pressure workplace scenarios simulated with AI-powered virtual humans, making mistakes without real-world repercussions. The platform facilitates realistic scenario-based learning across a range of modules, including communication, public speaking, leadership, and diversity and inclusion. Using the tracking capabilities of VR headsets to monitor not just verbal communication but also non-verbal cues such as gestures and body language, the application analyzes learners' behavior within the VR environment and provides personalized feedback, helping them understand how their physical presence and demeanor affect their ability to communicate effectively.

Participants try out Mondly at the demo station

Jane Zhuang, a conference participant, noted, “It’s kind of like ‘The Sims’ – you have to be engaged. I loved the idea of showing people different options in situations because we can see the picture: two people talking to each other, their facial expressions, actions, gestures. I liked that part because it really requires people to think.”

Minh Le, DFI’s Principal Instructional Designer and the demo session lead, highlighted a notable Bodyswaps feature: "I was impressed with the capability to literally swap my body with the person I was talking to in the virtual world, allowing me to see and feel how my conversation and interactions were perceived from the listener’s perspective. This unique feature helped me to understand the impact of my words and delivery, providing valuable insights to reflect on and improve my communication skills.”

Minh Le guides the conference's participants through the VR experience

Equally impressive was Mondly, a VR application that transcends geographical boundaries to offer language learning experiences in virtual worlds. The application offers a diverse range of lessons, covering 41 languages that can be learned from 33 native languages. Notably, by leveraging AI speech-recognition technology, Mondly gives learners real-time pronunciation feedback and the confidence to use new languages in real-world scenarios. Wandee Tawinboon, a conference participant, shared, “I think if the students get to try the VR with the immersive environment, it will help them learn the language more effectively. The motivation and the attitude towards language learning is also very important. I think VR has potential to boost and improve that aspect, as well.”

Participants eagerly try out Mondly and Bodyswaps at the demo station

While the unprecedented capabilities of applications like Bodyswaps and Mondly left many impressed, it is crucial to approach this emerging technology with a critical eye, ensuring we do not overlook flaws that could potentially impede learning. Participants highlighted areas needing improvement, particularly within the AI feedback system. 

Tawinboon pointed out, “I tried the language learning application [Mondly] and tested its pronunciation feedback system. When I intentionally mispronounced words, the system inaccurately marked them as correct. The feedback's accuracy needs improvement.” 

Zehao Li, another conference participant, stressed the importance of detailed guidance to correct and understand language nuances, “[Mondly] just gave me a sample of the pronunciation, but sometimes the full sentence is quite difficult for me, a beginner, to pronounce. Even though I pronounced wrongly, [Mondly] just showed me the result without any instruction.”

The positive perceptions of and interest in VR generated at the demo station echoed ongoing research on VR's application in applied linguistics. While VR's immersive environments have shown benefits for language learning, specifically verbal memory enhancement, contextual vocabulary learning, and cultural understanding, their effectiveness is still a subject of ongoing research, and many challenges remain to widespread adoption in educational settings. As we navigate the evolving landscape of education driven by emerging technologies, the enthusiasm around VR at the demo station highlighted a promising path forward, one with the potential to transform and elevate linguistics education.


The AIRiAL Conference is an annual event that brings together scholars, teachers, and tech enthusiasts. Each year, it focuses on exploring the synergies between Applied Linguistics and Artificial Intelligence. The first annual AIRiAL Conference was hosted at Teachers College on September 29th and 30th. The theme of this first conference was "The Future of Artificial Intelligence in Applied Linguistics," amplifying the importance of integrating advanced technology into language research and education. The purpose of the conference was to bring researchers from various areas of Applied Linguistics together in one location to discuss their research, share ideas, and meet one another both formally and informally.

This year's program consisted of paper presentations, poster presentations, demonstrations, and panel discussions on topics in applied linguistics, including language testing, policy in higher education, EdTech applications, text-to-image/video generation, and linguistic analyses. Presenters and attendees came from 7 countries and 14 US states. The demonstration stations included a discussion by Minne Atairu on crafting text prompts to create images and videos, and a Virtual Reality station run by Minh Le demonstrating the potential for language learning in an immersive world. The three plenary speakers, Alina von Davier from Duolingo, Kadriye Ercikan from Educational Testing Service (ETS), and Evelina Galaczi from Cambridge University Press and Assessment (CUPA), representing large language testing companies, offered diverse perspectives on the current benefits and challenges of implementing artificial intelligence in language testing and language teaching. The conference will continue next year, addressing innovative ideas and fundamental issues in Artificial Intelligence Research in Applied Linguistics.