Research

This section describes my academic research on virtual agents. My academic publications are available here. My industry work on VR, AR, and MR applications for training, simulation, logistics, support, Industry 4.0, and manufacturing is available here.

I work on creating engaging interactions between humans and virtual agents that build rapport over time. Embodied Conversational Agents (ECAs) are a form of human-computer interaction represented by intelligent agents that live in a virtual environment. Graphically, they can take any form, though they are often human-like, and they unite gesture, facial expression, dialog models, and speech to enable face-to-face communication between users and computer-generated characters. My research aims to improve the naturalness and effectiveness of human-agent communication by creating rapport (the feeling of mutual understanding, or the sensation of being “in sync”) between them.

Research Projects

Boston (History Education) 2016 – 2017

The Boston Massacre History Experience is a multimodal interactive virtual-reality application aimed at eighth-grade students, directed by my colleague Laura Rodriguez. I co-directed this project in its early stages and later transitioned to a technical advisory role. In the application, students have conversations with seven embodied conversational agents, representing characters such as John Adams, a tea-shop owner, and a redcoat. The experience ends with a conversation with Abigail Adams, in which students explain what they learned, followed by a narration by John Adams about the events that followed. The application is technically novel because it provides a form of user initiative in conversation and features an unprecedented number of virtual agents, some 486 in all.

Gods of the Neon City (Hidden Agent Motives) 2016 – 2017

Gods of the Neon City is a speech-enabled virtual-reality game in which you follow a private investigator with apparent hidden motives. Throughout the adventure you visit futuristic cyberpunk environments that reveal a years-long conspiracy behind the flooding of the character’s home city. During the experience you unravel the agent’s goals and motives through your dialog choices, potentially arriving at goals that conflict with the agent’s. The original intention of the project was to measure trust-building activities between humans and virtual agents. The game was directed by my colleague Alex Rayon and me, and was developed with the help of over 20 student volunteers.

Merlin (Speech Recognition in Virtual Reality) 2015 – 2016

As you open your eyes, a young man stands in front of you. He waves his arms, surprised and perhaps scared. He approaches you cautiously, wanting to know your name and how you got there. He awaits an answer, and you reply by speaking into a microphone. Thus begins your new virtual-reality adventure. This storytelling adventure showcases Inmerssion’s technology, taking you and Merlin through peaceful villages, haunted forests, and ancient ruins. Explore and discover new places, befriend Merlin, learn to cast spells, and make decisions that will change the course of your story and your relationship with Merlin.

Harry Potter (IBM Watson Smart Agent Integration) 2014 – 2015

Have you ever wondered what it would be like to talk to your favorite character? Inmerssion developed a virtual Harry Potter that you can talk to! People can ask him questions about his life and adventures through speech recognition. Our goal was to create a natural interaction between a virtual agent and a human. I led the development effort to recreate Harry Potter’s knowledge base, feeding IBM’s Watson all of the Harry Potter books to process, and to build a natural way for people to interact with it. Beyond Harry, this is a great tool to promote books, video games, movies, or other performances, and it has further uses in training and simulation.
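
As a rough illustration of the ingest-then-answer idea, here is a toy Python stand-in that splits book text into passages and answers a question by simple word overlap. This is only a sketch: Watson’s actual ingestion and question-answering services are far more sophisticated, and none of the function names or parameters below come from the project itself.

```python
# Toy stand-in for the Watson-backed pipeline: books are split into fixed-size
# passages, and a question is answered by returning the best-matching passage.
def build_index(books: dict[str, str], size: int = 50) -> list[str]:
    """Split each book into fixed-size word windows ('passages')."""
    passages = []
    for text in books.values():
        words = text.split()
        passages += [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
    return passages

def answer(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question (naive retrieval)."""
    q = set(question.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

# Hypothetical sample text; the real system ingested the full novels.
index = build_index({"book1": "Harry lived in a cupboard under the stairs ..."})
print(answer("Where did Harry live?", index))
```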

Virtual Rapport (Adventure) Project 2012 – 2015

The agent, Adriana, leads the user through a series of activities and conversations while playing a game. The game simulates a survival scenario in which you have to collaborate, cooperate, and build a relationship with the ECA to survive a week on a desert island. The simulation is designed to maximize rapport-building opportunities and to take advantage of nonverbal behaviors in a more immersive environment, where both the user and the agent can interact with the same objects in virtual space. The storyline allows the necessary flexibility and decision making without creating a completely open environment, in which tasks would otherwise be difficult to set up and evaluate. In addition, Adriana is capable of soliciting personal information and establishing small talk in a carefully controlled environment: she solicits information verbally by asking the user questions drawn from adaptive questionnaires.
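
A minimal Python sketch of how such an adaptive questioning loop might work, assuming a hypothetical question bank ordered by intimacy level and a naive answer-length engagement heuristic (neither is taken from the actual project):

```python
import random

# Hypothetical question bank grouped by intimacy level; the project's actual
# questionnaires and adaptation rules are not reproduced here.
QUESTIONS = {
    1: ["Where are you from?", "What do you do for fun?"],
    2: ["What do you like most about your hometown?", "Who taught you that hobby?"],
    3: ["What is a memory from home you treasure?", "Why does that hobby matter to you?"],
}

def next_question(level: int, last_answer: str) -> tuple[int, str]:
    """Adapt the intimacy level of the next question to the user's last answer:
    longer answers are read (naively) as engagement and escalate the level."""
    if len(last_answer.split()) > 8 and level < 3:
        level += 1          # user seems engaged: ask something more personal
    elif len(last_answer.split()) < 3 and level > 1:
        level -= 1          # terse answer: back off to safer small talk
    return level, random.choice(QUESTIONS[level])

level = 1
for answer in ["El Paso.", "I really enjoy hiking in the mountains near my house."]:
    level, question = next_question(level, answer)
    print(f"Adriana asks (level {level}): {question}")
```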

Familiarity (Vampire) Project 2012 – 2014

The research we report here forms part of a longer-term project to provide embodied conversational agents (ECAs) with behaviors that enable them to build and maintain rapport with their human partners. We focus on paralinguistic behaviors, especially nonverbal behaviors, and their role in communicating rapport. Accordingly, this study piloted an investigation of how to signal increased familiarity over repeated interactions as a component of rapport. We studied the effect of differences in the amplitude of an ECA’s nonverbal behaviors while it interacted with a human across two conversational sessions. Our main question was whether subjects would perceive the agent in the increased-familiarity condition as having higher rapport in the second session.
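
As a minimal sketch of the amplitude manipulation, assuming gestures are represented as keyframed joint angles around a neutral pose (a simplification; the study’s actual animation pipeline and scaling factors are not shown here):

```python
# Hypothetical amplitude scaling: the same nod keyframes are played with a
# larger displacement in the increased-familiarity condition.
def scale_gesture(keyframes: list[float], familiarity: float) -> list[float]:
    """Scale gesture displacement around the neutral pose (0.0) by a familiarity factor."""
    return [angle * familiarity for angle in keyframes]

nod = [0.0, 8.0, -4.0, 0.0]                      # head-pitch keyframes in degrees
session1 = scale_gesture(nod, familiarity=1.0)   # baseline amplitude
session2 = scale_gesture(nod, familiarity=1.4)   # increased-familiarity condition
print(session1, session2)
```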

Multiparty Agents (Enter the Room) Project, 2012

This is an exploratory study of what happens when a person enters a room where people are conversing, based on an analysis of 61 episodes drawn from the UTEP-ICT cross-cultural multiparty multimodal dialog corpus. We examine the reliability of coding of gaze and stance, and we develop a model for room-entry that accounts for observed differences in the behaviors of conversants, expressed as a state-transition model. Our model includes factors such as conversational task, not considered in existing social-force models, which appear to affect conversants’ behaviors. We then applied this model to a set of four embodied conversational agents that reacted accordingly when a person entered the room in which they were conversing. This is where we first tested our virtual environment and virtual characters.
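
The state-transition mechanism can be illustrated with a minimal Python sketch. The states, events, and reactions below are hypothetical stand-ins rather than the model from the study; they only show how observed entry events could drive the agents’ nonverbal reactions.

```python
from enum import Enum, auto

class EntryState(Enum):
    """Illustrative states for a newcomer entering an ongoing conversation."""
    OUTSIDE = auto()
    ENTERING = auto()
    ACKNOWLEDGED = auto()
    JOINED = auto()

# Hypothetical transition table: (current state, observed event) -> next state.
TRANSITIONS = {
    (EntryState.OUTSIDE, "door_opens"): EntryState.ENTERING,
    (EntryState.ENTERING, "conversant_gazes_at_newcomer"): EntryState.ACKNOWLEDGED,
    (EntryState.ACKNOWLEDGED, "newcomer_takes_position"): EntryState.JOINED,
}

def react(agent_name: str, state: EntryState) -> str:
    """Map each state to an illustrative nonverbal reaction for an agent."""
    reactions = {
        EntryState.ENTERING: "shift gaze toward the door",
        EntryState.ACKNOWLEDGED: "open stance to include the newcomer",
        EntryState.JOINED: "return gaze to the current speaker",
    }
    return f"{agent_name}: {reactions.get(state, 'continue conversing')}"

state = EntryState.OUTSIDE
for event in ["door_opens", "conversant_gazes_at_newcomer", "newcomer_takes_position"]:
    state = TRANSITIONS.get((state, event), state)
    print(react("Agent1", state))
```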

Grounding and Turn-Taking in Multimodal Multiparty Conversation, 2012

This study explores the empirical basis for multimodal conversation control acts. Applying conversation analysis as an exploratory approach, we attempt to illuminate the control functions of paralinguistic behaviors in managing multiparty conversation. We contrast our multiparty analysis with an earlier dyadic analysis and, to the extent permitted by our small samples of the corpus, contrast (a) conversations where the conversants did or did not have an artifact, and (b) conversations in English among Americans with conversations in Spanish among Mexicans. Our analysis suggests that speakers tend not to use gaze shifts to cue nodding for grounding and that the presence of an artifact reduced listeners’ gaze at the speaker. These observations remained relatively consistent across the two languages.

The Lab

For most of these projects I worked with the Interactive Systems Group (ISG) at the University of Texas at El Paso (UTEP). The Immersion Lab features a projection room: subjects stand in the middle of the room and interact with virtual agents while strategically placed cameras and Kinect sensors record them. The projection is displayed on a 15-by-10-foot wall, and the surrounding physical space is decorated to match the virtual environment of the current experiment. Lately the team’s efforts have been redirected toward virtual-reality applications.
