The Effect of Animation Realism on Face Ownership and Engagement

Animation realism of virtual characters' faces has been considered highly important for conveying emotions and intent [Hyde et al. 2013]. However, to our knowledge, no perceptual experiments have assessed how participants engage with their own animated virtual face, or what factors influence this engagement, when they see a real-time mirrored representation of their facial expressions mapped onto the virtual face.


Introduction
Recent advances in facial tracking technology have made it possible to create realistic animations of virtual faces, even in real time. A number of systems have recently been developed for gaming and VR platforms, mainly aimed at tracking actors' expressions and using them for offline editing.
Animation realism of virtual characters' faces has been considered highly important for conveying emotions and intent [Hyde et al. 2013]. However, to our knowledge, no perceptual experiments have assessed how participants engage with their own animated virtual face, or what factors influence this engagement, when they see a real-time mirrored representation of their facial expressions mapped onto the virtual face.
Studies in immersive virtual environments have shown that it is possible to feel illusory ownership over a virtual body when the body is seen from a first-person perspective and participants receive synchronous tapping on the virtual body and on their hidden real body [Slater et al. 2009]. Similarly, when participants see their collocated virtual body animating in synchrony with their tracked real body, they can feel a sense of ownership and control over their virtual representation [Kokkinara and Slater 2014]. Here, we consider the possibility of perceiving ownership and control over a mirrored virtual face whose expressions are animated in synchrony with the tracked real face.

* e-mail: ekokkina@scss.tcd.ie
† e-mail: ramcdonn@scss.tcd.ie

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). © 2016 Copyright held by the owner/author(s).

Experiment
We conducted a preliminary study in which 15 male participants interacted with an abstract-looking AI agent speaking in a robot voice in a screen-based virtual environment. The AI agent steered the interaction so as to provoke head movements, speech, and facial expressions from the participant. Head movements and expressions were tracked in real time and mapped onto a realistic-looking virtual face model using the FaceShift tool (Figure 1).
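Trackers of this kind typically output per-frame blendshape weights that are applied to a rigged target face as a linear combination of expression offsets over a neutral pose. The following is a minimal sketch of that retargeting step, using toy data and hypothetical dimensions; it is not the actual FaceShift API:

```python
import numpy as np

# Hypothetical sizes: a face mesh with V vertices and K expression blendshapes.
V, K = 4, 3  # tiny toy values for illustration

rng = np.random.default_rng(0)
neutral = rng.standard_normal((V, 3))          # neutral-face vertex positions
deltas = rng.standard_normal((K, V, 3)) * 0.1  # per-blendshape vertex offsets

def retarget(weights, neutral, deltas):
    """Linear blendshape model: neutral pose plus weighted expression offsets."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)  # weights live in [0, 1]
    return neutral + np.tensordot(w, deltas, axes=1)

# Zero weights reproduce the neutral face; nonzero weights deform it per frame.
frame = retarget([0.0, 0.0, 0.0], neutral, deltas)
```

In a real-time setting this evaluation would run once per tracked frame, so the mirrored virtual face deforms in synchrony with the participant's expressions.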
A set of standardized questionnaires was used to assess perceived levels of appeal, ownership, and control over the virtual face, as well as the level of engagement with the AI agent during the experience. Questions were rated on a Likert scale from 1 (totally disagree) to 7 (totally agree). In a follow-up control condition, participants will receive asynchronous feedback, using pre-recorded facial animations. We expect participants to engage less and to feel lower levels of ownership with the asynchronously animated face than with the synchronous condition tested here.
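The planned synchronous-versus-asynchronous comparison amounts to a within-subject analysis of paired Likert scores. A minimal sketch of the aggregation step, using invented ratings (the asynchronous data below is hypothetical, since that condition has not yet been run):

```python
from statistics import median

# Hypothetical 1-7 Likert ratings for an ownership subscale, one per participant.
sync_ownership  = [6, 5, 7, 6, 5, 6, 4, 7, 6, 5, 6, 7, 5, 6, 6]
async_ownership = [3, 4, 2, 3, 5, 3, 2, 4, 3, 3, 4, 2, 3, 4, 3]

def paired_summary(a, b):
    """Median score per condition plus the median per-participant difference."""
    diffs = [x - y for x, y in zip(a, b)]
    return median(a), median(b), median(diffs)

sync_med, async_med, diff_med = paired_summary(sync_ownership, async_ownership)
```

Because Likert responses are ordinal, a non-parametric paired test such as the Wilcoxon signed-rank test would be the natural choice for the actual comparison.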

Conclusion
This study can provide valuable insights into engagement with self-avatars and facial animation. Observing believable representations of our own facial expressions on self-avatars could substantially affect an actor's performance and engagement with their role, and could also inform the design of future games incorporating real-time facial animation.