
Mike Seymour on Digital Humans

Mike Seymour. 
fxphd & fxguide.
Sydney University.

My interest in digital human faces arose through fxguide and fxphd, where for years we have reported on the work of many of the people who are part of this project. It is now also central to my own PhD research at Sydney University.

In the simplest terms, I am looking at digital faces and emotion. If we met and I offered to show you a picture of my wife, you would expect me to show you a picture of her face. Even a poor photo of my wife smiling might prompt someone to say “nice photo”. We are obsessed with faces, in our thinking and in our evolution. People will travel thousands of miles just to meet ‘face to face’; I know I do. Faces are simply very different from almost anything else we model in the computer.

I also believe that emotion changes everything, and nothing is more emotive than a face reacting to you, interacting with you, communicating with you.

While I am very interested in the film and narrative applications of digital faces, my primary interest is in interactive, real-time, realistic digital faces. This area of computing is called Affective Computing: the notion of a computer reading signals from the user and changing its user interface and actions accordingly. Here the word ‘affect’ is virtually interchangeable with ‘emotion’. Until now, most of this research has focused on eye tracking, on decoding emotions from speech and, with the arrival of wearable computers, on monitoring and interpreting biometric data. But for me the real interest is in what we then do with this information. If the user interface can decode the emotional state of the user (and this is by no means a ‘solved problem’), then equally we should explore the challenge of providing an interface that uses a face to communicate back to the user emotionally as well as logically. I call this Computer Human Interfaces with Affective Computing; my kids call it ‘Siri with a face’.
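To make that sense-interpret-respond loop concrete, here is a minimal, hypothetical sketch in Python. Every name in it is an assumption for illustration: `decode_affect` and `choose_facial_response` are stand-ins for research-grade subsystems (emotion decoding, expression selection), not real libraries, and the simple averaging of channels is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    """A coarse emotional read of the user (a simple valence/arousal model)."""
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  #  0.0 (calm)     .. 1.0 (excited)

def decode_affect(eye_gaze, speech, biometrics):
    """Hypothetical fusion of the channels named in the text (eye tracking,
    speech, biometric data). A real system would use trained models here;
    this just averages pre-normalised scores for illustration."""
    channels = [eye_gaze, speech, biometrics]
    return AffectEstimate(
        valence=sum(c["valence"] for c in channels) / len(channels),
        arousal=sum(c["arousal"] for c in channels) / len(channels),
    )

def choose_facial_response(affect):
    """The 'communicate back' half of the loop: map the user's state to an
    expression for the digital face to render."""
    if affect.valence < -0.3:
        return "concerned"   # mirror and acknowledge frustration
    if affect.arousal > 0.7:
        return "attentive"
    return "neutral-warm"

# One tick of the loop: sense -> interpret -> respond with a face.
estimate = decode_affect(
    {"valence": -0.5, "arousal": 0.6},  # eye tracking
    {"valence": -0.2, "arousal": 0.4},  # speech prosody
    {"valence": -0.4, "arousal": 0.8},  # biometrics
)
print(choose_facial_response(estimate))  # -> "concerned"
```

Even this toy version shows where the hard problems sit: not in the control flow, but in the quality of the decoding and in rendering a face whose response feels genuinely emotional rather than mechanical.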

We often think of emotions as a negative. Have you ever said to someone, “stop being so emotional”, “be rational” or “leave emotions out of this!”? In a Cartesian sense, rational efficiency sits at one end of a spectrum with emotion at the other. Interestingly, as has been pointed out by others such as Dr Rosalind Picard, who pioneered this area of computing, this is simply not true. People with brain injuries who have serious trouble with emotions are profoundly dysfunctional. In our VFX/animation world we bridge the duality of creativity (the emotional) and science (often depicted as the rational), but few understand just how important emotions are to communication and everyday decision making. Yet our computers remain emotionless and faceless.

While we live and learn with emotions, our computers are modeled on the metaphor of the desktop. I don’t know about you, but I have never put a picture of my desk on my fridge or in my wallet. Not all computer interaction would benefit from an emotional component, let alone faces, but it is equally hard to conceive that none would. In education, therapy and care, computer systems with an emotional response could have vast and long-term impacts. Did you know, for example, that if you were asked to test drive some new piece of software (with no emotional component, nothing fancy, nothing new) and were then asked to grade it, you would give it a better mark on your own laptop than if you borrowed mine? Even today, without any explicit affective computing, research has shown that we all anthropomorphize our laptops and are less likely to grade down software on our own computer than on someone else’s.

Of course, this is years off being fully realized, but that is what makes the research so interesting. I believe that in this interactive mode the Uncanny Valley is flooded by emotion, and our sense of the interaction is very different from our sense of just watching a pre-recorded clip. The interactive emotional feedback loop changes the Uncanny Valley and opens up a world of possibilities. Interestingly, this happens even when you are merely present in the room as the computer reacts in real time to someone else communicating with it.

If faces are so key to human communication, if non-verbal communication is central to how we learn, and if emotions allow us to function, why are they not used in computers? The Uncanny Valley and the general lack of computing power have been major obstacles, but this is changing, and we could be on the cusp of a vast expansion of facial technology in CHI. As a start, I believe emotion floods the valley and changes the equation, but there is much more: because we are hard-wired to process faces, we have a vast store of reactions to these new faces to explore. Specifically, moving forward, we need to learn what triggers our affinity and emotional connection to a character. What are we reading cognitively, and what is important but interpreted sub-consciously? What makes us connect with an avatar, and how far can we take that connection?

It is only the start of a brilliant journey, and one that may have very wide and long-term implications.
