TPL 2

Grant Travis Ng

9/26/2020

Continuing the challenge from last week…

I focused on two things to explore:

  1. Combining body movement with facial movement.
  2. Finding out what is most inspiring to me to use the facial tracking for.

Combining the two concepts

This is a video of me trying to combine motion capture data with facial capture data. The body motion was recorded with the OptiTrack camera system. The recordings were performed by a comedian named Nick Mestad, who came into the RLab and did some character animations with various avatars. I decided to use one of his animations as a test to see if the character would move at the same time. What I found were a number of issues:

  • The NYU network works over VPN but cuts in and out, so the facial tracking was getting choppy.
  • The skeleton of the model gets mapped wrong. I need to find out what kind of skeleton Unreal and OptiTrack expect; I might need to use MotionBuilder.
  • Remote facial tracking still works from a separate network, and OSC still works as well.
  • Still to do: creating an OSC-controlled virtual camera and other objects (a rough receiver sketch follows this list).
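
To keep the OSC piece concrete, here is a minimal receiver sketch using the python-osc library. The port and address patterns are placeholders rather than the ones my tracking app actually sends, and the handler just prints values where the real setup would drive a character or virtual camera inside Unreal.

```python
# Minimal OSC receiver sketch using the python-osc library.
# The port and addresses below are placeholders for whatever the
# facial tracking app actually sends.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_face_value(address, *values):
    # e.g. address = "/face/jawOpen", values = (0.42,)
    # In the real setup this weight would drive a blend shape or a
    # virtual camera parameter inside Unreal instead of printing.
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/face/jawOpen", on_face_value)   # one placeholder address
dispatcher.set_default_handler(on_face_value)    # catch everything else

# Listen for UDP OSC packets from the remote tracking machine.
server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()
```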

Applying Behavioral Techniques

Mirroring: a type of treatment used for people with ASD. It is basically a way to teach physical movement in a playful way. If we were to teach someone how to pour a glass of water, the person mirroring would demonstrate the basic movements with their hands only and break the action down into individual steps. https://ufdc.ufl.edu/UFE0051353/00001

Echoics: a technique from speech pathology. This is a treatment used for individuals with a speech impairment from either brain damage or a disability. When someone needs to be taught a word they cannot pronounce, echoics does the same thing that mirroring does, but for words. For example, the word “cookie” can be broken up into pieces like “coo” and “key”. The phonetic process of pronouncing a word is broken up into parts. https://pro.psychcentral.com/child-therapist/2017/07/teaching-echoics-to-children-with-autism/
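
As a rough sketch of how that breakdown could drive prompts in an app, here is a toy example. The syllable splits are hand-specified placeholders; a real tool would need a speech pathologist to define them.

```python
# Toy sketch of sequencing echoic prompts from a hand-specified
# syllable breakdown. A real tool would get these splits from a
# speech pathologist, not a hard-coded dictionary.
import time

ECHOIC_TARGETS = {
    "cookie": ["coo", "key"],
    "water": ["wa", "ter"],
}

def run_echoic_drill(word, pause_seconds=2.0):
    parts = ECHOIC_TARGETS[word]
    # Prompt each part on its own, then the whole word,
    # leaving a pause for the learner to echo it back.
    for part in parts:
        print(f"Say: {part}")
        time.sleep(pause_seconds)
    print(f"Now the whole word: {word}")

run_echoic_drill("cookie")
```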

So after speaking with Dan, I wanted to try out what an echoic treatment would look like with my facial capture running remotely across two networks.

So, here is what came out:

The delay was hard to deal with. Seeing the limited range of the model made me think about redoing the mesh for the character. What I need to work on is a face that has a much more complex range of emotions. Creating blend shapes is the technique required for this step. I should also use a much more abstract-looking face and not have it be so humanistic; there is an uncanny valley effect I get from watching this video.
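
For reference, the blend shape idea itself is simple: each expression is stored as an offset from a neutral mesh, and the final face is the neutral mesh plus a weighted sum of those offsets. A quick sketch with made-up vertex data:

```python
# Blend shape (morph target) basics: each expression target is an
# offset from the neutral mesh, and the final face is the neutral
# mesh plus a weighted sum of those offsets. Vertex data is made up.
import numpy as np

neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])          # neutral face vertices

targets = {
    "smile":    neutral + np.array([[0.0, 0.1, 0.0]] * 3),
    "jaw_open": neutral + np.array([[0.0, -0.2, 0.0]] * 3),
}

def blend(weights):
    # weights: dict of blend shape name -> 0..1 value (e.g. from OSC)
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (targets[name] - neutral)
    return result

print(blend({"smile": 0.8, "jaw_open": 0.3}))
```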

I really do like this concept of using the application for a speech pathology type of approach. With this being remote and free, an open source application could help the ASD community greatly. Since I am still fine-tuning body movements, I could focus on getting the right kind of blend shapes calibrated for a more expressive and friendly face, one that people want to look at or at least communicate with. User testing will be something I need to get into next.
