Researchers Created An AI That Can Edit The Facial Expressions Of Actors In A Video

Researchers have developed an artificial intelligence system that can edit the facial expressions of actors so that they accurately match dubbed audio. The AI can also be used to correct gaze and head pose in video conferencing. According to the researchers, the primary aim of the system is to save filmmakers time and money.

The system is called Deep Video Portraits. It was developed by an international team led by a group from the Max Planck Institute for Informatics in Germany, together with researchers from the University of Bath, Technicolor, TU Munich and Stanford University.

The team first presented their research at the SIGGRAPH 2018 conference in Vancouver, Canada.

“This technique could also be used for post-production in the film industry where computer graphics editing of faces is already widely used in today’s feature films,” said study co-author Christian Richardt from the University of Bath in Britain.

The researchers trained the system's neural networks to edit video with such high precision that the resulting forgeries are difficult to spot. Deep Video Portraits can easily correct movements of the face interior.

It can also animate the whole face, including movements of the eyes, eyebrows, and head position, using computer-graphics-based face animation. Deep Video Portraits is also capable of synthesizing a plausible static video background even when the head moves.

Researcher Hyeongwoo Kim from the Max Planck Institute for Informatics said: "It works by using model-based 3D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and head position of the dubbing actor in a video."

“It then transposes these movements onto the ‘target’ actor in the film to accurately sync the lips and facial movements with the new audio,” Hyeongwoo added.
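As a rough illustration of the parameter-transfer idea Kim describes, the sketch below assumes a parametric face model whose per-frame coefficients separate identity, expression, head pose, eye gaze and illumination. Dubbing then amounts to copying the dubbing actor's performance parameters into the target actor's frames. This is a hypothetical sketch, not the authors' code; all names and structures are illustrative.

```python
# Hypothetical sketch of the parameter-transfer idea behind Deep Video Portraits.
# A parametric face model describes each video frame by separate identity,
# expression, head-pose, gaze and illumination parameters; dubbing keeps the
# target actor's identity and lighting while taking the source actor's performance.

from dataclasses import dataclass, replace
from typing import List


@dataclass
class FaceParams:
    identity: List[float]      # facial shape/reflectance (who the person is)
    expression: List[float]    # blendshape-style expression coefficients
    rotation: List[float]      # head rotation (e.g. Euler angles)
    translation: List[float]   # head position in camera space
    gaze: List[float]          # eye direction
    illumination: List[float]  # scene lighting coefficients


def transfer(source: FaceParams, target: FaceParams) -> FaceParams:
    """Keep the target's identity and lighting, take the source's performance."""
    return replace(
        target,
        expression=source.expression,
        rotation=source.rotation,
        translation=source.translation,
        gaze=source.gaze,
    )


def dub_sequence(source_frames, target_frames):
    """Per-frame transfer; a rendering network would then turn each
    transferred parameter set into a photo-realistic output frame."""
    return [transfer(s, t) for s, t in zip(source_frames, target_frames)]


if __name__ == "__main__":
    src = FaceParams([0.1], [0.8, 0.2], [0.0, 0.1, 0.0], [0, 0, 1.2], [0.05, -0.02], [1.0])
    tgt = FaceParams([0.9], [0.1, 0.0], [0.2, 0.0, 0.0], [0, 0, 1.0], [0.00, 0.00], [0.7])
    print(dub_sequence([src], [tgt])[0])
```

The sketch stops at the parameter level; in the published system, the transferred parameters are rendered into conditioning images and fed to a neural rendering network that produces the final photo-realistic frames.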

Deep Video Portraits has shown great results, but the approach is currently slow and time-consuming. Nevertheless, the researchers anticipate that it could make a real difference to the visual entertainment industry, for example in video and VR teleconferencing, where it could be used to correct gaze and head pose.

Because the software could easily be misused, the authors have no plans to make it publicly available. To make it easy to tell which videos are real and which have been dubbed, the researchers propose applying clear watermarking schemes to edited video.

“Despite extensive post-production manipulation, dubbing films into foreign languages always presents a mismatch between the actor on screen and the dubbed voice,” Professor Christian Theobalt from the Max Planck Institute for Informatics said.

“Our new Deep Video Portrait approach enables us to modify the appearance of a target actor by transferring head pose, facial expressions, and eye motion with a high level of realism,” Theobalt added.
