
Applying Audio2Face to your 3D Characters

Audio2Face Metahuman

Using the Nvidia Omniverse Audio2Face kit, users can generate real-time, AI-powered facial animation from a single audio source (see here for a full Audio2Face breakdown). This animation can then be applied to any humanoid 3D character through Audio2Face's blend shape animation export. Below is a collection of tutorials demonstrating streamlined pipelines for applying Audio2Face animations to your own 3D characters in various 3D software packages.

Unreal Engine Metahuman Walkthrough

See below Yeongho Seol of Nvidia's detailed walkthrough of exporting an animation and assigning it to Epic Games' Metahumans in Unreal Engine 4.27.

In the video, Yeongho demonstrates a simple pipeline for getting the desired animation onto Metahumans, then adjusting it through the control rig, allowing for a fully adjustable, photoreal Metahuman facial animation pipeline.

Maya Character Transfer

See below a Maya character transfer pipeline walkthrough by TJ Galda, demonstrating how to connect characterized low-poly meshes to the "Digital Mark" blend shapes and then export the recorded animation back into Maya.

iClone Character Transfer

Reallusion has provided a brief walkthrough of importing and transferring iClone characters into Omniverse, along with a quick overview of the key steps for applying the desired mesh and the wrapping process. Hopefully Reallusion will release a more detailed walkthrough soon.

General Character Transfer

Lastly, here’s a more general, multi-pipeline character transfer tutorial by Edy Susanto Lim of Nvidia.
