This week Foundry released “UnrealReader”, its new Unreal Engine plugin for real-time visualisation of Unreal Engine 4.27 scenes in NukeX. The plugin connects the two applications via UnrealReader nodes within NukeX and a NukeServer running in Unreal Engine 4.27.
The results are in. Which is faster: Unreal Engine, Unity, or Omniverse?
At OSF we’re fanatical about understanding the virtual user experience. Understanding frame rates (FPS) when developing hardware to run interconnected virtual applications is important: it enables developers to build experiences that won’t ‘rip’ the fabric of the user experience, eat up bandwidth and memory, or kill FPS.
This week OpenAI, the AI research and deployment company, made its latest API platform available to the public with no waiting list. This gives AI developers worldwide the ability to research, develop, and publish using the AI text-generation platform. Binding AI tools together into an AI creation pipeline, the makers of FABLE (see below) created this short animated character.
Earlier this week AMD announced the new Milan-X CPU. The chip harnesses the Ryzen 3D V-Cache technology first demonstrated in August this year, providing users with the same features as the AMD EPYC 7003 series plus a massive boost in L3 cache capacity.
Earlier this week Unity Software Inc., the real-time 3D development platform, announced a $1.625 billion acquisition of Weta Digital, Peter Jackson’s New Zealand-based VFX and technology company. The deal promises to bring Weta Digital’s industry-famous tools to Unity creators globally. Those tools have been used on the likes of “Avatar,” “Game of Thrones,” and “The Lord of the Rings.”
Unreal Engine 4.27 includes a new LiveLink plugin, “LiveLinkXR”, which lets users bring in live data from trackers and HMDs. The plugin currently only supports SteamVR, but any VR device connected through SteamVR can be imported. It is extremely straightforward to use: simply add an XR LiveLink source, select the desired devices (HMD, controllers, and trackers), and watch for the green light confirming the engine is receiving live data.
Using the Nvidia Omniverse Audio2Face kit, users can generate real-time, AI-powered facial animation from a single audio source (see here for a full Audio2Face breakdown). The animation can then be applied to any humanoid 3D character via Audio2Face’s blendshape animation export. Below is a collection of tutorials demonstrating streamlined pipelines for applying Audio2Face animations to your own 3D characters in various 3D packages.
Nvidia’s Audio2Face Omniverse kit harnesses deep-learning AI to provide real-time facial animation from a single audio source. Audio2Face lets artists simplify 3D character facial animation and instantly generate facial expressions and reactions from voice-overs. It also lets users retarget captured animations to any 3D human or human-like face, whether realistic or stylized.
See below a table relating a 4K LED processor canvas to the physical space and measurements of an LED volume. The real-space size is determined by both the pixel pitch and the resolution of each panel.
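The pitch-and-resolution relationship above can be sketched with a little arithmetic: the physical span of an LED wall along one axis is simply pixel pitch times pixel count. The 2.6 mm pitch and 3840×2160 canvas below are illustrative values only, not figures taken from the table.

```python
def physical_size_m(pixel_pitch_mm: float, pixels: int) -> float:
    """Physical span of an LED wall along one axis.

    pixel_pitch_mm: centre-to-centre distance between LEDs, in millimetres.
    pixels: number of pixels along that axis.
    Returns the span in metres.
    """
    return pixel_pitch_mm * pixels / 1000.0  # mm -> m

# Example: a 4K canvas (3840 x 2160 px) built from 2.6 mm-pitch panels
width_m = physical_size_m(2.6, 3840)
height_m = physical_size_m(2.6, 2160)
print(f"{width_m:.3f} m x {height_m:.3f} m")  # roughly 9.984 m x 5.616 m
```

The same function explains why a finer pitch shrinks the volume: at 1.5 mm pitch the same 4K canvas covers only about 5.76 m of width, so hitting a target wall size means trading pitch against processor canvas resolution.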