The Creative Console by Monogram is a modular desktop device designed to improve productivity and creative workflows. Whether it's editing photos in Photoshop or video in Premiere Pro, the Creative Console lets users work across a wide variety of applications with ease.
And now Monogram has entered virtual production with its latest Unreal Engine 4.26 plugin. The plugin is designed to reduce barriers between the director and crew on set by connecting the Monogram Creator software to Unreal Engine. Creator now allows modules to be mapped to in-engine features and assets, whether that's a virtual camera or a dynamic light fixture.
Lighting can now be controlled via Monogram Creator, allowing users to adjust intensity, pattern, and color through orbiters and dials. The Creative Console also extends into the virtual camera department, with the ability to adjust camera settings through the simple spin of a dial: focus, zoom, and aperture can all be driven from Monogram. Once you're ready to record, Monogram Creator's Sequencer and Take Recorder integrations can also be mapped, putting all the controls for shooting on one designated deck.
Setting up Omniverse Unreal Engine Plug-in (Connector)
We take a first look at the NVIDIA Omniverse platform and how to connect it to Unreal Engine. To start, you'll need to install the Omniverse Launcher, then use it to install Omniverse Create and the Unreal Engine Connector. These are located in the "Exchange" tab, under Apps and Connectors, in the NVIDIA Omniverse Launcher application that you can download from here.
Once all the software is set up (we ran these tests on the OSF Godbox UE workstation with an NVIDIA GPU) and you have your accounts made, decide which machine is going to run which software. We suggest running each application on its own machine: the Create app on one and UE on the other.
On the Omniverse machine we're going to create a localhost server by going to the "Collaboration" tab and selecting the "Local Nucleus Collaboration Service" settings option. This opens a web interface that allows us to control our localhost server and build new connections.
In the Connections section of the web interface, create a new connection called "LocalHost". Then navigate to the same place on the Unreal Engine machine and select "Add"; this time, when it asks for a server name, enter the IP address of the Omniverse machine and sign in with the same details.
Next, launch Omniverse Create and your Unreal Engine project on their respective machines. If the Connector is correctly installed, there should be an Omniverse icon on the toolbar within UE. Under the icon is a drop-down with the option to "Add Server"; in this box, enter the IP address of the Omniverse machine. This should prompt an engine notification stating either connected or failed, and it also creates an "Omniverse" folder in the Content Browser with your Omniverse server's IP and file paths.
Once both servers are connected, right-click the assets or map you want to collaborate on and select "Export to Omniverse". Then select the Omniverse server's IP in the Omniverse folder and choose where you want the selected assets to appear on both machines.
Open the imported map within Omniverse Create. Once the map is loaded and compiled, navigate into the Layer panel and select the grey cloud icon next to the root. This makes all assets within the map live and editable on both machines. Activate live updates on the Unreal machine by ticking the "Live Edit" option in the Omniverse drop-down.
We're 98% of the way there, with a connection between both machines, live edits, and shared assets. Now we just have to run the UE map as a USD file. This is created automatically when exporting to Omniverse and lives in the same place we exported the map to previously.
With the release of Unreal Engine 4.26 comes a wide range of virtual production-specific features, from new nDisplay tools to improved cinematic renders. This article will walk you through all the latest features for VP.
To start, Unreal Engine 4.26 now has multi-GPU support with Nvidia NVLink. This allows for larger LED volumes without delays or display issues. Nvidia NVLink provides high-speed data transport between multiple GPUs and the ability to choose which viewports are rendered by which GPU. For example, the frustum can run off one GPU whilst the rest of the LED volume and scene is rendered by another. The GPU rendering setup is done through the nDisplay config file: simply add a gpu_node parameter to the desired viewport.
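As a sketch only (the node IDs, addresses, resolutions, and projection names below are illustrative, and the exact attribute set should be checked against the 4.26 nDisplay documentation), a viewport entry in the text-based nDisplay config might pin the inner frustum to a second GPU like this:

```
[cluster_node] id=node_1 addr=192.168.0.101 window=wnd_1 master=true
[window] id=wnd_1 viewports="vp_outer,vp_inner" fullscreen=true
# Outer LED wall rendered on the default GPU
[viewport] id=vp_outer x=0 y=0 width=3840 height=2160 projection=proj_outer
# Inner camera frustum assigned to GPU 1 via the gpu_node parameter
[viewport] id=vp_inner x=0 y=0 width=1920 height=1080 projection=proj_inner gpu_node=1
```

The key line is the final one: everything else stays on the default GPU while the frustum viewport carries `gpu_node=1`.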
The nDisplay Configuration Viewer (Beta) is also a new tool for LED setups. The Configuration Viewer allows operators to see a tree hierarchy of all render nodes, windows, viewports, inputs, cameras, and projection policies, with each component getting its own details panel. The projection policies can be viewed in either 2D or 3D, allowing for easy setup and troubleshooting.
New integration and support for VIOSO and DomeProjection in nDisplay (experimental) helps users set up projector warping and soft-edge blending on highly complex surfaces. Native VIOSO and DomeProjection format files can be loaded by using the native SDK integration projection policies in the nDisplay config file.
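As a hedged example (the policy type name, parameter names, and file path here are hypothetical; the exact syntax should be confirmed against the nDisplay projection policy reference), a native calibration file might be referenced from a projection policy entry along these lines:

```
# Illustrative VIOSO calibration loaded through a native SDK projection policy
[projection] id=proj_left type=vioso file="C:\calib\left_projector.vwf"
[viewport] id=vp_left x=0 y=0 width=1920 height=1080 projection=proj_left
```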
New and Improved Sequencer
In 4.26, Sequencer has received a major optimisation through a re-working of its internal architecture. Sequencer has been redesigned to handle large-scale cinematics and concurrent UI animations without degrading performance.
In the old Sequencer architecture, each track had its own runtime instance containing both the data and the logic required to evaluate and apply properties. This method doesn't scale efficiently and can cause serious performance issues as the number and complexity of tracks grow.
The new version is designed with optimisation in mind: the team at Epic wanted scalability, concurrency, and extensibility. Sequencer has been redesigned to evaluate data in a way that is cache-efficient and transparent, through an evaluation framework based on the entity-component-system (ECS) pattern. The data and logic for each track are decoupled: entities now represent the component parts that relate to the source track data, and Sequencer systems operate in batches on all data matching their query. This allows for cache efficiency, resulting in single virtual function calls and few cache misses regardless of how many transform properties are being animated.
Movie Render Queue
Epic has added new workflows and exporting capabilities to the Movie Render Queue (also known as High-Quality Media Export). The Movie Render Queue now supports several new workflows, including:
Support for Final Cut Pro XML – EDLs
Runtime support for implementation into your own application
The render passes supported by Movie Render Queue in 4.26 are:
Matte IDs (Limited) / User Defined Layers
Users can now have three additional export codecs:
Apple ProRes (high bit depth, 10-12 bit)
Avid DNxHR (8, 10 and 12-bit)
Multi-Channel EXR (16- and 32-bit)
To see the Full Unreal Engine 4.26 release notes click here.
The ugly cousin of virtual production is system tuning. It isn't virtual production as such, but it is required to make a virtual production system work, and fast, accurate on-set system tuning is a crucial part of the virtual production workflow.
This post will also help other crew members to understand what the VP crew are doing.
From our side as the on-set VP crew, our job is knowing how to integrate components swiftly, what settings to change, and how to optimise the system, often between shots. It's the VP team, working as a unit, who are responsible for maintaining the low-latency, stable performance of the VP system. If your VP crew are asking for access to the camera or lens gears, or if they need a few moments to adjust tracking, here's what they're up to.
All systems suffer from bottlenecks, sync issues, networking errors, and latency problems, but systems assembled from consumer-grade components will give consumer-grade results. Most consumer-grade systems don't support lens calibration, encoding, or genlock/timecode, which inevitably leads to constant on-set firefighting.
A professional system won't eliminate the firefighting and tuning process entirely, but its integrated software and hardware provide tools to optimise tuning easily, live on-set. A professional system achieves this stability through battle-tested on-set components, from the computer at its heart through to the absolute camera tracker. A professional system can cost in the region of £100,000+, but this would typically be a full-service system that comes with remote/on-site training, setup, and hardware/software support.
What is on-set system tuning?
Every system suffers from the same issues regardless of its price. Your main enemies will be frame synchronisation across the system (hardware and software), tracking, and chroma keying (real-time compositing). Each of these has to be checked before every shot: as the DOP moves across the set and the lighting, tracking perspective, and location change, the system has to be briefly tuned and re-checked. Left unchecked, a two-frame slip, jittery tracking, or a hard-edged chroma key can break any illusion of reality and ruin the shot.
Doing your system check in between shots is where professional systems come into their own. Professional systems are designed around on-set use, and their optimised tuning, teamed with a good operator, drastically cuts on-set tuning times, to the point where it's almost done on the fly and just requires five sharp pans from the DP. System stability checks on a consumer-level system take much longer, as such systems don't easily allow for fast fix iterations.
Knowing where an issue sits in your system is the key to fast on-set tuning. You need to tell the difference between a tracking issue and a synchronisation issue just by looking at a monitor, and to be as fast with your diagnosis as you are with your implementation. The less you need to worry about, the more you can concentrate on the tuning issues at hand. This is one reason why we developed the OSF virtual production computers, designed specifically for real-time production computing; at least we know it's never our computers that need the tune-up.
Virtual production spreads across many kinds of hardware and software: whether you're working with nDisplay powering graphics on LED screens or virtual scouting in Unreal Engine, VP covers it all.
But at the heart of it all are the computers powering and enabling the whole process. The engineering of that core component can make or break a production, so we've made a guide to which GPUs should be powering your shoots and why.
Professional-Grade GPU for LED Display Synchronisation
Every virtual production developer dreams of having the Nvidia Quadro RTX 8000, but it doesn't come cheap: this graphics card will set you back just over £6,000. The card is designed with professional use cases in mind, such as running multiple display technologies on-set, and it's built to a server grade with reliability and performance at its core. So if you can afford one, stop reading and go buy one. Otherwise, here's a rundown.
The Quadro RTX 8000
The Nvidia Quadro RTX 8000 is a top-of-the-range, server-grade graphics card that is perfect for high-end virtual production, especially for syncing display systems where frame-accurate distribution is required, such as in-camera real-time VFX.
The card comes with 48GB of ultra-fast GDDR6 video memory for high-performance processing, and 72 RT cores (yes, 72) which are harnessed by Unreal Engine, allowing freedom in lighting and ray tracing in record times with its 10 Giga Rays/sec performance.
The Quadro RTX 8000 also comes with 576 Tensor cores, which are used by common applications like DaVinci Resolve and the Adobe Creative Suite, extending your on-set abilities beyond Unreal. This card will rip through on-set rendering in Maya and C4D, and post jobs too.
On-set, it's essential to have a server-grade GPU with a Quadro Sync II connector for frame-accurate display rendering, especially with LED. This is vital for the ability to genlock a render machine or node machines with all the other equipment on-set, including the cameras, recorders, and even audio equipment.
For example, when working with LED we have to render each panel on a separate render node (a PC specifically designed for the task). In order to avoid glitches and delays, the GPUs have to be genlocked so they render at exactly the same time.
The Nvidia Quadro RTX 8000 is designed to server-grade standards
The architecture of the card itself is built with reliability and durability in mind, from the way the card's ventilation passes air back through the GPU shroud and out of the case, down to the card's power distribution and consumption; it is all rigged for hard-core use.
Designed for constant professional use on-set, or in most cases in server rooms, its advanced cooling allows for days if not weeks of constant use without any loss of performance. This is why we use Quadro-grade GPUs in our on-set GOD BOX™ machines, which are also designed specifically to run all day under load on-set.
The Nvidia Quadro RTX 8000 can be combined with multiple GPUs over NVLink. This currently isn't relevant to virtual production in Unreal Engine, as UE doesn't support multi-GPU processing, although it's a feature being worked on and is firmly on Epic's to-do list. Multi-GPU can be used in many other rendering applications though, so it's cool to have as a feature.
The RTX 2080 Ti is a Developer/Indie-Level GPU for VP
The RTX 2080 Ti is a middle-of-the-range consumer GPU option from Nvidia. This is a common choice for most people looking to get into virtual production development: it's got a decent 11GB of GDDR6 video memory, which will allow semi-complex scenes to be rendered in real time without too much of a hit to frame rate.
The 2080 Ti is real-time ray tracing enabled with 10 Giga Rays/s
This GPU is a good virtual production development tool, but that's about it as far as professional on-set use goes: open up a heavy level and your FPS will seriously drop, and open two viewports and you'll be lucky to hit 60 FPS in each. This is fine for testing and R&D, but it wouldn't be taken on-set, not by us anyway, as it doesn't have a sync connector and can't be genlocked with the rest of the on-set GPUs. On smaller indie productions with just one render engine and one display, this wouldn't be so much of an issue. However, the 2080 Ti isn't designed for the intense use cases the Quadro range is.
The RTX 2080 Ti has a less engineered cooling system, where air is pushed out into the computer case itself rather than out of the box, which increases the need for internal case fans.
Hell, if you stacked a number of them in one box, as many offline renderers do for, say, Octane, you'd need to build what is essentially a jet stream of air to expel the heat from your case; otherwise you'll get a performance drop as the temperature of your CPU and GPUs rises beyond your BIOS limits, i.e. even more FPS loss and lag.
This GPU comes in at roughly £1,199, a fair price for the performance that comes with it, and as we say, it's great for independent setups and development. It will do 90% of what a developer needs without any issues, and for a basic UE artist or developer it will be fine in any studio (or bedroom studio). But if you want to bring your machine on-set, go Quadro.
Here's a rundown of the Quadro range, with prices and basic specifications: