GauGAN2 allows users to type minimal phrases like “sunset at a beach,” and the deep learning model will respond in real time, generating a custom scene relevant to the phrase. Because the AI is built on a generative adversarial network model, the phrase can then be edited to produce real-time scene changes; for example, adding “afternoon” or “rainy day” will adjust the scene accordingly.
The GauGAN2 demo is one of the first deep learning models to combine multiple modalities: it can generate segmentation maps, perform inpainting, and produce text-to-image generation, all in real time and fully adjustable.
The ability to handle both text-to-image generation and segmentation maps in real time means users can start with text generation to set the general environment and then personally add desired features through drawings and sketches, whether those features are trees, rocks, or sky.
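To make the idea of multimodal conditioning concrete, here is a minimal sketch of how a user-drawn segmentation map and a text phrase could both be folded into one conditioning input for a generator. This is purely illustrative and not NVIDIA's code: the shapes, the toy hash-based text "encoder," and the single-layer "generator" are all assumptions standing in for learned networks.

```python
# Illustrative sketch only (not NVIDIA's implementation): condition a toy
# "generator" on two modalities at once, a per-pixel segmentation map
# (the user's sketch) and a text-phrase embedding.
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8        # tiny "image" for demonstration
N_CLASSES = 3      # hypothetical label set: e.g. sky, tree, rock
EMBED_DIM = 4      # toy text-embedding size

def embed_text(phrase: str) -> np.ndarray:
    """Stand-in for a learned text encoder: bucket words into a vector
    deterministically, then L2-normalize."""
    vec = np.zeros(EMBED_DIM)
    for word in phrase.lower().split():
        vec[sum(ord(c) for c in word) % EMBED_DIM] += 1.0
    return vec / max(np.linalg.norm(vec), 1e-8)

def generate(seg_map: np.ndarray, phrase: str, weights: np.ndarray) -> np.ndarray:
    """Toy 'generator': one-hot the segmentation labels per pixel,
    broadcast the text embedding over the image, concatenate the two
    conditioning signals, and project to pseudo-RGB."""
    one_hot = np.eye(N_CLASSES)[seg_map]             # (H, W, N_CLASSES)
    text = np.broadcast_to(embed_text(phrase), (H, W, EMBED_DIM))
    cond = np.concatenate([one_hot, text], axis=-1)  # both modalities joined
    return np.tanh(cond @ weights)                   # (H, W, 3)

weights = rng.normal(size=(N_CLASSES + EMBED_DIM, 3))
seg = np.zeros((H, W), dtype=int)  # user sketch: sky everywhere...
seg[5:, :] = 1                     # ...with trees in the lower rows

day = generate(seg, "sunset at a beach", weights)
rain = generate(seg, "rainy day at a beach", weights)
print(day.shape)  # (8, 8, 3)
```

Editing only the phrase while keeping the same sketch changes the output everywhere the text channels feed in, which mirrors how re-typing “rainy day” adjusts the generated scene without redrawing anything.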
The model behind GauGAN2 was trained on 10 million high-quality landscape images using the NVIDIA Selene supercomputer, an NVIDIA DGX SuperPOD system that’s among the world’s 10 most powerful supercomputers. The researchers used a neural network that learns the connection between words like “winter,” “foggy” or “rainbow” and the visuals they correspond to.
Compared to state-of-the-art models built specifically for text-to-image or segmentation-map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher quality of images.
Data Sculpted Art: