Given just a short text description of a picture, the AI behind NVIDIA's GauGAN technology can draw a lifelike image on its own. The deep learning model behind GauGAN lets anyone turn their imagination into a convincing composition more easily than ever before. Just enter a phrase like "sunset on the beach" and the AI will generate the scene in real time. Add an extra word, as in "sunset at a rocky beach," or change "sunset" to "afternoon" or "rainy day," and the image updates instantly.
With the touch of a button, users can create a segmentation map showing where each object sits in the image. From there, they can move on to painting and editing the scene, using labels such as sky, trees, rocks, and river to turn their doodles into stunning compositions.
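A segmentation map of this kind is essentially a per-pixel grid of label IDs. The sketch below illustrates the idea in plain Python; the label names and IDs are invented for this example and do not reflect the demo's actual label set or internal format.

```python
# Hypothetical label IDs for a GauGAN-style semantic map (illustrative only).
LABELS = {"sky": 0, "trees": 1, "rocks": 2, "river": 3}

def make_segmentation_map(height, width):
    """Build a toy segmentation map: each cell holds the label ID of that pixel."""
    # Start with everything labeled as sky.
    seg = [[LABELS["sky"]] * width for _ in range(height)]
    # Lower half: rocks, with a river strip on the left quarter.
    for y in range(height // 2, height):
        for x in range(width):
            seg[y][x] = LABELS["river"] if x < width // 4 else LABELS["rocks"]
    # A tree line just above the rocks.
    for y in range(height // 3, height // 2):
        for x in range(width):
            seg[y][x] = LABELS["trees"]
    return seg

seg = make_segmentation_map(8, 8)
```

Editing the scene then amounts to repainting regions of this grid with a different label and asking the generator to re-render.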
GauGAN2’s new text-to-image feature can now be tried on NVIDIA AI Demos, which showcases the latest demos from NVIDIA Research. GauGAN2 lets users create and customize scenes faster and with greater control.
GauGAN2: Painting from text
GauGAN2 combines segmentation mapping, inpainting, and text-to-image generation in a single model, making it a powerful tool for creating realistic art from a fusion of text and drawings.
The demo is one of the first to combine multiple input modalities (text, semantic segmentation, sketch, and style) within a single GAN framework. This makes it faster and easier to turn an artist's vision into high-quality AI-generated images.
Instead of drawing out every element of a scene, users can enter a brief phrase to quickly generate its key features, such as a snow-capped mountain range. That starting point can then be customized with sketches, for example to make a particular mountain taller or to add a few trees in the foreground or clouds in the sky.
It doesn’t just create realistic images; artists can also use the demo to depict otherworldly landscapes.
Imagine, for example, recreating the landscape of Tatooine, the iconic two-sun planet from the Star Wars franchise. All the user needs to type is the phrase “sun on desert hill” to create a starting point, after which they can quickly sketch in a second sun.
It’s an iterative process, where each word the user enters into the text box adds more objects to the AI-generated image.
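That iterative flow can be sketched as a loop in which each newly mentioned keyword adds an object to the scene state. The keyword table and `Scene` class below are invented purely for illustration; the real model maps words directly to imagery, not to a symbolic object list.

```python
# Hypothetical vocabulary of objects the toy "generator" understands.
KNOWN_OBJECTS = {"sun", "desert", "hill", "clouds", "trees"}

class Scene:
    """Toy stand-in for the generated image: a running list of depicted objects."""
    def __init__(self):
        self.objects = []

    def update(self, prompt):
        """Add any newly mentioned known object, keeping earlier ones in place."""
        for word in prompt.lower().split():
            if word in KNOWN_OBJECTS and word not in self.objects:
                self.objects.append(word)

scene = Scene()
scene.update("sun on desert hill")               # starting point
scene.update("sun on desert hill with clouds")   # one new word adds one object
```

Each call to `update` refines the scene rather than regenerating it from scratch, which mirrors how adding a word to the text box extends the existing image.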
The AI model behind GauGAN2 was trained on 10 million high-quality landscape images using the NVIDIA Selene supercomputer, an NVIDIA DGX SuperPOD system that ranks among the top 10 most powerful supercomputers in the world. The researchers used a neural network to learn the association between words such as “winter,” “fog,” or “rainbow” and the corresponding imagery.
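Conceptually, learning the association between words and images means mapping both into a shared embedding space where matching pairs score highly. The minimal sketch below shows only the scoring step; the tiny hand-made vectors are placeholders for embeddings that trained text and image encoders would produce.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Placeholder embeddings; in practice these come from trained encoders.
text_embeddings = {"winter": [1.0, 0.0], "rainbow": [0.0, 1.0]}
snowy_image_embedding = [0.9, 0.1]

# The snowy image should score highest against the matching word.
best_word = max(
    text_embeddings,
    key=lambda w: cosine_similarity(text_embeddings[w], snowy_image_embedding),
)
```

Training pushes matching word-image pairs toward high similarity and mismatched pairs toward low similarity, so at generation time a phrase can steer the image content.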
Compared with state-of-the-art models dedicated to text-to-image generation, the neural network behind GauGAN2 produces more diverse and higher-quality images.
NVIDIA Research employs more than 200 scientists globally, focusing on areas including AI, computer vision, self-driving cars, robotics, and graphics.
I think we should add artists to the 5 professions that will be replaced by AI.