AI software can dream up an entire digital world from a simple sketch

Creating a realistic-looking virtual environment takes time and skill. The details have to be hand-crafted, then rendered by a graphics chip as 3D shapes with appropriate lighting and textures. The blockbuster video game Red Dead Redemption 2, for example, took a team of around 1,000 developers more than eight years to create, with some occasionally working 100-hour weeks. That kind of workload might not be required for much longer: a powerful new AI algorithm can dream up the photorealistic details of a scene on the fly.

Developed by chipmaker Nvidia, the software won’t just make life easier for software developers. It could also be used to auto-generate virtual environments for virtual reality or for teaching self-driving cars and robots about the world.  

“We can create new sketches that have never been seen before and render those,” says Bryan Catanzaro, vice president of applied deep learning at Nvidia. “We’re actually teaching the model how to draw based on real video.”

Nvidia’s researchers used a standard machine learning approach to identify different objects in a video scene: cars, trees, buildings, and so forth. The team then used what’s known as a generative adversarial network, or GAN, to train a computer to fill the outlines of those objects with realistic 3D imagery.
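The adversarial idea behind a GAN can be sketched in miniature. The toy example below is my own illustration, not Nvidia's video model: a one-dimensional linear "generator" is trained against a logistic-regression "discriminator," and each player nudges its parameters against the other until the generator's samples are centered on the real data. All names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip logits to avoid overflow in exp.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

REAL_MU = 3.0          # "real" data: samples from N(3, 1)
a, c = 0.1, 0.0        # discriminator D(x) = sigmoid(a*x + c)
w, b = 1.0, 0.0        # generator G(z) = w*z + b, noise z ~ N(0, 1)
lr_d, lr_g = 0.1, 0.02 # discriminator learns faster, a common stabilizer

for step in range(2000):
    x_real = rng.normal(REAL_MU, 1.0, 64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = w * z + b

    # Discriminator ascends log D(real) + log(1 - D(fake)):
    # learn to tell real samples from generated ones.
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a += lr_d * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends log D(fake) (the non-saturating GAN loss):
    # adjust (w, b) so its samples look real to the discriminator.
    z = rng.normal(0.0, 1.0, 64)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    w += lr_g * np.mean((1 - d_fake) * a * z)
    b += lr_g * np.mean((1 - d_fake) * a)

# A linear discriminator can only penalize a mean mismatch, so the
# generator's offset b should drift toward the real mean.
print(f"generator offset b = {b:.2f} (real mean = {REAL_MU})")
```

Nvidia's system applies the same adversarial pressure at vastly larger scale, with deep convolutional networks standing in for these two-parameter players and video frames standing in for the one-dimensional samples.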

The system can then be fed the outline of a scene, showing where different objects are, and it will fill in stunning, slightly shimmering detail. The effect is impressive, even if some of these objects occasionally look a bit warped or twisted. 


“Classical computer graphics render by building up the way light interacts with objects,” says Catanzaro. “We wondered what we could do with artificial intelligence to change the rendering process.”

Catanzaro says the approach could lower the barrier for game design. Besides rendering whole scenes, it could be used to add a real person to a video game after being fed a few minutes of video footage of that person in real life. He suggests the approach could also help render realistic settings for virtual reality, or provide synthetic training data for autonomous vehicles or robots. “You can’t realistically get real training data for every situation that might pop up,” he says. The work was announced today at NeurIPS, a major AI conference in Montreal, Canada.

The Nvidia algorithm is just the latest in a dizzying procession of advances involving GANs. Invented by a Google researcher only a few years ago, GANs have emerged as a remarkable tool for synthesizing realistic, and often eerily strange, imagery and audio. This trend promises to revolutionize computer graphics and special effects, and to help artists and musicians imagine or develop new ideas. But it could also undermine public trust in video and audio evidence (see “Fake America great again”).

Catanzaro admits the technology could be misused. “This is a technology that could be used for a lot of things,” he says.

