A recent press release reports, “NVIDIA today introduced groundbreaking AI research that enables developers for the first time to render entirely synthetic, interactive 3D environments using a model trained on real-world videos. Company researchers used a neural network to render synthetic 3D environments in real time. Currently, every object in a virtual world needs to be modeled individually, which is expensive and time-consuming. In contrast, the NVIDIA research uses models automatically learned from real video to render objects such as buildings, trees and vehicles. The technology offers the potential to quickly create virtual worlds for gaming, automotive, architecture, robotics or virtual reality. The network can, for example, generate interactive scenes based on real-world locations or show consumers dancing like their favorite pop stars.”
The release continues, “The result of the research is a simple driving game that allows participants to navigate an urban scene. All content is rendered interactively using a neural network that transforms sketches of a 3D world produced by a traditional graphics engine into video. This interactive demo will be shown at the NeurIPS 2018 conference in Montreal. The generative neural network learned to model the appearance of the world, including lighting, materials and their dynamics. Since the scene is fully synthetically generated, it can be easily edited to remove, modify or add objects. ‘The capability to model and recreate the dynamics of our visual world is essential to building intelligent agents,’ the researchers wrote in their paper. ‘Apart from purely scientific interests, learning to synthesize continuous visual experiences has a wide range of applications in computer vision, robotics, and computer graphics,’ the researchers explained.”
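To make the quoted pipeline concrete, the sketch below shows the general shape of this kind of system: a conditional generator that takes a semantic label map (the “sketch” of the 3D world a traditional graphics engine produces for each frame) together with the previously synthesized frame, and outputs the next RGB video frame. This is a minimal illustrative sketch of the video-to-video idea, not NVIDIA’s actual model; the class name `SketchToFrameGenerator`, the layer sizes, and the 20-class label map are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class SketchToFrameGenerator(nn.Module):
    """Toy conditional generator (hypothetical, for illustration only).

    Maps a one-hot semantic label map -- the "sketch" produced by a
    traditional graphics engine -- plus the previously synthesized RGB
    frame to the next RGB frame of the video.
    """

    def __init__(self, num_classes: int = 20, base_channels: int = 64):
        super().__init__()
        # Input channels: one-hot label map + previous RGB frame (3 channels).
        in_channels = num_classes + 3
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, label_map: torch.Tensor, prev_frame: torch.Tensor) -> torch.Tensor:
        # Concatenate the engine's semantic layout with the last generated frame.
        x = torch.cat([label_map, prev_frame], dim=1)
        return self.decoder(self.encoder(x))


# Usage: feed one-hot segmentation "sketches" frame by frame.
gen = SketchToFrameGenerator()
labels = torch.zeros(1, 20, 128, 128)   # semantic layout from the graphics engine
labels[:, 3] = 1.0                      # e.g. every pixel labeled "building"
prev = torch.zeros(1, 3, 128, 128)      # black frame to start the sequence
frame = gen(labels, prev)               # next synthesized frame: (1, 3, 128, 128)
print(frame.shape)
```

Conditioning on the previously generated frame is what distinguishes video synthesis from per-frame image translation: it is one simple way to encourage temporal coherence, which the press release alludes to when it mentions learning the world’s “dynamics.”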
Read more at Globe Newswire.