Nvidia research scientists have developed a tool called GANverse3D, an Nvidia Omniverse extension that can be fed a single standard 2D photo of an object and create a realistic 3D model from it. These full 3D models can then be visualised and controlled in virtual environments in Omniverse. Nvidia reckons this rapid 3D modelling tech could be a boon to architects, creators, game developers and designers. While the 3D models might look a little basic (they remind me of N64 graphics), they can be polished up by 3D artists later if required, once ideas have been prototyped.
The above work comes via the Nvidia AI Research Lab in Toronto, where GANverse3D has been developed to "inflate flat images into realistic 3D models". GAN is an abbreviation of 'generative adversarial network', an increasingly popular AI and machine learning technique. Trained on real-world data sets, a GAN can do a pretty good job of inferring the unseen aspects of an object. One of the key attractions and advances of Nvidia's GANverse3D is that previous models for inverse graphics have relied on 3D shape data as well as multi-angle image training data; GANverse3D dispenses with the former.
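For readers curious what 'adversarial' means in practice, here is a minimal sketch of the training loop at the heart of any GAN, written in PyTorch. The toy network sizes, batch size and random stand-in data are my own illustrative assumptions and have nothing to do with Nvidia's actual architecture:

```python
# A minimal sketch of the adversarial setup behind any GAN.
# Sizes and data are illustrative toys, not GANverse3D's real model.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 3 * 32 * 32  # assumed toy dimensions

# Generator: maps a random latent code to a fake "image".
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# Discriminator: scores how real an image looks (as a logit).
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(16, image_dim) * 2 - 1  # stand-in for real photos

for step in range(100):
    # Train D: push real scores toward 1, fake scores toward 0.
    z = torch.randn(16, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(real_batch), torch.ones(16, 1)) + \
             bce(D(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train G: try to fool D into scoring fakes as real.
    z = torch.randn(16, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(16, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Once trained, the generator produces outputs the discriminator can no longer distinguish from real data, which is what lets a GAN 'fill in' sides of an object it has never directly seen.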
Before feeding the single image of the Knight Rider KITT car into GANverse3D, the researchers trained the AI with a library of car images taken from all angles. No 3D datasets were needed, as mentioned above, yet the resulting 3D mesh model looks pretty good considering the ease of the workflow. Nvidia added GANverse3D-generated wheels and headlights too. Brought into Omniverse with PhysX and real-time ray tracing applied, a developer could get a good feel for driving KITT in a virtual world quite quickly.
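To make that workflow concrete, here is a hypothetical Python sketch of the single-photo-to-mesh step described above. The GANverse3DModel class, predict_mesh method and Mesh structure are all my own placeholder names, not the extension's real API:

```python
# Hypothetical sketch of the single-image-to-mesh workflow described above.
# GANverse3DModel, predict_mesh and Mesh are placeholder names; the real
# Omniverse extension exposes its own interface.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list = field(default_factory=list)  # (x, y, z) positions
    faces: list = field(default_factory=list)     # vertex-index triples
    texture: bytes = b""                          # baked texture image

class GANverse3DModel:
    """Stand-in for a model pre-trained on multi-angle car images."""

    def predict_mesh(self, image_path: str) -> Mesh:
        # Conceptually: encode the photo into the GAN's latent space,
        # then decode that code into geometry and texture. This stand-in
        # just returns an empty placeholder mesh.
        return Mesh()

model = GANverse3DModel()              # trained on car photos, no 3D data
mesh = model.predict_mesh("kitt.jpg")  # one photo in, textured mesh out
# From here the mesh (plus the generated wheels and headlights) would be
# imported into Omniverse, where PhysX and ray tracing take over.
```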
Further details about GANverse3D are available via the Nvidia Blog and the full ICLR paper, authored by the Nvidia researchers.