Unlike the consumer metaverse, which was largely backed by Facebook and was effectively dead on arrival, the commercial metaverse is doing rather well. Nvidia is currently the darling of Wall Street and the primary driver of the commercial metaverse. But building this industrial-grade metaverse, and populating it with digital twins of real objects, requires a lot of heavy lifting: each real object must be 3D-scanned so it can be duplicated in the virtual environment.
To address this problem, Nvidia has come up with several tools to speed up the creation of a metaverse instance that can be used to simulate the real world. The latest of these is Neuralangelo, which automatically converts 2D videos into 3D assets with intricate details and textures, making the virtual copy nearly indistinguishable from the physical object it was copied from. It works at scales ranging from small objects to full-sized buildings, which makes the tool unusually capable.
Let’s talk about Neuralangelo this week.
2D files to 3D objects
The problem with many of our most advanced technologies, from AI to the metaverse, is the time it takes to create the related datasets and models. Anything that can significantly reduce the time needed to create those models and datasets flows directly through the project and has a material impact on how quickly the result becomes useful.
Say you wanted to recreate a crime scene, or virtually explore a building collapse after the fact. You might have lots of 2D videos to work from, but no one is likely to have scanned the subject matter in 3D. Being able to use existing 2D video to create 3D objects and environments not only opens the door to the rapid creation of a metaverse instance for planning, but could also be used to explore past events to determine problems or faults.
For instance, in the recent apartment building collapse in Iowa, the building is slated for quick demolition because it has become unstable. But the investigation into the cause of the collapse, and into who is at fault, is not complete. That makes it critical that some record of the building survive demolition, so that unanswered questions can still be addressed: why it collapsed, whether the response was adequate and appropriate, and even whether all of the building's tenants have been identified.
With a tool like Neuralangelo, the various pictures and videos of the building could be used to virtually recreate it and allow forensic investigators to explore the virtual building in the safety of their offices long after the building is demolished.
Neuralangelo is only one of several tools that Nvidia will present June 18-22 at the Conference on Computer Vision and Pattern Recognition (CVPR) in Vancouver. Another interesting offering is DiffCollage, a diffusion tool designed to create large-scale content. This would be useful for movie backdrops or very large renderings, like those that might be needed for an amusement park or a cityscape.
Creating the commercial metaverse
Nvidia’s success with the metaverse is impressive. Its Omniverse platform is currently the leading tool for most autonomous machine simulation and training, including autonomous cars. But the creation of these metaverse elements remains labor-intensive, making ever more automated and intelligent tools necessary to create those elements more quickly and inexpensively.
Neuralangelo and DiffCollage are two such tools coming out of Nvidia’s broad effort to help companies and governments spin up metaverse instances that can be used for simulation and testing, thus giving users of Nvidia’s Omniverse tool faster time-to-value.
It is efforts like these that are creating the commercial metaverse of tomorrow and ensuring that, at least in the commercial space, the metaverse isn’t just real but also incredibly useful.