Disclosure: Nvidia is a client of the author.
Nvidia this month announced a series of enhancements to its Omniverse creation and simulation tool. Collectively, they more tightly connect metaverse instances with the real-world devices they emulate, assuring that any related digital twins stay synchronized in real time with their real-world counterparts and substantially increasing realism.
This will have several near-term benefits for remotely administering any solution covered by a metaverse simulation; it will also provide a shorter path to full automation and set a framework that should make that final step faster and more reliable.
Let’s explore the connected metaverse this week and why it will accelerate full automation.
Connected digital twins to the rescue
The concept of connected digital twins is critical to making simulations more realistic by using sensors to assure the twins realistically emulate their real counterparts. This would allow a remote (or even on-premises) administrator to better locate and assess problems before they lead to failures. For example, in the case of a failing bearing, a fault that would typically be invisible to the human eye, sensors could translate the failure into a visual cue on the twin, highlighting the problem. (The admin could see the problem either virtually through a metaverse instance or by using AR glasses.)
Rapidly identifying equipment that’s out of spec and in danger of failure (from too much heat, noise, or vibration) would help with preventative maintenance and provide a richer support interface than a typical dashboard. That means a technician would more likely arrive on scene with the tools and part(s) needed to correct the problem rather than first making a diagnosis, then returning to fix the issue.
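To make this concrete, here's a rough sketch of how sensor telemetry could drive a visual cue on a twin. All names and threshold values here are hypothetical placeholders, not part of any Nvidia API; real limits would come from the equipment vendor's spec sheet.

```python
# Minimal sketch (hypothetical names and thresholds): map live bearing
# telemetry to a color cue on a digital twin, so an out-of-spec part
# becomes visible rather than an invisible fault.
from dataclasses import dataclass

# Illustrative spec limits; real values come from the equipment vendor.
VIBRATION_LIMIT_MM_S = 7.1   # assumed velocity threshold
TEMP_LIMIT_C = 85.0          # assumed temperature limit

@dataclass
class BearingReading:
    vibration_mm_s: float
    temperature_c: float

def twin_status(reading: BearingReading) -> str:
    """Return the color cue to paint on the twin's bearing model."""
    if (reading.vibration_mm_s > VIBRATION_LIMIT_MM_S
            or reading.temperature_c > TEMP_LIMIT_C):
        return "red"      # out of spec: dispatch maintenance now
    if reading.vibration_mm_s > 0.8 * VIBRATION_LIMIT_MM_S:
        return "yellow"   # trending toward failure
    return "green"        # within spec

print(twin_status(BearingReading(vibration_mm_s=8.2, temperature_c=70.0)))  # red
```

The point isn't the thresholds themselves; it's that the same rule that would trip a dashboard alert instead changes what the administrator literally sees on the twin.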
Adding artificial intelligence (AI) to the mix
Nvidia also announced it is training AIs to help diagnose a problem and advise on how to correct it, using synthetic data to lower the AI training time. Take that failing bearing, for example — rather than just replacing one, it might make more sense to replace several other perishable parts at the same time to minimize disassembly and assembly costs. AI could determine, based on historical repairs, that the bad bearing is a precursor to other failures, allowing a tech to anticipate and fix future problems before they crop up.
For instance, non-critical repairs can often be more cheaply addressed if the tech is on site and already working on something else.
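As a rough illustration of that idea, the sketch below mines hypothetical historical work orders for parts that usually fail alongside a bearing. The data and function names are invented for this example; a real system would train on a fleet's actual maintenance records.

```python
# Hypothetical sketch: use historical repair records to find parts that
# frequently fail alongside a given part, so a tech can replace them all
# in one visit instead of making a second trip.
from collections import Counter

# Assumed historical work orders: each set lists parts replaced together.
HISTORY = [
    {"bearing", "seal", "gasket"},
    {"bearing", "seal"},
    {"bearing", "belt", "seal"},
    {"belt"},
]

def co_replacements(part: str, min_rate: float = 0.5) -> list[str]:
    """Parts replaced alongside `part` in at least `min_rate` of its repairs."""
    jobs = [job for job in HISTORY if part in job]
    counts = Counter(p for job in jobs for p in job if p != part)
    return sorted(p for p, n in counts.items() if n / len(jobs) >= min_rate)

print(co_replacements("bearing"))  # ['seal']
```

Even this crude co-occurrence count captures the column's point: the bad bearing predicts the seal, so the tech brings both parts the first time.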
Next step: robotic repairs?
When you tie in Nvidia’s robotics efforts, repairs could bypass a human tech entirely: a trained repair robot could handle the work, triggered by the remote admin through an AI interface. Depending on what best fits the circumstances, the administrator could initiate an AI-automated response using equipment already on site, significantly speeding up the repair.
With that kind of system in place, the administrator’s role becomes simpler because the tasks are well defined and the triggers for them already fully instrumented and baked into the solution. You might not need an admin at all.
Moving to full automation
The path to full automation could take a decade or more. The first steps would be to fully instrument the areas to be covered, create connected digital twins of the infrastructure to be maintained, then use AI based on a combination of real and synthetic data to optimize maintenance and repairs. This data could be used as part of the training package for robots on site, while the administrative functions are automated; the latter should be the easiest part of the process.
Assuring data integrity and anticipating its eventual use for AI training would be instrumental to a timely and effective rollout of subsequent functions. I expect the most difficult step to be automating repairs. Few systems today are designed with the requirement that they be robotically maintained, but this will change over time.
I’ve toured sites that are pursuing the augmented-reality approach to maintenance, suggesting the initial move to connected digital twins may already be underway at several sites. We now have a reasonably well-defined path to fully automating data centers (which is what Nvidia demonstrated). This Nvidia video shows how you might initially use the metaverse to interface with a data center, and this one speaks to automating an entire site. Finally, this video showcases what might happen if an administrator had too much time and too little supervision.
Okay, that last one was a joke. But it does demonstrate that in the metaverse, rules don’t have to apply, eventually opening the door to innovations we can now only imagine.