
Doubling Down on Digital Twins: A New Approach to Get Operations Moving

By Patrick Hubbard

Risk can be a genuinely paralyzing concern for IT. Sometimes we fear that even looking at a system, much less making configuration changes, however badly needed, might cause it to fail in spectacular and career-limiting ways. But what if we had specimens of our most critical systems to experiment with, to test out crazy ideas (like continuous delivery), or simply to provide a sandbox for practicing basics like failover and recovery? Adding digital twins to your operations bag of tricks reduces risk, enables more complex integration, and creates new freedom for innovation.

Digital Twins Turn Monitoring into Science

Industrial machines have long been hooked up to sensors and connected through telemetry for monitoring and observability. From this instrumentation you could, for example, get information about a wind turbine’s performance—temperature, harmonic vibrations, power generated, direction of rotation, angle of the blade, etc.
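As a rough sketch of what one such telemetry sample might look like in code (the field names and values here are invented purely for illustration, not drawn from any particular turbine platform):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class TurbineReading:
    """One telemetry sample from a wind turbine (hypothetical field names)."""
    turbine_id: str
    timestamp: datetime
    nacelle_temp_c: float       # temperature at the nacelle
    vibration_hz: float         # dominant harmonic vibration frequency
    power_kw: float             # power currently being generated
    rotation_clockwise: bool    # direction of rotation
    blade_pitch_deg: float      # angle of the blade


reading = TurbineReading(
    turbine_id="wt-014",
    timestamp=datetime.now(timezone.utc),
    nacelle_temp_c=41.2,
    vibration_hz=1.7,
    power_kw=1850.0,
    rotation_clockwise=True,
    blade_pitch_deg=12.5,
)
```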

And while these systems are well instrumented, traditionally any change to tune them had to be made in real time on the real machine. That’s great if you have disposable peers for that machine, but a problem if they’re unique, or at least uniquely assigned.

A digital twin is essentially a virtual device that reflects the exact state, information, and organization of the physical device to which it’s connected. It’s a living, telemetry-driven model of the material entity, both simulating operation and evolving with the physical source system it models. If the model is close enough, you can test changes on the model first, observe the simulated effects on the physical system, and then decide whether to apply the changes.
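A toy sketch of that idea might look like the following, where the twin simply mirrors the last-known state of its physical counterpart and uses an invented, purely illustrative prediction model (a real twin would use physics or machine learning):

```python
class TurbineTwin:
    """Toy digital twin: mirrors the physical turbine's last-known state
    and can simulate a proposed configuration change against that state."""

    def __init__(self, turbine_id: str):
        self.turbine_id = turbine_id
        self.state: dict[str, float] = {}

    def ingest(self, sample: dict[str, float]) -> None:
        """Keep the twin in lockstep with telemetry from the physical device."""
        self.state.update(sample)

    def simulate_pitch_change(self, new_pitch_deg: float) -> dict[str, float]:
        """Predict power output for a proposed blade pitch using a naive,
        made-up linear model -- illustration only, not turbine physics."""
        current_pitch = self.state.get("blade_pitch_deg", 0.0)
        current_power = self.state.get("power_kw", 0.0)
        predicted = current_power * (1.0 - 0.01 * abs(new_pitch_deg - current_pitch))
        return {"blade_pitch_deg": new_pitch_deg, "predicted_power_kw": predicted}


twin = TurbineTwin("wt-014")
twin.ingest({"blade_pitch_deg": 12.5, "power_kw": 1850.0, "vibration_hz": 1.7})
print(twin.simulate_pitch_change(14.0))
```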

In the wind turbine example, let’s say you are thinking about shifting the physical blade angle to test effectiveness at generating power in a specific temperature/humidity band. First, the operations team makes the changes to the virtual configuration in the digital twin. Then, by observing the digital twin using the familiar metrics used to monitor the primary system, you can measure potential vibration, compare generation results, set performance expectations, and then safely apply the new blade angle routine to the generator.
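In code, that gating step might look something like the sketch below, where the predicted values and acceptance thresholds are entirely hypothetical:

```python
def approve_blade_pitch(prediction: dict, vibration_limit_hz: float,
                        min_power_gain_kw: float, baseline_power_kw: float) -> bool:
    """Gate a physical change on what the twin predicts: only approve the new
    blade angle if predicted vibration stays in bounds and power improves.
    Field names and thresholds are invented for illustration."""
    if prediction["predicted_vibration_hz"] > vibration_limit_hz:
        return False
    return prediction["predicted_power_kw"] - baseline_power_kw >= min_power_gain_kw


prediction = {"predicted_power_kw": 1905.0, "predicted_vibration_hz": 1.9}
if approve_blade_pitch(prediction, vibration_limit_hz=2.5,
                       min_power_gain_kw=25.0, baseline_power_kw=1850.0):
    print("Apply the new blade angle to the physical turbine")
else:
    print("Keep the current configuration; the twin predicts no safe improvement")
```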

Monitoring is the key to ensuring the digital twin is behaving as expected and to knowing that it is, in fact, an effective analog to its physical counterpart. The more precisely you can quantify how closely the digital representation matches the physical system, the less risk you take on when making physical alterations. As is always the case with monitoring, high accuracy and fidelity are how admins ensure their changes are beneficial and successful.
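One simple, purely illustrative way to quantify that fidelity is to score how closely the twin’s metrics track the physical device’s; the metric names and scoring formula below are assumptions, not a standard:

```python
def twin_fidelity(physical: dict[str, float], twin: dict[str, float]) -> float:
    """Rough fidelity score in [0, 1]: 1.0 means the twin's metrics match the
    physical device exactly; lower means more divergence."""
    shared = [k for k in physical if k in twin and physical[k] != 0]
    if not shared:
        return 0.0
    rel_errors = [abs(physical[k] - twin[k]) / abs(physical[k]) for k in shared]
    return max(0.0, 1.0 - sum(rel_errors) / len(rel_errors))


score = twin_fidelity(
    physical={"power_kw": 1850.0, "vibration_hz": 1.7, "nacelle_temp_c": 41.2},
    twin={"power_kw": 1838.0, "vibration_hz": 1.8, "nacelle_temp_c": 40.9},
)
print(f"twin fidelity: {score:.3f}")  # closer to 1.0 => safer to trust the twin
```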

Where Do Digital Twins Come From?

For some time, heavy manufacturers have led the charge to create and maintain digital twins, driven by the cost and scale of their machines. Operations teams enable telemetry from multiple systems to feed data in real time into simulations built by product teams, while IT is responsible for making sure the infrastructure provides the metrics and transport that keep the digital model working.

In enterprise IT, technology professionals are also building digital twins: setting up and configuring identical systems to experiment on. Advanced teams wield the most sophisticated versions—model-driven digital twins. Led by developers, these environments allow programmatic instrumentation of multiple digital twin models. However, that level of automation is primarily in place only at big players like Google and Amazon, where they may test multiple models driven by machine learning.

Digital Twins, IRL

It’s increasingly easy to find examples of companies that rely on digital twins. Though under fire for privacy practices, Facebook’s use of digital user twins is a good, if Orwellian, example. The artificial versions of its users, built from massive amounts of data and sophisticated twinning, allow the company to simulate behavior that has not yet been exhibited. This testing begets the targeted marketing that’s become its bread and butter.

And for those of us who maintain real systems for main street businesses, implementing a digital twin or two can create efficiencies and add business value. That’s especially true as we move to an increasingly software-actuated infrastructure. 

We’re going to ask new questions and receive unexpected answers. At the same time, automation increases the potential scope and speed of change errors. Digital twins will be particularly helpful for remediating these novel occurrences.

Even for fundamental operations chores, we believe potential lies in digitally twinning critical systems such as backup, or in unifying, or at least co-presenting, critical service delivery metrics from test and production systems.

Greater access to deep learning tools can also allow smaller companies to take advantage of digital twins. In fact, if you read the tea leaves of recent Amazon and Microsoft announcements, you’ll notice rapid delivery of bot services and tools that are first steps toward twins as a service, or at least a toolkit and skills to build your own.

Unfortunately, many organizations simply don’t have the in-house expertise for digital twinning, or don’t perceive a need for it. Others are legitimately worried about the significant custom development and new technology it would require.

But surprisingly, for most enterprises, the first step of digital twinning may be taken today with immediate benefit: building the right monitoring and telemetry into applications at the beginning instead of layering it on after the fact in operation.
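A small sketch of what building telemetry in from the beginning can look like, using only a hypothetical emit_metric helper and a timing decorator (in a real system the metrics would go to your monitoring pipeline rather than stdout):

```python
import json
import time
from functools import wraps


def emit_metric(name: str, value: float, **labels) -> None:
    """Stand-in metric emitter: prints structured JSON for illustration."""
    print(json.dumps({"metric": name, "value": value, **labels}))


def instrumented(fn):
    """Record latency and errors from day one, instead of bolting monitoring
    on after the application is already in production."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            emit_metric("app.request.error", 1.0, op=fn.__name__)
            raise
        finally:
            emit_metric("app.request.latency_ms",
                        (time.perf_counter() - start) * 1000.0, op=fn.__name__)
    return wrapper


@instrumented
def handle_order(order_id: str) -> str:
    return f"processed {order_id}"


handle_order("ord-42")
```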

With more real-time data available, IT, at a minimum, can demonstrate to leadership the value of lab and tools investment and identify which systems would benefit most from safe experimentation.

And that’s really the point. Experimentation is a good thing, and in IT it is a very good and badly needed thing. Too often we know that, but are held back by risk and fear. What will the traffic effects be if I change the caching strategy for my application? Will traffic route in unexpected ways during a failover? What happens to my digital experience monitoring metrics if I move content to a CDN? If I move an application to a new storage pool, will it become a noisy neighbor to everyone else on the LUN?

Are these examples as sexy as wind farm composite blade harmonics? Probably not. But they’re even more important to our users and leadership. And while some remain skeptical of the concept of digital twins, they’re increasingly used to improve the management and monitoring of real-world systems.

Perhaps one day companies may depend on digital twins as much as they do any other critical IT technology. Digital twins help deliver that most magical IT trick: accelerating change by removing risk. It’s real freedom for developers, IT, operations, and, ultimately, the business to do what leadership is asking for in the first place—innovate.
