Jim Gavigan

Digital Twin: Hype or Substance? Part II


I am sorry I am a week late in writing this, but I had a little distraction in a wind and rain storm called Irma. She decided to knock power out for me for almost 4 days, but I consider myself fortunate, as so many lost everything. My heartfelt prayers are for those who sustained much more damage than I did and/or who are going through Maria's wrath right now.

So, down to business. I have done some more thinking about the Digital Twin concept and have talked with numerous other people. I also keep coming back to the comment John Murphy made when I told him that I was not sure people would pay for digital twin technology in certain industries. I gave him the example of a paper machine and the complex modeling required to build a digital twin of one.

John challenged me by reminding me that people change parameters on complex machinery like a paper machine all the time without really understanding what the outcome might be. This costs money, especially when they are wrong and the change results in poor-quality paper or excessive downtime. Shouldn't they be making these changes on the twin and making their mistakes there?

That is certainly a compelling thought and one I have pondered for quite some time. The conclusion I have come to, from talking with others and from my own experience, is that the digital twin's shortcoming today is the modeling side of things. Industries like oil and gas have done complex modeling for years, and it makes sense there from a cost perspective. However, I think there are many industries where the cost of building and maintaining the models still outweighs the benefits, at least today.

In talking with someone recently about the GE-specific content I shared in the last post, he explained to me that where GE (and others working in this space) struggle is on the modeling side. Getting the data, building the analytics, and designing visualization tools are the easy parts, and machine learning and artificial intelligence algorithms get better by the day. Building a good model of the process is the difficult part, and it is what requires the most time and upkeep. This is typically where the companies promoting digital twins struggle.
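To show what I mean by the "easy part," here is a minimal sketch of fitting a data-driven model to historian data. Everything in it is a hypothetical stand-in: the CSV export, the tag columns, and the basis_weight quality variable are illustrative, not anyone's real setup.

    # A minimal sketch of the "easy part": fitting a data-driven model to
    # historian data. The CSV export, the tag columns, and the basis_weight
    # quality variable are all hypothetical stand-ins.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("paper_machine_tags.csv")  # hypothetical historian export

    X = df.drop(columns=["basis_weight"])  # predictor tags
    y = df["basis_weight"]                 # quality variable to predict

    # Keep time order when splitting, since this is process data
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

    model = GradientBoostingRegressor().fit(X_train, y_train)
    print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")

Even that handful of lines will happily produce a number. The point is that this commodity part of the stack is cheap; whether the model actually means anything about the process is another matter entirely.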

For the digital twin to gain adoption, I believe the modeling paradigm has to change. It has to get easier and cheaper. Models will need to be able to build themselves and let humans make minor modifications where the automation didn't quite capture things correctly. If one just looks at the complexity of modeling a paper machine, where one has to consider hundreds to thousands of variables on the machine itself, plus time-lagged variables from upstream processes like pulping, bleaching, and refining, it gets mind-bending in a hurry. Modeling a single locomotive engine or wind turbine is trivial in comparison. That is why I stated that single, simpler assets are better targets for the digital twin as it stands today.
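To give a feel for the upstream lag problem, here is a rough sketch of building time-lagged features in Python. The tags and the delay values are purely illustrative; in reality, the transport delays would have to come from process knowledge or something like cross-correlation analysis.

    # A rough sketch of time-lagged upstream features. The tags and the
    # transport delays are illustrative, not real values; in practice the
    # delays would come from process knowledge or cross-correlation analysis.
    import pandas as pd

    df = pd.read_csv("paper_machine_tags.csv",
                     parse_dates=["timestamp"], index_col="timestamp")

    # Hypothetical transport delays from each upstream stage to the reel
    lags = {
        "pulping_consistency": "45min",
        "bleaching_brightness": "30min",
        "refining_energy": "12min",
    }

    for tag, lag in lags.items():
        # Shift each upstream tag forward in time so its value lines up
        # with the paper it actually affected (assumes a regular index)
        df[f"{tag}_lagged"] = df[tag].shift(freq=lag)

    df = df.dropna()  # drop the start-up rows that have no lagged history

Three tags with three guessed delays are manageable. Hundreds of tags, each with its own delay that drifts with machine speed and grade, is where the modeling cost explodes, and that is exactly the part nobody has automated yet.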

I am certainly not permanently shutting the door on digital twin technology for the customers I serve, but I do think it will be well into the future before the bulk of them can truly use it. I will be monitoring the airwaves for changes and breakthroughs in the technology.

For now, my conclusion on the digital twin is "Hype for lots of industries, and tremendous substance for some very targeted industries."
