Your Data Fidelity Study
Industrial Insight's engineers have found very low data quality in every PI System we have reviewed.
This is not because the PI System isn't capable of collecting high-quality data; it is because the system and its individual tags are poorly tuned and don't reflect the nature of the process.
In many cases far too much data is stored, and in others not nearly enough. The picture below shows the main steam flow at one of our customers' facilities: almost 600 raw data points were produced, but only 2 were being stored. Imagine running a steam flow totalizer when each archived point can be up to 2,500 pounds per hour off from reality. You won't even get close to the correct steam total!
We see this type of anomaly at least once a month on something we are working on.
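The scale of that totalizer error can be sketched with a toy integration. Every number below is hypothetical; the point is only to show how an archive that keeps too few points makes an excursion vanish from the total:

```python
# Hypothetical one-hour steam flow, sampled once per minute (lb/hr):
# steady at 45,000 with a 20-minute excursion to 55,000 in the middle.
raw = [45_000] * 20 + [55_000] * 20 + [45_000] * 20
dt_hr = 1 / 60  # one-minute samples, expressed in hours

# True total: integrate every raw sample.
true_total = sum(v * dt_hr for v in raw)

# Over-compressed archive: only the first and last points survive, so the
# totalizer sees a flat 45,000 lb/hr line and the excursion disappears.
archived_avg = (raw[0] + raw[-1]) / 2
compressed_total = archived_avg * 1.0  # average flow * 1 hour

print(f"true total:     {true_total:,.0f} lb")       # 48,333 lb
print(f"archived total: {compressed_total:,.0f} lb")  # 45,000 lb
```

A 3,000+ pound error in a single hour, and the archive gives no hint that anything was missed.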
One of the main reasons people don’t trust their data is because the data compression algorithm is poorly tuned for the majority of the tags within the system.
The trust problem described above is only going to get worse when we get into Advanced Analytics, Machine Learning, and Artificial Intelligence initiatives.
Industrial Insight is offering a Data Fidelity Study, starting with OSIsoft PI System and Rockwell Historian (PI-based) customers.
We have been using the CompressionInsight software in our Data Fidelity Studies for some of our core customers and found some shocking results.
The vast majority of people don't tune their tags properly and are losing critical data, which can cost a facility a lot of money. We have also found that people have become frustrated over the years and set compression to zero, or worse, turned compression off. This means they are storing far too much data, which can create severe lags in the system when looking at long-term trends.
Some of the biggest use cases that we have found where this is critically important are:
Inaccurate utilization calculations
Missed process events, which can lead to costly unplanned downtime
Unreported environmental events
The graphs below, taken from one of our data fidelity studies, show that more than 47,000 tags (almost 99% of the system) have an exception deviation of 0, which can create more network traffic than necessary. Almost 31,000 tags have compression set to 0, meaning only exact duplicate values are discarded, so this customer is likely storing far more data than necessary as well. These are key indicators of a poorly tuned system.
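To see why a deviation of 0 stores nearly everything while a well-chosen deviation still catches real moves, here is a simplified deadband filter. This is an illustration only, not PI's actual swinging-door compression algorithm, and all values are hypothetical:

```python
def deadband_filter(values, deviation):
    """Keep a value only when it moves more than `deviation` away from
    the last kept value (a simplified exception/deadband test)."""
    if not values:
        return []
    kept = [values[0]]
    for v in values[1:]:
        if abs(v - kept[-1]) > deviation:
            kept.append(v)
    return kept

signal = [100.0, 100.2, 99.9, 104.0, 104.1, 100.3, 100.0]

# Deviation of 0: everything except exact duplicates is kept.
print(len(deadband_filter(signal, 0)))    # 7 points stored
# A deviation sized to the sensor noise keeps only the real excursion.
print(len(deadband_filter(signal, 0.5)))  # 3 points stored
# A deviation wider than the process swings loses the excursion entirely.
print(len(deadband_filter(signal, 10)))   # 1 point stored
```

The middle case is the goal: noise is filtered out, but the move to 104 is still in the archive.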
We have used these same techniques to save customers more than $1MM.
Several years ago, one of our paper customers was tracking high differential-pressure events on a washer screen and found a very poorly tuned pressure tag. If all of the pressure tags had been tuned this way, we wouldn't have caught any events, and this was a $500k/year issue. Fortunately, we caught enough events, and the customer traced the problem to a stock consistency issue.
We are also working with one of our customers on real-time utilization and real-time costing, and we have found a number of poorly tuned tags that produced significantly wrong daily, weekly, monthly, and yearly utilization figures.
If you are ready to get started, contact us using the form below and we will be in touch!