Developing Prognostics Value - A Partnership

The power of the prognostics tool lies in data the customer already generates, for example process, SCADA and/or condition data. Hitachi ABB Power Grids’ stochastic model takes these data histories and trends them into the future. The larger the fleet of assets in the model (say, 10 water pumps instead of one), the more data flows into the tool as a basis. The data trend of the past becomes the input for predicting future performance.
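The idea of trending past data into the future can be illustrated with a minimal sketch. The readings, threshold and straight-line fit below are purely hypothetical assumptions for illustration; the actual stochastic model described here is considerably more sophisticated than a linear extrapolation.

```python
import numpy as np

# Hypothetical sensor history: monthly bearing-temperature readings (degrees C).
history = np.array([61.0, 61.4, 62.1, 62.5, 63.2, 63.8])
months = np.arange(len(history))

# Fit a straight-line trend to the past readings.
slope, intercept = np.polyfit(months, history, deg=1)

# Extrapolate the trend six months into the future.
future_months = np.arange(len(history), len(history) + 6)
forecast = slope * future_months + intercept

# Estimate when the trend would cross an assumed alarm threshold.
threshold = 70.0
months_to_threshold = (threshold - intercept) / slope
print(f"Trend: +{slope:.2f} C/month; threshold reached around month {months_to_threshold:.1f}")
```

With more assets feeding the model, the same trend estimate rests on a broader data basis and becomes correspondingly more reliable.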

The prognostics are configured through a process we develop with the user’s experts. To digitise expert knowledge, our team works over a series of workshops to develop inputs for the solution. The discussion centres on questions such as:

  • What are the malfunction modes?
  • How are they defined?
  • How can they be detected?
  • How can they be mitigated?

These conversations extract the knowledge and experience of the organisation’s best people. Each malfunction mode is then correlated with data. For example, our team might ask, “How do you detect a bearing defect?” The expert lists the different data consulted, such as vibration, temperature and equipment load. Having documented the experts’ diagnostic view, we apply the mathematics to provide a prognosis for each unique malfunction mode.
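One way to picture the outcome of such a workshop is a configuration that maps each malfunction mode to the signals an expert would consult. The mode names, signal names and thresholds below are hypothetical assumptions, not the product's actual configuration schema:

```python
# Hypothetical configuration: each malfunction mode is mapped to the data
# signals an expert would consult, with illustrative warning limits (assumed).
config = {
    "bearing defect": {
        "vibration_mm_s": 4.5,
        "temperature_C": 75.0,
        "load_pct": 90.0,
    },
    "gasket failure": {
        "pressure_drop_bar": 0.3,
        "temperature_C": 80.0,
    },
}

def evaluate(mode: str, readings: dict) -> list:
    """Return the signals whose current readings exceed the configured limits."""
    limits = config[mode]
    return [sig for sig, limit in limits.items()
            if readings.get(sig, 0.0) > limit]

# Example: current readings for a pump suspected of a bearing defect.
readings = {"vibration_mm_s": 5.1, "temperature_C": 68.0, "load_pct": 95.0}
exceeded = evaluate("bearing defect", readings)
print(exceeded)
```

The point of the workshops is precisely to fill in such a mapping from the experts' heads, so the prognosis for each malfunction mode rests on the same signals the organisation's best people would check themselves.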

Later, through scenario analysis based on the configuration and the data prognosis, planners can explore the impact of operational scenarios, for example limiting equipment load. The system might be running at full capacity, but in the simulation users can run the numbers to see “what if”. For instance, after seeing that limiting load reduces strain sufficiently for the equipment to survive until the next scheduled intervention, the customer might decide to take on the residual risk of a radial or thrust bearing defect.
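The load-limiting scenario can be sketched with a toy model. The assumption here, purely for illustration, is that wear accumulates in proportion to load, so the remaining life is the remaining margin divided by the wear rate; the figures are invented:

```python
# Toy "what if" model (assumed): wear rate scales linearly with load,
# and remaining life is the remaining wear margin divided by the wear rate.
def months_until_failure(load_pct: float, margin: float = 12.0,
                         wear_per_month_at_full_load: float = 2.0) -> float:
    wear_rate = wear_per_month_at_full_load * (load_pct / 100.0)
    return margin / wear_rate

full_load = months_until_failure(100.0)   # keep running as-is
reduced = months_until_failure(70.0)      # what if we cap load at 70%?

print(f"At 100% load: ~{full_load:.1f} months; at 70% load: ~{reduced:.1f} months")
```

Even this crude arithmetic shows the shape of the decision: if the next scheduled intervention falls between the two estimates, capping load buys enough time to avoid an unscheduled outage.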

Validating the Power of Prognostics

Another common question is, “How do I validate the prognoses?” There are many ways to approach validation. The most reliable is historical analysis.

While working with one customer, we did a retrospective analysis of a gasket failure that had occurred on April 14th, 2018. The company did not see the failure coming, and the malfunction triggered costly, unscheduled downtime. Yet when we ran the data retroactively, we were able to determine how much advance notice the customer could have had with our solution. The data on March 1st showed nothing unusual, but from March 8th onwards, APM’s prognostic capabilities provided warnings of a data anomaly and forecasted when to expect the malfunction. In this case, the customer could have avoided a reactive situation and instead made informed decisions to scope and schedule the maintenance intervention.
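This kind of retrospective validation amounts to replaying historical data through the model and noting when the first warning would have fired. The daily anomaly scores and the alert threshold below are fabricated for illustration; only the dates (failure on April 14th, 2018, first warning around March 8th) come from the case above:

```python
from datetime import date, timedelta

# Dates from the case above; daily anomaly scores below are illustrative only.
failure_day = date(2018, 4, 14)
start = date(2018, 3, 1)
# Assumed scores: quiet for the first week, then drifting upward.
scores = [0.1] * 7 + [0.35 + 0.04 * i for i in range(38)]

THRESHOLD = 0.3  # assumed alert level

first_alert = None
for i, score in enumerate(scores):
    if score > THRESHOLD:
        first_alert = start + timedelta(days=i)
        break

lead_time = (failure_day - first_alert).days
print(f"First warning on {first_alert}, {lead_time} days before the failure")
```

The lead time found by the replay (here just over five weeks) is what turns the exercise from a post-mortem into evidence that the intervention could have been scoped and scheduled in advance.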

Still, it is important to note that we do not need past failure history to train the models. After all, our customers usually do not run their equipment to failure. That is exactly why we have the configuration process to digitise expert knowledge from employees. The experts who have avoided malfunction events in the past train our solution not only to anticipate malfunctions but also to avoid them in a more efficient and cost-effective manner.

In our experience, organisations start small because they first need to understand how to work with these prognoses. By focusing first on one asset type, the customer can identify benefits of the solution before scaling to a wider adoption. This also enables them to validate the solution one step at a time.

Conclusion

Finally, how much data is needed for the application to work and provide sufficiently reliable prognostic predictions? There is no single answer that lets us say, “This is the exact cutoff point.”

Certainly, more data is better. Nevertheless, we have worked with customers who had only a few months of data history to begin with. The sampling frequency and the types of malfunction or failure modes we are looking at also play a role. We are often surprised by how little data yields valid results. If we do find gaps, we can recommend a retrofit, although typically operators have enough data to start working and reap the benefits of our prognostic tools.

The essential point is that you don’t need to wait to start training an APM solution. Start with the data you have, and you’ll be amazed at how the power of machine learning can turn that data into your own early warning system – and give you the ability to stop a storm.

Hitachi ABB Power Grids Expert


Moritz von Plate

VP Business Development
Hitachi ABB Power Grids

Moritz von Plate is VP Business Development at the Enterprise Software product group of Hitachi ABB Power Grids. He has global responsibility for the group’s Asset Performance Management business, ensuring that customers from various industrial verticals, such as power generation, transmission & distribution, mining, oil & gas, rail and manufacturing, benefit from the product’s unique capabilities.

During his career, he has held a broad range of responsibilities, from management consulting to running an EPC contractor and being an entrepreneur with the data analytics startup Cassantec. His work has covered many geographies, especially North America, Europe and East Asia. This experience makes him well suited to helping industrial customers with their digital journeys.

You may be interested in:

  • Putting the Power of Prognostics into the Users’ Hands – knowing when machines will fail through the power of prognostic capabilities
  • Discover the power of prognostics in Power Generation – success in power generation is dependent on the reliability of critical assets
  • It’s not magic, it’s math – seeing into the future of your assets: realise the power of prognostics for mining, oil & gas and process industries