Predictive maintenance helps companies avoid expensive machine and system failures and allows timely ordering of spare parts. Dr. Olivia Lewis, Head of Data Science at The Unbelievable Machine Company, talks in an interview about entry barriers, necessary data know-how, and the opportunities for manufacturers presented by predictive maintenance.
Although predictive maintenance was one of the first use cases for big data, in practice we have only really seen isolated lighthouse projects. What is the biggest challenge in implementing such projects?
Olivia: IoT networks generate vast amounts of sensor data, and ensuring the quality of that data is a major challenge. In the projects we work on, we rarely encounter labeled datasets, which are essential for predictive maintenance built on supervised machine learning methods; without them, the data scientist’s mantra of “garbage in – garbage out” very much holds true. That’s why in practice it’s often a matter of first communicating what well-labeled data looks like, that is, data that contains information reflecting the so-called ‘Ground Truth’.
How should the data for supervised machine learning be labeled?
Olivia: You can imagine it as a table in which the columns “normal” and “abnormal” are ticked. A further improvement of the labeling, or ‘Ground Truth’, would be information on whether the abnormality is of a defective or functional nature. Optimal, however, would be labels specifying the concrete damage or problem. This is important in order to tell the algorithm at which point in time a machine is no longer functional. Only if we know when a machine is defective can we identify what pattern of behavior precedes the defect.
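The labeling scheme Olivia describes can be sketched as a small table. The field names and values below are purely illustrative assumptions, not data from a real project:

```python
# Illustrative sketch of labeled ("Ground Truth") sensor data.
# All field names and values are assumptions for the example.
readings = [
    {"timestamp": "2021-03-01", "vibration": 0.12, "state": "normal",   "damage": None},
    {"timestamp": "2021-03-02", "vibration": 0.11, "state": "normal",   "damage": None},
    {"timestamp": "2021-03-03", "vibration": 0.87, "state": "abnormal", "damage": "functional"},
    {"timestamp": "2021-03-04", "vibration": 0.91, "state": "abnormal", "damage": "bearing wear"},
]

# A supervised model would learn to predict "state" (and ideally "damage")
# from the sensor features; the labels tell it when the machine was defective.
abnormal = [r for r in readings if r["state"] == "abnormal"]
print(len(abnormal))  # → 2
```

The optional `damage` field corresponds to the finer-grained labels she calls optimal: it lets a model learn not just that something went wrong, but what.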
Of course, such labeling takes effort, and its value is only realized later, once the benefits of predictive maintenance become clear. Is there a different approach?
Olivia: There are also methods that work without pre-labeled data. Often a lot can be achieved with anomaly detection based on unsupervised machine learning algorithms. One example is vibration measurement of extruder gears: thanks to expert knowledge from the customer and the measured spectra, we were able to determine the months in which a machine ran well. On this basis it was possible to fit a Gaussian mixture model of the normal state using the expectation-maximization (EM) algorithm. The log-likelihood under this model then gave us a concrete anomaly parameter: a characteristic value that indicates how far a spectrum deviates from the normal state. This goes in the direction of outlier detection, where events and observations are identified that are considered “outliers” compared to the normal state. The anomaly parameter can then be used to determine that the deviation at a certain point is too strong – the point at which a human would say something is wrong here and we should check the machine.
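The approach Olivia outlines can be sketched in a few lines: fit a Gaussian mixture to data from the known-good months with EM, then use the negative log-likelihood as the anomaly parameter. The toy data, initial values, and one-dimensional feature are assumptions for illustration; a real spectrum would be high-dimensional:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a spectral feature during "good" months: two operating
# modes around 1.0 and 2.0 (an assumption for this sketch).
normal = np.concatenate([rng.normal(1.0, 0.1, 500), rng.normal(2.0, 0.15, 500)])

# EM for a 1-D, two-component Gaussian mixture fitted to the normal state.
mu = np.array([0.8, 2.2])     # initial means
var = np.array([0.05, 0.05])  # initial variances
pi = np.array([0.5, 0.5])     # initial mixing weights

def log_gauss(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

for _ in range(50):
    # E-step: responsibility of each component for each point
    logp = log_gauss(normal[:, None], mu, var) + np.log(pi)
    resp = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    nk = resp.sum(axis=0)
    mu = (resp * normal[:, None]).sum(axis=0) / nk
    var = (resp * (normal[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(normal)

def anomaly_score(x):
    """Negative log-likelihood under the normal-state model: higher = more anomalous."""
    logp = log_gauss(np.asarray(x, dtype=float)[:, None], mu, var) + np.log(pi)
    m = logp.max(axis=1)
    return -(m + np.log(np.exp(logp - m[:, None]).sum(axis=1)))

# A measurement far from both learned modes scores far higher than normal ones.
print(anomaly_score([1.0, 2.0, 5.0]))
```

A threshold on this score then marks the point at which, as Olivia puts it, a human would say something is wrong and the machine should be checked.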
It’s amazing that predictive maintenance algorithms can often make very accurate predictions. What did that look like in this case?
Olivia: Using this mechanism, we found all the points at which a person would have identified a problem. We also found two examples that the experts had overlooked, demonstrating that even when no labeled data is available, not everything is lost. In practice, such workarounds are more the norm than the exception.
The effort to create such algorithms for predictive maintenance is quite high and carries some cost. Is it worth it?
Olivia: It’s important to quantify the cost of a breakdown, for example when a broken machine stops an entire production line. This also includes consequences such as the additional expense of special shifts. ROI analyses often overlook the fact that the experts involved, who otherwise visually follow the machine’s key figures on screen – or click through Excel tables – are also a financial expense; trained specialists are not cheap. In addition, even experienced employees can miss things.
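The cost factors Olivia lists can be put into a simple back-of-the-envelope calculation. Every figure below is a hypothetical assumption chosen only to show the structure of such an ROI estimate:

```python
# All figures are hypothetical assumptions (in EUR) for illustration.
downtime_hours = 18               # line stands still until repair
lost_output_per_hour = 1200       # value of lost production per hour
special_shift_cost = 8000         # catch-up shifts after the outage
spare_part_express = 3500         # premium for rush delivery of the part

monitoring_hours_per_week = 10    # specialist time spent watching dashboards
specialist_rate = 90              # hourly cost of a trained specialist

# Cost of a single unplanned breakdown, including the consequences.
one_breakdown = downtime_hours * lost_output_per_hour + special_shift_cost + spare_part_express
print(one_breakdown)  # → 33100

# The often-overlooked line item: manual monitoring by experts, per year.
yearly_monitoring = monitoring_hours_per_week * specialist_rate * 52
print(yearly_monitoring)  # → 46800
```

Even with modest assumed figures, a few avoided breakdowns plus the reclaimed expert time can offset the development cost of a predictive maintenance system.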
Are there sometimes situations in which maintenance is very difficult?
Olivia: Absolutely. For example, we had an application scenario in an offshore wind farm. Replacing a ball bearing was not only very cumbersome, because maintenance personnel had to travel to the site by boat, but there were also very few ships equipped with the necessary, and very expensive, crane. The wind turbines were equipped with audio sensors to enable predictive maintenance, meaning we could receive the audio files directly from the offshore wind farm via radio.
How did you proceed?
Olivia: First of all, an expert checked the hub rotation noise and defined when a ball bearing is breaking or broken. With this input we trained a specific neural network, an auto-encoder, on the functional state. Such models learn to compress the data appropriately and, unlike supervised learning techniques, are optimized with regard to the quality of that compression; the last layer of the auto-encoder then tries to reconstruct the input signal. This makes auto-encoders particularly suitable as anomaly detection models: the compression is adapted to the frequently occurring “ordinary” data, so the rare data matter less in the optimization and therefore show higher reconstruction errors. As soon as there is a deviation from the normal state, the network amplifies that deviation.
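The reconstruction-error principle can be shown with a minimal sketch. To keep it self-contained, the bottleneck here is a linear auto-encoder with a one-unit code, whose optimum has a closed form (the top principal component); the project Olivia describes would of course use a deep nonlinear network, and the toy "audio features" are an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for 2-D audio features of healthy hub rotation: most variance
# lies along one direction (an assumption for this sketch).
t = rng.normal(size=(1000, 1))
normal = np.hstack([t, 2 * t]) + rng.normal(scale=0.05, size=(1000, 2))

# A linear auto-encoder with a 1-unit bottleneck, fitted to the normal state.
# Its optimal encoder is the top principal component, obtained via SVD.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
encode = vt[:1]  # bottleneck: compress the 2-D signal to a 1-D code

def reconstruction_error(x):
    """Squared error after encode/decode: high for signals unlike the training data."""
    x = np.atleast_2d(x) - mean
    recon = (x @ encode.T) @ encode  # decode = transpose of the linear encoder
    return ((x - recon) ** 2).sum(axis=1)

healthy = [1.0, 2.0]   # lies on the learned "ordinary" direction
broken = [2.0, -1.0]   # off-pattern vibration the compression cannot represent
print(reconstruction_error(healthy), reconstruction_error(broken))
```

Because the compression is tuned to the ordinary data, the off-pattern signal comes back badly reconstructed, and thresholding the reconstruction error flags it as an anomaly.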
What result have you been able to achieve?
Olivia: Our solution allowed us to detect most of the deviations. Experience has shown that from the moment the ball bearings begin to vibrate atypically, the wind turbines still continue to run for a few months. This means that a failure can sometimes be predicted up to a quarter of a year in advance. At the same time, the ship’s route can now be optimized, because you know which other ball bearings also need to be replaced.