In an earlier post, I covered the initial questions that production experts usually raise in early discussions. This time, I want to delve deeper into the more technological aspects of data analytics in a manufacturing environment.
1. In what cases does data analytics help leverage savings better than more traditional engineering and statistical methods?
Looking at the work habits (e.g. problem solving) and common tools (Excel, Six Sigma, etc.) of the majority of engineers and technicians today, the approach to optimization projects is most often one-dimensional. In the case of end-of-line scrap, for instance, only the test data is analyzed. This gives you the effect, such as certain failure codes or failed test steps, but not the cause that improvement measures should be targeting.
By using data mining techniques, you can consider test data (containing the information on which parts passed the tests and which failed) in conjunction with the respective process and quality data for the final product and for components. Add machine data, traceability data, environmental data, etc. into the mix. Then search for correlations to obtain new insights.
Specific algorithms aid in identifying multidimensional cause-effect relationships. For instance, the end-of-line scrap rate for a certain failure mode might increase when three conditions coincide: component A comes from supplier B, the press-in force is close to its lower limit, and machine X is shortly before its next planned maintenance.
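To make this concrete, here is a minimal sketch of the idea behind such multivariate mining: merge per-part test outcomes with process and traceability attributes, then rank factor combinations by scrap rate. The field names and toy data are illustrative assumptions, not a real plant schema, and real projects would use proper statistical or machine-learning methods rather than simple bucketing.

```python
# Hypothetical sketch: rank multivariate factor combinations by scrap rate.
# All field names and records are illustrative assumptions.
from collections import defaultdict

# Per-part records merged from test, process, and traceability sources (toy data).
parts = [
    {"supplier": "B", "press_force": "low",     "maint_due": True,  "nok": True},
    {"supplier": "B", "press_force": "low",     "maint_due": True,  "nok": True},
    {"supplier": "B", "press_force": "nominal", "maint_due": False, "nok": False},
    {"supplier": "A", "press_force": "low",     "maint_due": True,  "nok": False},
    {"supplier": "A", "press_force": "nominal", "maint_due": False, "nok": False},
]

def scrap_rate_by_combination(parts, factors):
    """Scrap rate for each observed combination of the given factors."""
    totals, fails = defaultdict(int), defaultdict(int)
    for p in parts:
        key = tuple(p[f] for f in factors)
        totals[key] += 1
        fails[key] += p["nok"]  # True counts as 1
    return {k: fails[k] / totals[k] for k in totals}

rates = scrap_rate_by_combination(parts, ["supplier", "press_force", "maint_due"])
worst = max(rates, key=rates.get)
print(worst, rates[worst])  # the combination with the highest scrap rate
```

Even this toy version shows why a one-dimensional view misses the pattern: no single factor fails reliably on its own, only the combination does.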
Having the right tools on hand to apply these algorithms is a basic enabler for this advanced analytics approach. Also, having the IT infrastructure and computing power to perform multivariate analyses within a reasonable amount of time is essential.
2. How do you convince experts to invest more in systems that help capture data so that analytics will eventually help?
The answer might surprise you: we try not to convince experts to invest in more data. In fact, we usually go the other way and start with what you have. The amount of data is just one factor, and rarely the one that provides a satisfactory answer on its own. Don’t overestimate how much data you need.
Instead, focus on other, more important factors: the quality and stability of your data sources, overall data quality, and validity.
Beyond that, it is essential for the analytics partner team to understand your business problem, as well as the technical process involved in your production step. Only then can analytics be applied correctly and the findings be interpreted correctly. The question “Has any machine, procedure, etc. been changed in the past two months?” is one that has made a difference many times. So instead of simply collecting more data, invest in educating your analytics partner about the business and the technical process.
One more thing helps ensure you invest in problem solving rather than in data collection: don’t put your trust in analytics projects that take months before you see initial results. You are the only expert on your production process! Go iterative, and discuss results with your analytics partner’s data, IT, and manufacturing experts in short phases that take just days or at most a few weeks. This is your best investment in an analytics project that is aimed at solving your problem.
3. When do you calculate business cases and ROI for improvement projects? At the start or at a certain point of maturity?
The correct answer is: both! Of course our analytics team discusses the ROI mechanisms with the customer at the beginning of a project, i.e. the business understanding phase. They want to make sure they will be working on a promising business case. One example could be the status quo and expected target state of scrap rate or unplanned downtime for a certain machine.
As soon as we start working with the data and learning more about its potential, we quickly identify whether this supports the initially defined ROI. This is reevaluated throughout the entire project and discussed in regular feedback sessions with the customer. This iterative customer-oriented approach is helpful in managing expectations on both sides.
To keep project risks low, we usually act in short, iterative project phases of just a few days each. At the end of each phase, we meet with the customer. We then discuss and decide whether and how to move into the next one. This has proven to be a very successful method for meeting our customers’ expectations at all times.
4. How do you find the right algorithm for each case?
Keep in mind:
On-site training and consulting are important in helping engineers understand the basic principles of data analytics and evaluate the quality of the models applied to their infrastructure and processes.
To find the right approach – i.e. analytics strategy, technologies, and algorithms – for an individual customer problem, it is essential to understand both the customer’s business case and the physical problem behind it. The physical problem might be certain failure causes that lead to machine breakdowns. In this case, experienced data scientists can pre-select some potentially feasible algorithms for solving the customer’s problem. The final decision is then made based on a model evaluation of the pre-selected algorithms, for example by comparing how accurate each algorithm’s or model’s predictions are.
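The final comparison step can be sketched very simply: score each pre-selected candidate on held-out data and keep the most accurate one. The two “models” below are deliberately trivial rules with an assumed press-force limit; in a real project they would be properly trained algorithms evaluated with cross-validation.

```python
# Illustrative sketch: choosing between pre-selected candidate models by
# held-out accuracy. The models and the 180.0 limit are assumptions.
def majority_model(_features):
    return 0  # baseline: always predict OK

def threshold_model(features):
    return 1 if features["press_force"] < 180.0 else 0  # assumed NOK rule

def accuracy(model, data):
    """Fraction of (features, label) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

# Held-out samples: (features, label) with 1 = NOK, 0 = OK (toy data).
holdout = [
    ({"press_force": 150.0}, 1),
    ({"press_force": 175.0}, 1),
    ({"press_force": 200.0}, 0),
    ({"press_force": 210.0}, 0),
    ({"press_force": 190.0}, 0),
]

candidates = {"baseline": majority_model, "threshold": threshold_model}
scores = {name: accuracy(m, holdout) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

Comparing every candidate against a trivial baseline like the majority model is a useful sanity check: a sophisticated algorithm that cannot beat “always predict OK” is not worth deploying.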
5. Do you perform an acceptance test for a data analytics model?
At the beginning of an analytics project, we work with the customer to define the objectives with regard to prediction accuracy, latency, etc. We then establish a clear understanding of the business case and of the technical and physical background of the customer’s problem.
These objectives are applied when it comes to evaluating the latest results and obtaining project acceptance.
After a prediction model is deployed in the customer IT environment or product (e.g. server, SPS/PLC, microcontroller of a chain saw), continuous “acceptance tests” (we call this model monitoring) are required. This ensures that the model is constantly predicting within the defined accuracy limits. If those limits are exceeded, the model has to be retrained (i.e. the model parameters have to be readjusted).
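A minimal sketch of such model monitoring might look like the following. This is an assumption about how the check could be structured, not the actual deployed implementation: track accuracy over a sliding window of recent predictions and flag the model for retraining once accuracy falls below the agreed acceptance limit.

```python
# Hypothetical model-monitoring sketch: flag a deployed model for retraining
# when its accuracy over a sliding window drops below the acceptance limit.
from collections import deque

class ModelMonitor:
    def __init__(self, window_size=100, min_accuracy=0.90):
        self.results = deque(maxlen=window_size)  # True = correct prediction
        self.min_accuracy = min_accuracy

    def record(self, predicted, actual):
        self.results.append(predicted == actual)

    def needs_retraining(self):
        if len(self.results) < self.results.maxlen:
            return False  # not enough evidence yet
        return sum(self.results) / len(self.results) < self.min_accuracy

monitor = ModelMonitor(window_size=10, min_accuracy=0.9)
for _ in range(10):
    monitor.record(predicted=1, actual=1)   # model performing well
for _ in range(3):
    monitor.record(predicted=1, actual=0)   # drift: predictions start failing
print(monitor.needs_retraining())  # True once windowed accuracy drops below 90%
```

In production, the window size and accuracy limit would come from the acceptance criteria agreed with the customer at project start, closing the loop between the initial objectives and ongoing operation.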
The target prediction accuracy depends largely on the use case and the risk associated with inaccurate predictions. 90% or more is often realistic. In general, however, it’s important to realize that 100% accuracy can never be achieved. And depending on the production quantity, even 0.1% of NOK parts wrongly classified as OK can be unacceptable. At this point, we usually talk with production and quality engineers about how to catch these misclassified parts with other measures in the quality firewall (e.g. based on existing process and product FMEA).
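A quick back-of-the-envelope calculation shows why even a small misclassification rate matters at scale. The production volume below is an illustrative assumption:

```python
# Back-of-the-envelope: what a 0.1% misclassification rate means at volume.
# The annual volume is an illustrative assumption.
annual_volume = 1_000_000          # parts produced per year
misclassification_rate = 0.001     # 0.1% of NOK parts predicted as OK
escaped_parts = annual_volume * misclassification_rate
print(escaped_parts)               # potentially defective parts slipping through
```

At a million parts per year, that is on the order of a thousand escaped defects annually, which is exactly why additional quality-firewall measures remain necessary alongside the model.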