
Process Manufacturers--Great Batch, Every Batch

Written By: Olin Thompson
Published On: August 30, 2004

What Makes a Batch Great or Terrible?

The definition of a "great batch" or a "terrible batch" varies from business to business, or even from product to product. The measurement encompasses yield, cost, and quality metrics. Most manufacturers understand what a good batch is. From a financial standpoint, the definition distills to how well an individual batch contributes to profitability. Better yield means a lower cost per unit; higher quality should mean a higher price. The measurement reflects either a lower cost or a higher price, and in either case, the great batch delivers improved contribution, while the terrible batch delivers lower contribution, or even a loss.

If we rank all batches from the best to the worst (see figure 1), we see a small percentage at the top that is great and a small percentage at the bottom that is terrible. If we plot the P&L contribution for each batch, we see a strong correlation between contribution and the ranking of the batches.
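The ranking described above is straightforward to compute once we have per-batch figures. A minimal sketch, using entirely hypothetical batch IDs and dollar amounts:

```python
# Hypothetical per-batch records: units produced, revenue, and total cost.
# Contribution is revenue minus cost; all figures are illustrative.
batches = [
    {"id": "B-101", "units": 950, "revenue": 4750.0, "cost": 3800.0},
    {"id": "B-102", "units": 1020, "revenue": 5100.0, "cost": 3750.0},
    {"id": "B-103", "units": 880, "revenue": 4200.0, "cost": 3900.0},
]

for b in batches:
    b["contribution"] = b["revenue"] - b["cost"]

# Rank from best (highest contribution) to worst.
ranked = sorted(batches, key=lambda b: b["contribution"], reverse=True)
for rank, b in enumerate(ranked, start=1):
    print(rank, b["id"], b["contribution"])
```

Plotting the contribution of each batch in this ranked order produces the curve in figure 1.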

Figure 1

How To Improve

A very meaningful question is, what causes great versus terrible batches? Experienced production people will give us some reasons, as their experience reveals a correlation between what goes into the batch and what comes out. They understand cause and effect. They understand the relationship between individual material characteristics and process variables and their effects on the manufacturing process.

However, many variables determine the ranking of the batch. If we look at the production process as a black box (see figure 2), we can see a series of ingredient inputs (A, B, C, D, etc.), each with a set of quality parameters (ingredient D with quality parameters D1, D2, etc.). For example, if an ingredient is a fruit juice, the quality parameters may be acidity, percent of solids, sugar content, etc.

Figure 2

We also see a series of process variables (temperature, pressure, etc.) that can impact the ranking of the batch.

All of this information produces a great deal of data. For example, a single batch composed of ten ingredients can have ten or more quality parameters for each ingredient, plus perhaps a dozen process variables, resulting in many potential combinations. To understand the relationships, we need to look at a series of batches, tracking and comparing the data from each batch. If we could understand the correlation between the batch ranking and these variables, we could identify the conditions that produce a desirable (or undesirable!) outcome. We could stop making terrible batches and make a lot more great batches (and improve many of the batches in between). This results in improved profitability, as shown in figure 3.
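One way to organize the raw data for a single batch is a record holding each ingredient lot with its quality parameters, plus the recorded process variables. A minimal sketch; the parameter names follow the fruit-juice example above, and the rest of the values are invented:

```python
# Hypothetical record for one batch: ingredient lots with quality
# parameters, plus process variables captured during the run.
batch = {
    "batch_id": "B-101",
    "ranking": 14,  # position when batches are sorted best to worst
    "ingredients": {
        "D": {"lot": "D-2211", "acidity": 3.4, "solids_pct": 11.2, "sugar_pct": 9.8},
        "A": {"lot": "A-0457", "moisture_pct": 6.1},
    },
    "process": {"temperature_c": 82.5, "pressure_kpa": 101.3, "mix_time_min": 42},
}

# Count the variables tracked for this batch (quality parameters plus
# process variables; the "lot" field is an identifier, not a parameter).
n_tracked = sum(len(ing) - 1 for ing in batch["ingredients"].values()) \
    + len(batch["process"])
print(n_tracked)
```

With ten ingredients and ten parameters each, this count quickly reaches a hundred or more variables per batch, which is why a systematic comparison across batches is needed.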

Figure 3

Collect, Analyze, Monitor, and Predict

The more information we have, the better our analysis will be. We face several challenges in collecting, analyzing, and monitoring this information to improve the batches.

First, how do we collect the information? The information on individual ingredient lots and end-products is typically available, but not in a format that we can use. It may come from the supplier's Certificate of Analysis (COA), our own test labs, or outside testing facilities. This data has been captured, but not for our purposes. Process parameters may be available from data historians, Manufacturing Execution Systems (MES), or other control systems. In both cases, the existing information has to be made available for our analyses. Once we have enough information (the last one hundred batches?), we can correlate our parameters with the ranking of the batch.
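In practice this means pulling the scattered records together, keyed by batch. A minimal sketch, where the COA and historian extracts are hypothetical stand-ins for real system exports:

```python
# Hypothetical extracts: lab/COA test results and historian readings,
# each keyed by batch ID.
coa_results = {  # from supplier COAs and our own test labs
    "B-101": {"D3": 4.1, "A5": 0.82},
    "B-102": {"D3": 4.7, "A5": 0.79},
}
historian = {    # from the data historian or MES
    "B-101": {"temperature_c": 82.5},
    "B-102": {"temperature_c": 81.9},
}

# Merge the two sources into one record per batch for analysis.
combined = {
    batch_id: {**coa_results.get(batch_id, {}), **historian.get(batch_id, {})}
    for batch_id in coa_results
}
print(combined["B-101"])
```

Real extracts will need cleansing (matching lot numbers to batches, reconciling units), but the end state is the same: one row of parameters per batch, ready to correlate with the batch ranking.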

Figure 4

Second, we need to analyze the information, looking for correlations between the input parameters and the batch ranking. Various statistical techniques will yield information that can show us the common elements that improve the profit potential. What can we find, and how can we use that information?

Consider the simple example in figure 4. When we look at parameter 3 of all lots of ingredient D (D3), we see that all lots were in spec (between the lower and upper limits of the specification). We also see a very high correlation between the actual test result for each lot and the ranking of the corresponding batch (the lower-quality batches correlate with the D3 result being on the lower end of the specification). This high correlation means that controlling D3 can increase our production performance. For example, we may want to change the specification on D3 by raising the lower limit. If we tightened the specification, we would increase the number of great batches and eliminate our terrible batches. Since we can make more money with the improved batches, we may want to push our suppliers to adhere to this specification, even to the point of increasing the cost of the ingredient. A simulation (which can be done by most enterprise resource planning [ERP] systems) can tell us whether paying more for D will actually make more money for us or not.
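The kind of correlation described here can be checked with a plain Pearson coefficient. A minimal sketch; the D3 results and contribution figures are invented to mirror the figure 4 pattern, where higher D3 tracks higher contribution:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: the D3 test result for the lot used in each batch,
# paired with that batch's P&L contribution.
d3 = [3.1, 3.4, 3.9, 4.2, 4.6, 4.8]
contribution = [220.0, 310.0, 540.0, 700.0, 930.0, 1010.0]

r = pearson(d3, contribution)
print(round(r, 3))  # close to 1.0: a strong positive correlation
```

A coefficient near 1 (or -1) flags a parameter worth controlling, like D3; a coefficient near 0 flags a parameter like A5 in the next example, whose specification could perhaps be relaxed.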

For the opposite effect, consider figure 5. Looking at ingredient A, parameter 5 (A5) shows a different situation. Again, all lots met the specification, but we see no correlation between the ranking of the batches and the A5 test result. Therefore, A5 does not affect the batch ranking. But while we consistently used ingredients that met the specification, they were of relatively low quality. Perhaps the specification limit for A5 should be broadened, since it appears not to have an impact on our profit potential. By opening the specification, can we get a lower price? How about improved delivery? If so, we have actually improved the profit potential of the batches without sacrificing quality.

Figure 5

Third, we need to utilize this information to monitor the critical parameters and predict results, both to avoid terrible batches and to ensure ongoing improvements.

Using these parameters, our monitoring activities will help us eliminate terrible batches, produce more great batches, and improve most, if not all, batches. For example, monitoring the incoming ingredients can predict problems before they occur, and can also expose material specifications that are not appropriate for particular finished products. Having this information can help us select ingredient lots that will result in a great batch instead of leaving it to chance. We can use the information to work with vendors to help them improve their performance and, therefore, our performance.
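A monitoring rule built from this analysis can be very simple. The sketch below assumes the earlier analysis identified D3 as a critical parameter and found that lots below a certain result correlate with poor batches; the threshold and lot data are purely illustrative:

```python
# Illustrative alert limit learned from the batch-ranking analysis:
# D3 results below this level have correlated with poor batches.
D3_ALERT_THRESHOLD = 3.8

def screen_lot(lot_id, d3_result):
    """Flag incoming ingredient lots predicted to drag down batch performance."""
    if d3_result < D3_ALERT_THRESHOLD:
        return f"{lot_id}: HOLD - D3 {d3_result} below {D3_ALERT_THRESHOLD}"
    return f"{lot_id}: RELEASE"

print(screen_lot("D-2230", 3.5))
print(screen_lot("D-2231", 4.4))
```

The same check can run at receiving, at lot selection for a batch, or as feedback to the supplier, which is where the vendor-performance benefit comes from.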

Is this analysis a one-time approach to improvement? No, the approach works on an ongoing basis, contributing to continuous improvements and improved profitability.

Who Can Help?

Our approach to seeking a great batch calls for information on every batch. For most enterprises, this is data that we already collect, but not in a form that serves our purpose, so we need to organize it. We need to analyze the information to gain fresh insights into our processes and develop new or revised approaches. To gain long-term benefits from these insights, we need to monitor incoming materials and production processes on an ongoing basis, using this information to predict, and therefore avoid, unprofitable situations. A number of software vendors and products can assist with some or all of these tasks, for example:

Vendor Data Collection and Organization Analysis Monitor and Predict
ABB X
Activplant X
Advant X
AspenTech X
GE Fanuc X
Gensym X
GSQA X X X
Honeywell X
Optimax X
OSISoft X
Pavillion X
POMS X
PSE X
SAS X X
SPSS X X
Umetrics X
Wonderware X

Summary

Can we make every batch a great batch? No, but we can eliminate the terrible batches and lift the ranking of every batch. The result is a financial gain. We can eliminate the batches that lose money. We can increase the financial contribution of many, if not all, batches. Once we have the information, we then need to apply the tools to realize this business gain.

About the Author

Olin Thompson is a principal of Process ERP Partners. He has over twenty-five years of experience as an executive in the software industry. Thompson has been called "the Father of Process ERP." He is a frequent author and an award-winning speaker on topics of gaining value from ERP, SCP, e-commerce, and the impact of technology on industry. He can be reached at Olin@ProcessERP.com.

 