I hate statistical process control. What's the point of it, anyway? To create a situation where nothing really changes. Where one subgroup is pretty much the same as all the others. Frankly, it's boring.
It isn't boring at first. When an SPC procedure begins, the process under scrutiny usually is all over the place. A team will experiment by holding all variables constant and measuring consecutive units or batches. They'll find tremendous variation, not have the vaguest idea why and thoroughly enjoy themselves. They'll think: We identified everything that was important, didn't we? And held it all constant, right? Did something change anyway? Are we certain we really know how to measure this process? Their brains whirl at the possibilities.
Then, gradually, they pin down one special cause after another. The process improves, then stabilizes. One day management decides that it's good enough. The team is disbanded. The process is "in control." They know what the next subgroup will measure. And the next. And the next …
A controlled process reminds me of death. A living thing changes constantly. No two days are the same. Even a plant faces a different reality each day. But when a living thing dies, it stops changing.
A process in control is like a patient who has flat-lined. Medically speaking, the relevant measurements show no sign of life. The brain's electrical activity is nil. The readings remain stable and predictable, like the control chart of a process in control.
But flat-lined patients sometimes can be shocked back into life. Administering an electrical jolt in just the right way can cause the body to begin functioning again. The heart starts beating, the brain starts working, the patient begins to live once more. Change occurs.
That's what I like to do with controlled processes. When I know what keeps them boringly stable, I want to see what happens if I kick them around a bit. Let's increase X and tweak Y a little. Things are fun once again, only now I'm in control of the changes instead of groping in complete darkness. Because I use simple control charts to analyze the results of these experiments, I call the approach statistical process improvement (SPI) to distinguish it from SPC.
Often, people mistakenly believe that the only proper way to analyze the effect of changing a process is by statistically designed experiments. In fact, this misconception unnecessarily restricts experimenting because design of experiments (DOE) is relatively complicated, and in most organizations only a few people have the skills needed to decipher its arcane language and esoteric results. However, much progress can be made using less sophisticated tools, such as ordinary charts and graphs. After all, DOE wasn't invented until well into the 20th century, but that didn't keep Archimedes or Galileo from making discoveries.
Let's say you're a process operator, and you think changing variables a and b might improve the process. You decide that each variable will run at a "high" and a "low" level, and you run 25 parts with a and b both low (LL), which is the "normal" setting. Then you run 25 more parts with a high and b low (HL), then 25 with a low and b high (LH), and finally 25 with both a and b high (HH). The results of such an experiment are shown on the X-bar and sigma chart in Figure 1.
The sigma chart indicates that the dispersion of the process remained stable despite the changes made to a and b. However, the process average changed, as shown on the averages chart. Running a high and b low (HL) increased the average by about five or six compared to normal. Running a low and b high (LH) decreased the average by five or six. Finally, when both a and b are set to their high levels, the process operates about the same as normal.
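To make the arithmetic concrete, here's a minimal sketch in Python of how such an experiment could be tabulated. Everything in it is an assumption chosen to match the description above, not the actual data behind Figure 1: the measurements are simulated, the 25 parts per setting are broken into subgroups of five, the baseline average is 100, and the HL and LH settings shift it by about five and a half units.

```python
import random
import statistics

random.seed(1)  # reproducible simulated data

SUBGROUP_SIZE = 5
A3, B3, B4 = 1.427, 0.0, 2.089  # X-bar/s control chart constants for n = 5

# Hypothetical process means for each setting; the spread stays
# constant, as the sigma chart in the example showed.
SETTINGS = {"LL": 100.0, "HL": 105.5, "LH": 94.5, "HH": 100.0}
SIGMA = 2.0

def run_parts(mean, count=25):
    """Simulate measuring `count` consecutive parts at one setting."""
    return [random.gauss(mean, SIGMA) for _ in range(count)]

# Collect data in the run order described above: LL, HL, LH, HH.
data = {name: run_parts(mu) for name, mu in SETTINGS.items()}

# Break each setting's 25 parts into subgroups and summarize them.
xbars, sds, labels = [], [], []
for name, parts in data.items():
    for i in range(0, len(parts), SUBGROUP_SIZE):
        subgroup = parts[i:i + SUBGROUP_SIZE]
        xbars.append(statistics.mean(subgroup))
        sds.append(statistics.stdev(subgroup))
        labels.append(name)

# Compute control limits from the baseline (LL) subgroups only, so the
# other settings are judged against "normal" behavior.
base_x = [x for x, l in zip(xbars, labels) if l == "LL"]
base_s = [s for s, l in zip(sds, labels) if l == "LL"]
grand_mean = statistics.mean(base_x)
s_bar = statistics.mean(base_s)

print(f"X-bar chart: CL={grand_mean:.1f}  "
      f"UCL={grand_mean + A3 * s_bar:.1f}  LCL={grand_mean - A3 * s_bar:.1f}")
print(f"s chart:     CL={s_bar:.2f}  UCL={B4 * s_bar:.2f}  LCL={B3 * s_bar:.2f}")
for name in SETTINGS:
    avg = statistics.mean(x for x, l in zip(xbars, labels) if l == name)
    print(f"{name}: average of subgroup means = {avg:.1f}")
```

Running it prints the X-bar and s chart limits computed from the baseline subgroups, plus the average result at each setting, which is the same comparison Figure 1 makes graphically.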
What to make of these results depends largely on economic and engineering considerations. What will it cost to set a and b to the different levels? Do we want results higher or lower than normal? If we are shooting for maximum stability at a target of 100, then the results tell us we can run either LL or HH and get the desired result.
In other words, although we still don't know how to decrease variability, we learned a lot about this process by conducting the experiment. And we didn't need anything fancier than control charts to do it.
Purists might point out that this is a sloppy way to conduct an experiment. I agree. However, it's better than trying nothing at all. Often, companies lack the trained and qualified technical personnel to conduct properly designed experiments. My objective is to conduct many experiments, shooting for quantity rather than quality.
An especially relevant criticism would note that the order in which the experimental units are run isn't random. Any lurking factor that happens to change in step with the deliberate changes is confounded with them, so we could conclude that the deliberate change's effect is real when it isn't.
We can overcome this confounding by operating the process at the suggested settings over a period of time. Either the changes work and confirm our findings, or they don't. Another alternative is to conduct a more rigorous DOE to verify the suspected effect. In this case, SPI is used to complement DOE.
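As a sketch of that confirmation step, again with made-up numbers, the check amounts to nothing more than plotting new subgroups against the limits established before the change:

```python
import statistics

def confirm(new_subgroups, ucl, lcl):
    """Compare confirmation-run subgroup means against existing limits.
    If the new settings really work, the means should sit where the
    experiment predicted, with no out-of-control signals."""
    for i, subgroup in enumerate(new_subgroups, start=1):
        xbar = statistics.mean(subgroup)
        verdict = "in control" if lcl <= xbar <= ucl else "out of control"
        print(f"subgroup {i}: mean = {xbar:.1f} ({verdict})")

# Hypothetical follow-up data gathered while running at the HH setting,
# checked against X-bar limits like those computed in the earlier sketch.
confirm([[99.1, 101.3, 100.2, 98.7, 100.9],
         [100.4, 99.8, 101.1, 99.5, 100.6]],
        ucl=103.0, lcl=97.0)
```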
Good enough is never good enough. We should use SPI to help us find ways to make good processes better.
About the author
Thomas Pyzdek is president and CEO of Pyzdek Management Inc. He has written hundreds of articles and papers on quality topics and has authored 13 books, including The Complete Guide to the CQM.
Comments can be e-mailed to Pyzdek.