When your process outcomes are not what you expect them to be, it is common to adjust the process. This is not always appropriate. To understand when adjustments are appropriate, and when they are not, we need to learn how to distinguish between the noise contained within the data and a signal that an adjustment is needed. Both the problem and the solution will be illustrated by an example from my own experience.
Plant A
Plant A was operated using the values delivered by the in-house lab. The lab director wanted to deliver good values, so he faithfully followed the recommendation from the manufacturer of one of his key analytical instruments and recalibrated that instrument every day against a known industrial standard.
…
Comments
As Seen In Action
Your column described a phenomenon I observed (with shock) at one of my previous employers. They used a pH-conductivity meter to make measurements. Every day, before the first production measurement, they would calibrate the meter with a known standard. Because the manufacturer recommended this procedure, and because it had been done for as long as anyone could remember, my efforts to explain why this approach was wrong fell flat.
Lesson 1: Habits are difficult to overcome.
Lesson 2: Manufacturer's recommendations aren't always right.
Even though I no longer work there, I have no doubt that the practice of calibrating the meter every day still goes on.
(Shrikant Kalegaonkar, twitter: @shrikale, LinkedIn: http://www.linkedin.com/in/shrikale/)
Institutionalized Rule 2
Great article, great example. This practice is just unabashed, institutionalized Rule 2 of the Funnel. I agree that it can be difficult for many organizations to accept, because it's "just common sense" that if one number is different from another number, something must have caused it to be different. That kind of mechanistic, deterministic mindset drives a lot of bad behavior.
I used to enjoy teaching this concept to Marines, when I was teaching SPC in the Department of the Navy. Marines just "got it," almost immediately, because they knew that you don't adjust your sights based on the last shot fired, you adjust based on the group. Once you get the group centered on target, you leave it alone.
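That intuition is easy to check with a few lines of simulation. Below is a minimal sketch (in Python; the target, sigma, and sample size are arbitrary illustrative choices) comparing Rule 1 of the funnel (leave the process alone) with Rule 2 (shift the aim to compensate for the last deviation). Under Rule 2 the standard deviation inflates by a factor of about the square root of two.

```python
# A minimal simulation of Rules 1 and 2 of Deming's funnel experiment.
# TARGET, SIGMA, and N are arbitrary illustrative choices.
import random
import statistics

random.seed(1)
TARGET, SIGMA, N = 10.0, 1.0, 100_000

# Rule 1: leave the process alone; each result is target + noise.
rule1 = [TARGET + random.gauss(0.0, SIGMA) for _ in range(N)]

# Rule 2: after each result, shift the aim to compensate for the
# last deviation from target (i.e., "recalibrate" on every reading).
rule2 = []
aim = TARGET
for _ in range(N):
    x = aim + random.gauss(0.0, SIGMA)
    rule2.append(x)
    aim -= x - TARGET  # compensating adjustment

print(f"Rule 1 std. dev.: {statistics.stdev(rule1):.3f}")  # close to SIGMA
print(f"Rule 2 std. dev.: {statistics.stdev(rule2):.3f}")  # close to SIGMA * sqrt(2)
```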
Plant B always wins
Another excellent article from Dr. Wheeler. Our company experienced almost the same situation. We have an analytical instrument that measures a property of our product that is critical to quality. Many years ago the instrument was recalibrated on a time-based schedule, whether it needed it or not. We adopted the Plant B approach and ran weekly calibration checks with a known standard, plotting the results on an XmR chart. We found that the instrument retains its calibration much longer than anticipated. When a calibration check produced a signal, we first inspected the instrument and the standard for special causes. In many cases the instrument was simply contaminated or the standard was damaged. Correcting these problems and rerunning the calibration check often showed that the instrument was indeed still in calibration.
Our customers noticed the improved consistency in our product, which helped us increase market share.
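For anyone wanting to try the Plant B approach, here is a minimal sketch of the XmR computation for a series of calibration-check values. The check values below are invented purely for illustration; 2.66 and 3.268 are the standard scaling constants for charts of individual values.

```python
# XmR (individuals and moving range) chart limits for weekly
# calibration checks; the data below are made up for illustration.
checks = [5.02, 4.98, 5.01, 5.00, 4.97, 5.03, 4.99, 5.02, 4.98, 5.01]

xbar = sum(checks) / len(checks)
moving_ranges = [abs(b - a) for a, b in zip(checks, checks[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard scaling constants for individual values:
unpl = xbar + 2.66 * mr_bar  # upper natural process limit
lnpl = xbar - 2.66 * mr_bar  # lower natural process limit
url = 3.268 * mr_bar         # upper limit for the moving-range chart

print(f"X chart:  {lnpl:.3f} to {unpl:.3f} (central line {xbar:.3f})")
print(f"mR chart: upper limit {url:.3f} (central line {mr_bar:.3f})")
```

A check value inside the natural process limits is routine variation; only a point outside those limits (or a moving range above its limit) is a signal that the instrument may need attention.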
simple question
I am not certain where 0.675 comes from in the probable error calculation for one point. Can someone address this? Thanks!
0.675
It's in a number of Dr. Wheeler's books. The simplest explanation is that it marks off the middle 50% of the normal distribution. An error will exceed 0.675 times s (or "sigma hat") about half the time; equivalently, a value will fall within xbar plus or minus 0.675 sigma-hat about half the time. More detailed explanations, and the rationale for using the probable error, may be found in Understanding Statistical Process Control (3rd Ed.), EMP III Using Imperfect Data, or Evaluating the Measurement Process.
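The value is easy to verify with the Python standard library: the third quartile of a standard normal distribution sits at about 0.6745, so roughly half of all values fall within 0.675 standard deviations of the mean.

```python
# Where 0.675 comes from: the middle 50% of a normal distribution
# lies within +/- 0.6745 standard deviations of the mean.
from statistics import NormalDist

z = NormalDist().inv_cdf(0.75)  # third quartile of N(0, 1)
print(f"z = {z:.4f}")           # ~ 0.6745

# So P(|X - mu| <= 0.675 sigma) is about one half:
coverage = NormalDist().cdf(z) - NormalDist().cdf(-z)
print(f"coverage = {coverage:.3f}")  # ~ 0.500
```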