In the article “Four Control Chart Myths from Foolish Experts” by Davis Balestracci (Quality Digest Daily, March 30, 2011), the following comments were made regarding what Balestracci considers statistical process control (SPC) myths:
“Myth No. 4: Three standard deviation limits are too conservative.
Reality: Walter A. Shewhart, the originator of the control chart, deliberately chose three standard deviation limits. He wanted limits wide enough so that people wouldn’t waste time interpreting noise as signals (a Type I error). He also wanted limits narrow enough to detect an important signal that people shouldn't miss (avoiding a Type II error). In years of practice he found, empirically, that three standard deviation limits provided a satisfactory balance between these two mistakes. My experience has borne this out as well.
…
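Shewhart's balance between Type I and Type II errors can be illustrated with a quick simulation (this sketch is not from the article; the process mean, sigma, and sample size are invented for illustration). For an in-control, normally distributed process, points falling outside the three-standard-deviation limits are pure false alarms, and their rate should come out near the theoretical 0.27 percent:

```python
# Illustrative sketch: estimating the Type I (false-alarm) rate implied by
# Shewhart's three-standard-deviation limits for an in-control process.
import random

random.seed(42)

MU, SIGMA, N = 10.0, 2.0, 200_000

# Simulate an in-control process: every point is pure noise around the mean.
data = [random.gauss(MU, SIGMA) for _ in range(N)]

# Three-sigma control limits around the (known) process mean.
ucl = MU + 3 * SIGMA
lcl = MU - 3 * SIGMA

# Fraction of in-control points that would still trigger a signal.
false_alarms = sum(1 for x in data if x < lcl or x > ucl)
alpha_hat = false_alarms / N

print(f"estimated false-alarm rate: {alpha_hat:.4f}")  # theory: ~0.0027
```

Tightening the limits to two sigma would raise that per-point false-alarm rate to roughly 4.5 percent, which is exactly the "interpreting noise as signals" waste Shewhart wanted to avoid.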
Comments
Well done!
John, nice article. People who understand a little bit (enough to be dangerous) of what Shewhart was saying end up making all sorts of silly statements justified because "Shewhart's limits are economical, not statistical!" He and Deming well understood statistics as the foundation of a decision-making heuristic for the real world, not because inferential statistics are "real" but because they provide a process to follow for consistently making economical decisions. This harkens back to Deming's often-misunderstood enumerative vs. analytic dichotomy. You managed to explain Shewhart's reasoning in a way that is clear and cogent. Well done!
Thomas
Thanks for presenting a scientific approach. It seems that some of my colleagues are approaching the subject of process control and improvement dogmatically rather than rationally.
Thomas Pyzdek
www.pyzdekinstitute.com
"It depends"
To my three distinguished colleagues John, Tom, and Steve,
I hereby anoint all three of you to be in the rarefied "1-2%" to whom I alluded in past articles of those who need and -- as you've all proven -- have advanced statistical knowledge. I have NO doubt the three of you could blow me out of the water theory-wise (although I do have an M.S. in statistics).
John's article rightfully applies to many, many manufacturing situations...or even some research situations. There truly is a need for this knowledge...in specific situations where an expert is needed. This is the type of stuff I did the first 10 years of my career as an industrial statistician....and very few people bothered to listen. I'm even wondering how many QD readers will listen.
My QD articles mainly address the plague of "statistical training of the masses" caused by Six Sigma for applications to business processes and service industries -- and I have no doubt that all three of you would blow those audiences out of the water, too. For what purpose?
As Deming wrote to Gerry Hahn (very distinguished applied statistician) in a personal correspondence shown to me in 1984:
"Sorry about your misunderstanding...TOTAL! When will statisticians wake up?"
But, hey...if I need a Fisher's information calculated, you'll be the first ones I'll call.
Who's "right"? We're BOTH right--"It depends!"
I have a different position
(Sorry about the bullets - all the paragraphs get stuck together making an unreadable wall of text. This way at least it is an unreadable FORMATTED wall of text...)
I guess we agree to disagree
--"You cannot hear what you do not understand."
--"Information is not knowledge. Let's not confuse the two."
--"We know what we told him but we don't know what he heard."
[Taken from "The Best of Deming" by Ron McCoy]
Davis' Experiences Are Different From Mine
Sampling costs vs. failure costs
I recall learning a procedure to calculate the total cost of quality for acceptance sampling, with the following components: (1) Cost of inspection or testing, (2) Cost to replace nonconforming items caught by the inspection (internal failure), and (3) Cost of failure in the customer's hands (external failure). The external failure cost of something like a pacemaker is obviously unacceptable so 100 percent testing is indicated. Performance of this kind of calculation for SPC requires accurate knowledge of both the false alarm risk and beta risk (cost of reacting to a nonexistent problem vs. cost of allowing the process to drift out of control), which in turn supports the need to model non-normal processes with non-normal distributions.
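The three-component total-cost calculation described above can be sketched in a few lines. This is a hypothetical worked example with invented numbers (lot size, defect rate, and all costs are assumptions, not from the comment), showing why a catastrophic external failure cost, as in the pacemaker case, drives the answer toward 100 percent testing:

```python
# Hypothetical worked example (all numbers invented) of the total cost of
# quality for acceptance sampling: inspection cost, internal failure cost
# (defectives caught and replaced), and external failure cost (escapes).

def total_quality_cost(lot_size, frac_inspected, defect_rate,
                       cost_per_test, internal_failure_cost,
                       external_failure_cost):
    """Expected total cost for one lot at a given inspection fraction."""
    inspected = lot_size * frac_inspected
    caught = inspected * defect_rate               # internal failures
    escaped = (lot_size - inspected) * defect_rate # reach the customer
    return (inspected * cost_per_test
            + caught * internal_failure_cost
            + escaped * external_failure_cost)

# A pacemaker-like case: external failure is catastrophic.
kwargs = dict(lot_size=1000, defect_rate=0.001, cost_per_test=50,
              internal_failure_cost=500, external_failure_cost=1_000_000)

cost_sample = total_quality_cost(frac_inspected=0.1, **kwargs)
cost_full = total_quality_cost(frac_inspected=1.0, **kwargs)

print(f"10% sampling: ${cost_sample:,.0f}")   # escapes dominate the cost
print(f"100% testing: ${cost_full:,.0f}")     # inspection cost dominates
```

With these assumed numbers, 100 percent testing is far cheaper than sampling because even a fraction of one escaped defective outweighs the entire inspection bill. The same arithmetic for SPC requires the alpha and beta risks the comment mentions, which is where the non-normal modeling argument comes in.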
A Non-Expert's Perspective
John, thank you for an excellent post. I think that it is important for both practitioners and managers to recognize that SPC should be about economical control of processes, not just statistical control.
However, in ten years of using and studying SPC at several companies, I have never once had the luxury of being able to estimate the economics of alpha, or the economic consequences of Type I and Type II errors. I've worked with colleagues and managers to try to get them to make estimates, and what I find is that (a) the data don't exist and (b) no one wants to take the time to guesstimate. Indeed, I have seen people use tightened control limits not based on economic analysis but because they wanted to reduce variation using 100% inspection and were using the natural process limits to set tolerance limits (I'll let the reader count how many opportunities for improvement are implied by that statement).
I have, unquestionably, been living in target-rich environments, having spent most of my time in R,D&E with smaller companies where statistical techniques and SPC are new, and any application of statistical methods represents a vast improvement over previous conditions. However, I have seen this same problem at much larger and more established companies, too. This is clearly why approaches like Shainin Red X (or GM's "Statistical Engineering") are popular: most people are not in a position to estimate basic statistical factors, let alone economic ones. At the same time, any standard factor, or even a formulaic approach that pre-calculates all statistical factors, is better than whatever was being done previously.
In short, I think that John's article (and the references) is extremely valuable. You can be sure that I will be saving this article and obtaining the references. Based on my experience and observations, Davis' points are entirely applicable and appropriate to a very broad audience of practitioners, who, for better or worse, are not in environments mature enough to benefit from John's more refined approach.
BRAVO! Tom
Very, VERY astute comments -- and, unfortunately, my industrial experience as well.
Of course, those of us who love statistics are all too eager to apply the (needed) "advanced" stuff. But, as Tom says, it falls on deaf ears.
As I've broadened my practice to educate people more about "variation," I may be using the "simple" stuff, but, you know what? -- I've never had more fun...or been more effective.
Well done!
Once again, guys, we're BOTH right.
Kind regards,
Davis
A good read indeed!
John:
While I question your source of passion (blasting other articles), I always find your articles to be a good read … this one being right up at the top.
You have the potential to develop the next generation of SPC: models, methods, and tools, not just models. Cost/benefit analysis, judging with confidence, flexible sampling and limits based on need, normality, autocorrelation, shifts, drifts, and rules that catch things the Western Electric rules don't catch should all be easily considered, as you seem to suggest. I suggest you connect with a statistical software company (Dr. Neil Polhemus at Stat Graphics?) to take the common SPC offerings they all tend to provide and turn them into next-generation SPC.
Thanks for giving me another article for my “SPC WOW articles” file.
KN – www.KimNiles.com