Fourteen years ago, I published “Do You Have Leptokurtophobia?” Based on the reaction to that column, the message was needed. In this column, I would like to explain the symptoms of leptokurtophobia and the cure for this pandemic affliction.
Leptokurtosis is a Greek word that literally means “thin mound.” It was used to describe those probability models that, near the mean, have a more pointed (or peaked) probability density function than that of a normal distribution. Due to the mathematics involved, a leptokurtic probability model has more area near the mean than a normal model, and tails that are more attenuated than those of a normal distribution.
The first part of this characterization means that leptokurtic models will always have more than 90% of their area within 1.65 standard deviations of the mean, leaving less than 10% outside this interval. The second part means that while leptokurtic models will have less area in the outer tails than a normal model, a few parts per thousand (but never more than two dozen parts per thousand) may be found more than three standard deviations away from the mean.
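These two numerical claims are easy to check. The sketch below uses the Laplace (double-exponential) distribution, scaled to unit variance, as one leptokurtic example; the choice of that particular model is an illustrative assumption, not part of the column.

```python
# Compare a normal model with a unit-variance Laplace model (leptokurtic).
# The Laplace model is only one example; it stands in for the whole class.
from scipy.stats import norm, laplace

lap = laplace(scale=2 ** -0.5)  # variance of a Laplace model = 2 * scale**2 = 1

for name, dist in [("normal", norm()), ("laplace", lap)]:
    within = dist.cdf(1.65) - dist.cdf(-1.65)  # area within 1.65 SD of the mean
    beyond = 2 * dist.sf(3)                    # two-tailed area beyond 3 SD
    print(f"{name:8s} within 1.65 SD: {within:.3f}  beyond 3 SD: {1000 * beyond:.1f} per thousand")

# normal   within 1.65 SD: 0.901  beyond 3 SD: 2.7 per thousand
# laplace  within 1.65 SD: 0.903  beyond 3 SD: 14.4 per thousand
```

As promised, the leptokurtic model packs slightly more than 90% of its area within 1.65 standard deviations of the mean, and the area beyond three standard deviations stays in the parts-per-thousand range.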
…
Comments
Leptokurtophobia
We jump to solutions before we attempt to understand. The first true statement is that, given no modification to the process, with all variables remaining "as is," it is likely to take between 25 minutes and 2 hours. If that range is unacceptable, the next step is NOT to modify the data, it is to detail the process.
There are obviously process variables that are not being controlled adequately. Performing a quick outlier test (Q1 - IQR*1.5, Q3 + IQR*1.5) shows any instance less than 2.5 minutes or greater than 102.5 minutes can be considered an outlier. There is a cluster of data around the 102.5-minute outlier, indicating a separate, smaller cluster distinct from the data on the left of the chart. Defining the differences in process between the outlying cluster and the main cluster can begin to narrow the window further. It isn’t necessary that the data all be homogeneous, simply that they be collected consistently. Only once the PROCESS is homogeneous will the data fall in line.
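A minimal sketch of that quartile fence test (Tukey's rule) follows; the cycle-time data here are made up, so the fences and flagged points are illustrative only.

```python
import numpy as np

def tukey_fences(data, k=1.5):
    """Return (lower, upper) fences for the k*IQR outlier test."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Hypothetical cycle times in minutes: a main cluster plus a smaller high cluster.
minutes = np.array([25, 28, 30, 31, 33, 35, 36, 38, 40, 42, 103, 105])
low, high = tukey_fences(minutes)
outliers = minutes[(minutes < low) | (minutes > high)]
print(f"fences: ({low:.1f}, {high:.1f})  flagged: {outliers}")
# fences: (16.1, 55.1)  flagged: [103 105]
```

The flagged cluster is then a lead for the process detective work described above, not a license to delete the points.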
Not a phobia
Excellent as always. However, while ignorance of Dr Shewhart's brilliance is greater today than ever, I feel it is not a "phobia". Ignorance of the fact that Process Behavior Charts work on any data, without data torture, is promulgated by perpetrators of the Six Sigma Scam. Poppycock is promoted for profit.
For example, how could ASQ toss away $35,000 per gullible Six Sigma victim by making Quality as simple as it should be? Dr Wheeler has proven that a simple XmR chart, with a single rule (points outside limits), is all you need for any sort of variable or count data. It is too simple to be profitable for con men.
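For readers who have not seen one, here is a minimal sketch of that XmR computation, using the standard scaling constants (2.66 = 3/d2 and 3.268 = D4 for moving ranges of size two); the data are hypothetical.

```python
import numpy as np

def xmr_limits(x):
    """Natural process limits for an XmR (individuals and moving range) chart."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))               # moving ranges between successive values
    x_bar, mr_bar = x.mean(), mr.mean()
    lower, upper = x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar
    return lower, upper, 3.268 * mr_bar   # X limits and mR upper limit

values = [40, 41, 40, 42, 39, 41, 40, 41, 62, 40]   # hypothetical measurements
lower, upper, mr_ucl = xmr_limits(values)
signals = [v for v in values if v < lower or v > upper]  # the single rule
print(f"limits: ({lower:.1f}, {upper:.1f})  points outside: {signals}")
# limits: (26.6, 58.6)  points outside: [62]
```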
Whenever teachers lack understanding...
"Whenever teachers lack understanding, superstitious nonsense is inevitable."
Truer words have never been said! Just yesterday, my daughter sent me her three-year-old son's progress report from preschool. She is saddened by the fact that the label applied to her son is the same label that was pinned on her years ago!
"He is quiet and we work with him to promote verbal interaction."
The ratings are Satisfactory, Progressing, or Needs Improvement, and there are 52 categories kids are judged on. The big question is: Can we fix the child's quietness, or is it "normal"? Superstition believes we can fix it. Dr. Deming believed in abolishing grades and gold stars in school. He noted, "Judging people does not help them." As a professor, Dr. Deming graded himself. Where was he failing? How could he improve his teaching?
Sorry if a little off-topic.
Thank you for another great article!
Allen
Normal distribution, its absence, and cause for thought
Another wonderful paper by Don, thank you.
Whether control charts depend on normality is something I had not given much thought to, coming from the school that says do not transform your data.
Historically I can see if a process has changed based on eight (or seven) successive points either below or above the centerline, and then "I have a decision to make". The pattern rule (does it look "nonrandom"?) is also quite helpful. Occasionally I'm not fortunate enough to get eight points from a single process; instead, I may receive two points in every ten batches. This low number of batches is still enough to skew the histogram and produce control limits that are not very effective, especially when lots of data are reviewed in one go.
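A minimal sketch of that run rule, assuming the eight-point version; the data and centerline are made up for illustration.

```python
def run_signal(values, centerline, run_length=8):
    """True if `run_length` successive values fall on the same side of the centerline."""
    run, prev_side = 0, 0
    for v in values:
        side = (v > centerline) - (v < centerline)  # +1 above, -1 below, 0 on the line
        run = run + 1 if side != 0 and side == prev_side else (1 if side != 0 else 0)
        prev_side = side
        if run >= run_length:
            return True
    return False

print(run_signal([5, 6, 7, 6, 8, 7, 6, 7], centerline=4))  # True: eight points above
```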
It comes down to looking closely at the data and, importantly, using the rational subgrouping strategies that Don has previously written about extensively. This paper has gently reminded me that in life normality is not a constant, and that if normal distributions are not present and my control limits do not look that effective, I should perhaps look at the data a bit more carefully instead of blindly following software-generated control limits. Cheers Ian