Body
From the perspective of data analysis, rare events are problematic. Until we have an event, there is nothing to count, and as a result many of our time periods will end up with zero counts. Since zero counts contain no real information, we need to consider alternatives to counting the rare events. This article will consider simple and complex ways of working with rare events.
…
Comments
Another Classic from the Master
Andrew Torchia, Principal Quality Consultant, www.qaexp.com
Once again Dr. Wheeler condenses a chapter's worth of material into a single column that is highly educational yet easy to read.
A classic?
Looking at the graph of infection rates, my instincts told me that the process was stable, as nothing about it looked nonrandom. So I did some fact-checking, and the data certainly appear to fit the distributional assumption of geometric (and distribution fit is important for G and T Charts). I then took Dr. Wheeler's control limits and calculated the odds of random data from that geometric distribution falling outside the control limits. The result: 18.3%. 18.3%! That's your false alarm rate. He then circles 20% of the points and claims there is a lack of control.
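For concreteness, here is a minimal sketch in Python of that kind of check. The fitted probability and the limits below are made-up stand-ins, since the article's actual numbers are not reproduced in this thread:

```python
from scipy.stats import geom

# Hypothetical stand-ins; the article's fitted distribution and limits
# are not reproduced in this thread.
p = 0.05              # assumed per-opportunity event probability
lcl, ucl = 2.5, 60.5  # assumed control limits for counts between events

# Chance that a single in-control point falls below the LCL or above the UCL
false_alarm = geom.cdf(lcl, p) + geom.sf(ucl, p)
print(f"false alarm rate per point: {false_alarm:.1%}")
```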
If you think this is "a classic," then have fun chasing false alarms all day. Apparently no research at all was done on the properties of the method he is proposing, and as a statistician, I am deeply concerned about anyone reading this and blindly believing it is a better method. Since this method also requires calculations on your data, I do not see how it is easier, either.
Further, the methods described here for constructing standard G and T Charts are not the common ones, and chances are very good that your statistical package uses a much better method and does not require two different charts to detect shifts up and down.
If you have this type of data, check whether it meets the distributional assumption of geometric or exponential (a histogram should be a good enough check). If it does, use the standard G or T Chart that Minitab or other packages provide. If it does not, then it is most likely more symmetric, in which case an I-MR Chart would work fine, although you should ensure you have a positive LCL, as sketched below.
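As a rough illustration of that I-MR check, here is a sketch using the conventional 2.66 times average-moving-range limits; the inter-event times are made up:

```python
import numpy as np

# Made-up days between events, purely for illustration
x = np.array([12, 30, 7, 45, 22, 60, 15, 33, 9, 41], dtype=float)

mr = np.abs(np.diff(x))           # moving ranges of successive points
x_bar, mr_bar = x.mean(), mr.mean()

# Conventional I-MR limits for individual values: X-bar +/- 2.66 * mR-bar
ucl = x_bar + 2.66 * mr_bar
lcl = x_bar - 2.66 * mr_bar
print(f"UCL = {ucl:.1f}, CL = {x_bar:.1f}, LCL = {lcl:.1f}")
if lcl <= 0:
    print("LCL is not positive: the data may be too skewed for an I-MR chart")
```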
Use the method described in this article at your own peril.
Rare Events and Safety Statistics
I have been using the methodology described by Dr. Wheeler to analyze safety data (OSHA Recordable Injury Rates, in particular) for several years, with good success in seeing signals that are otherwise difficult to detect.
Great article
I actually met Don Wheeler at an ASQ annual conference in 1996. I went to his booth asking the question of what to do about rare events. His answer was "Buy this book" (Understanding Variation: The Key to Managing Chaos), and I even got his autograph (though I am sad to report someone at some point borrowed the book, and I don't have it on hand).
I've used the rate-between-events chart described in the beginning of the article with good success. It does get a bit hard to explain to management, as the average of the rates ends up being skewed higher than a layman might expect. I am convinced this "skewing" is good for the analysis.
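For illustration, a short sketch of that rate conversion with made-up gaps; it also shows why the average of the instantaneous rates sits above the naive overall rate:

```python
import numpy as np

# Made-up gaps between events, in days
days_between = np.array([12.0, 30.0, 7.0, 45.0, 22.0, 60.0, 15.0])

# Each gap becomes an instantaneous annualized rate (events per year)
rates = 365.0 / days_between

# The short gaps pull the average of the rates above the naive overall
# rate of 365 / (mean gap) -- the "skewing" described above.
print(f"mean of instantaneous rates: {rates.mean():.1f} events/yr")
print(f"365 / mean gap:              {365.0 / days_between.mean():.1f} events/yr")
```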
In my empirical experience, the charts do tend to be a little more susceptible to false alarms than others, but in the safety arena I'd rather have a few extra false alarms.
One suggestion I have, which removes the backward-looking "report card" feature, is to plot a "ghost point" for today whenever the chart is updated. That is, if an event were to happen one minute after I updated the chart, what would it look like? That gives the context of how long we have gone since the last event. Stretching things a little, I've even converted that into the exponential probability that we could have gone that long given the current average rate that is plotted. That hasn't been as beneficial, especially in layman conversations, and it may be questionable statistically, but it might be worth considering.
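A quick sketch of that exponential calculation, with made-up numbers:

```python
import numpy as np

# Made-up numbers: average gap between events and time since the last one
mean_gap_days = 30.0
days_since_last = 75.0

# Under an exponential model with this mean, the chance of going at least
# this long without an event is the survival function exp(-t / mean).
p_gap_this_long = np.exp(-days_since_last / mean_gap_days)
print(f"P(no event for {days_since_last:.0f}+ days) = {p_gap_this_long:.1%}")
```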
Thanks,
Reply to Jstats
The question addressed by any process behavior chart is more basic than "What is the shape of the histogram?" or "What is the probability model?" It has to do with whether we can meaningfully use any probability model with our data. In starting with distributional questions, you beg the question of homogeneity, which the chart was created to address. This mistake was made by E. S. Pearson in 1935 and has been repeated many times since. You need to reread Shewhart's two books more carefully.
Distributions
Don, thank you for your reply to my comments. This seems to be a chicken-and-egg situation. If you assume the data to be homogeneous, then you may only detect a future issue that has deviated from the current homogeneous state. However, if the data were not homogeneous to begin with, then this assumption may be masking signals. So the best decision would seem to be to use some other knowledge to consider whether your assumption is reasonable or not.
In this case I would use two pieces of knowledge. The first is that such a process is known to have geometrically distributed "opportunities between occurrences." It may be reasonable to believe this is that kind of process, but we probably want other evidence as well. That brings me to the second piece of knowledge, which is the graph itself: removing the centerline and control limits, the data simply look random. As a practitioner, I would have serious doubts that the points labeled as out-of-control actually have special causes, and I would not waste time pursuing special causes for 20% of my points.
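For illustration, a small simulation (with hypothetical parameters) showing that counts of opportunities between occurrences come out geometric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Bernoulli opportunities with a small per-opportunity probability
p = 0.02
events = rng.random(200_000) < p
gaps = np.diff(np.flatnonzero(events))  # opportunities from one event to the next

# Counted this way, the gap is geometric on {1, 2, ...} with mean 1/p
print(f"observed mean gap:  {gaps.mean():.1f}")
print(f"geometric mean 1/p: {1 / p:.1f}")
```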
I have certainly read Shewhart's books and articles, and I understand he did not want control charts to be interpreted as fitting specific probabilities associated with the normal (or any other) distribution. However, among all of the "example" datasets he uses, there is never a sample that comes out as skewed as time-between-events data typically does. They are all roughly "moundish." I don't think considering data to have potentially come from a highly skewed distribution is rejecting Shewhart's ideas so much as evaluating an area in which Shewhart did not publish work.
In determining the best way to plot data from what we have good reason to believe is a highly skewed distribution, the exponential or geometric distributions may not be perfect, but they provide more reasonable limits than Shewhart's methods and require only one chart. I find this much more useful, with more balanced and reasonable false alarm rates.
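One common way to get such probability-based limits is to place them at the conventional 0.135% tail areas of the fitted geometric. A sketch with a hypothetical fitted p (this is not necessarily the exact method any particular package uses):

```python
from scipy.stats import geom

p = 0.05  # hypothetical fitted per-opportunity event probability

# Limits placed at the conventional 0.135% tail areas of the fitted geometric
lcl = geom.ppf(0.00135, p)
ucl = geom.ppf(1 - 0.00135, p)
print(f"LCL = {lcl:.0f}, UCL = {ucl:.0f}")
```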
Also, I'm interested in investigating the methods you described for G and T Charts, as they are not the ones I am familiar with. Is there a reference I can consult for more information? Your help would be greatly appreciated!