Real Common Cause
While teaching a class, I asked participants to create a control chart of an indicator that was important to them. A lab supervisor presented me with the chart below on the number of procedures her department performed and told me that it wasn't very useful. She wanted to use it for staffing purposes, and the wide limits were discouraging to the point of being ridiculous.
I know that a lot of you have access to software with the nine Western Electric special-cause tests programmed in. In this case, none of them were triggered. Therefore, at least so far, it's a common-cause process. In addition, the data "pass" a normality test with a p-value of 0.092. Lots of statistics. So the subsequent action is…?
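For readers who want to reproduce this kind of baseline analysis, here is a minimal sketch in Python using made-up counts (the supervisor's actual data aren't reproduced here). It applies the standard individuals (XmR) chart arithmetic and a Shapiro-Wilk normality check; numpy and scipy are my assumptions, as are the chart constant and test choice, which may differ from the software used in the class.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for five weeks of Mon-Fri daily procedure counts;
# these values are made up, not the supervisor's data.
counts = np.array([72, 55, 61, 48, 35,
                   80, 63, 58, 51, 40,
                   76, 60, 66, 45, 38,
                   85, 57, 62, 49, 33,
                   78, 59, 64, 47, 36])

center = counts.mean()
avg_mr = np.abs(np.diff(counts)).mean()   # average moving range |x_i - x_(i-1)|

# Individuals (XmR) chart limits: center +/- 2.66 * average moving range
ucl = center + 2.66 * avg_mr
lcl = center - 2.66 * avg_mr
print(f"Center {center:.1f}, limits [{lcl:.1f}, {ucl:.1f}]")

# One common normality check (Shapiro-Wilk); the class's software may use another test
w_stat, p_value = stats.shapiro(counts)
print(f"Normality test p-value: {p_value:.3f}")
```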
In situations like these, we can lose credibility as quality professionals if we're not careful. People just roll their eyes at the wide limits as we try to explain that their process is perfectly designed to have this variation, and that they're going to have to tell management they need a new process. At which point our listeners roll their eyes again and say, "Thanks," while muttering under their breath, "--for nothing!"
As I mentioned in last month's column ("It's Time to Ignore the Traffic Lights," July 2005), there are three common-cause strategies in addition to special-cause strategies: process stratification, process dissection and experimentation. They should be approached in that order. The typical response, however, is to jump right to experimentation (i.e., "change the process"). I shall begin by addressing stratification.
Sorry to sound like a broken record, but has anyone thought of asking, "How were these data collected?"
A lot of Six Sigma training emphasizes the need for more frequent data collection on processes being studied. I agree. But this can cause additional problems in interpretation if one isn't careful.
When I asked the lab supervisor at my class how these numbers were obtained, she told me that they represented five weeks of daily procedure counts, and that the lab was closed on weekends. So, it was five weeks of Monday-through-Friday data. Does that insight help?
I shall talk about another version of stratification next month, but in this case, couldn't one stratify (i.e., separate) the data by day of the week? Does the chart below provide the necessary insight?
I've found, especially in cases like this, that a stratified histogram comparing the values by the days of the week can be quite useful.
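Here is one way to build such a stratified histogram, again using the made-up counts from the earlier sketch; pandas and matplotlib are my choices for illustration, not what the class used.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Same hypothetical Mon-Fri counts as in the earlier sketch
counts = [72, 55, 61, 48, 35, 80, 63, 58, 51, 40,
          76, 60, 66, 45, 38, 85, 57, 62, 49, 33,
          78, 59, 64, 47, 36]
days = ["Mon", "Tue", "Wed", "Thu", "Fri"] * 5

df = pd.DataFrame({"day": days, "count": counts})

# Overlay one histogram per weekday so the day-to-day shift is visible
fig, ax = plt.subplots()
for day, grp in df.groupby("day", sort=False):
    ax.hist(grp["count"], bins=range(20, 100, 10), alpha=0.5, label=day)
ax.set_xlabel("Procedures per day")
ax.set_ylabel("Frequency")
ax.legend(title="Day of week")
plt.show()
```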
Thus, there's a "hidden" special cause by day of the week in these data: Mondays tend to be high, Fridays tend to be low and Tuesdays through Thursdays are in the middle. This special cause has rendered the initial control chart and normality test invalid.
How does telling the staff to anticipate 11 to 116 procedures every day compare to telling them that Monday will have 50 to 103 procedures, Tuesday through Thursday will have 40 to 80 and Friday will have 23 to 63?
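To see where day-specific ranges like those come from, here is a minimal sketch that computes individuals-chart limits within each weekday stratum, still using the made-up counts from the sketches above. The numbers it prints will not match the column's figures, which come from the supervisor's real data; the point is the within-day calculation.

```python
import numpy as np
import pandas as pd

counts = [72, 55, 61, 48, 35, 80, 63, 58, 51, 40,
          76, 60, 66, 45, 38, 85, 57, 62, 49, 33,
          78, 59, 64, 47, 36]                      # hypothetical values
days = ["Mon", "Tue", "Wed", "Thu", "Fri"] * 5     # five closed-on-weekend weeks

df = pd.DataFrame({"day": days, "count": counts})

for day, grp in df.groupby("day", sort=False):
    x = grp["count"].to_numpy()
    center = x.mean()
    avg_mr = np.abs(np.diff(x)).mean()             # moving range within the stratum
    lcl, ucl = center - 2.66 * avg_mr, center + 2.66 * avg_mr
    print(f"{day}: expect roughly {lcl:.0f} to {ucl:.0f} procedures")
```

Computing the moving ranges within each day, rather than across the whole series, is what keeps the Monday-to-Friday shift from inflating the limits.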
Understanding the underlying sources of variation by asking how the data were collected allows us to make better predictions. As Donald Wheeler said often in this very column, "The purpose of charts isn't to have them but to use them!"
Any good statistical analysis will always lead to the next question. The best advice I can give for any quality improvement effort is to be relentless in understanding the variation in a situation, which leads us to ask, "What can I ultimately predict?"
Davis Balestracci is a member of the American Society for Quality and the Association for Quality and Participation. He previously served as chair of the statistics division of ASQ. His book, Quality Improvement: Practical Applications for Medical Group Practice (Center for Research in Ambulatory Health Care Administration, 1994), is in its second edition. Visit his Web site at www.dbharmony.com.