This is the final column in the debate between Donald Wheeler and Forrest Breyfogle on whether or not to transform data prior to analysis. Because the debate started with Wheeler's article "Do You Have Leptokurtophobia?" we are letting him have the last word on the topic.
The articles following Wheeler's first story were:
Breyfogle: "Non-normal data: To Transform or Not to Transform"
Wheeler: “Transforming the Data Can Be Fatal to Your Analysis”
Breyfogle: “NOT Transforming the Data Can Be Fatal to Your Analysis”
--Editor
…
Comments
Oh, stop already!
This whole back and forth has turned into a cat fight, and nothing more. While it was amusing for a while, it has grown tedious, and I'd like to read about something other than two supposed professionals acting very unprofessionally and refusing to hear the other person's view.
I find the exchange useful
I find the exchange useful and enlightening. Calling two well-known professionals unprofessional is.. well... unprofessional. Dr. Wheeler's track record on the practical use of SPC is well-proven. I have used his teachings for many years with great results. "The proof is in the pudding"!!!
Steven J. Moore
Dir. Quality Improvement Systems
Wausau Paper Corp.
To transform or not
Thanks for the good articles. It only took me around 20 years into my career to realize the futility of transforming one's data. I'm glad a lot of my clients are patient.
IMHO this debate has brought
IMHO this debate has brought out the best in both experts...and has helped us all out. In particular, Wheeler's systematic summary of his various articles over the years is VERY helpful, not only as a handy reference, but also as a storyline of how they all meld together to help folks like us actually understand process behavior charts, and implement them.
A suggestion for another article: when to use a run chart vs. when to use a process behavior chart? E.g., can we still use a run chart to analyze data that violate the rational subgrouping requirement of the process behavior chart? When can we not use either? And how about a table of common metrics that are analyzed, and which of those violate the rational subgrouping requirement?
I think a well-defined article on that subject would enlighten many.
Excellent Discussion
I thoroughly enjoyed this back and forth discussion on what I consider essential statistics for manufacturing. I laughed, I learned, and I looked forward to the next article. Dr. Wheeler has done a fantastic job of raising the level of understanding of applied statistics to the work place. There are so many low-hanging fruits to be harvested in the manufacturing orchard by utilizing the simple and unconfounded methods he has taught over the years. The business world is complicated enough with its uncertainties. To be able to assess and address variation in a simple and cost-effective manner is key to surviving. Thanks for the great summary of past articles to help highlight the path through the trees.
Strategy
There is no doubt that the 'eye' is a powerful analysis tool for graphical pictures, in that very subtle changes can be discerned that would be lost in a table of numbers. The crux of the issue as I see it comes from trying to figure out how to convey to execs adequate discrimination between routine variation and alarm situations, irrespective of the transform question.
Breyfogle's data in his 'Figure 1' in the prior article does indeed contain a mix of change types, lines, and shifts, and it begs the question of aggregation and transform. But this is a very real situation. Somewhere along the line, these data have to be rolled up to the executives (in the proverbial report-card approach). What strategy can I use to state GOOD/BAD to execs in as few slides as possible?
--- Breyfogle proposed a single, aggregated chart to tell the exec that all is "Good/Bad" without handing over 50 charts as 'background' OR reducing everything to a 'Red or Green' light. Further, I think he was trying to identify normal variation so as to keep the Red/Green lights from switching too often on false alarms. Breyfogle's main point (as I saw it) is this: suppose that AFTER all the detailed analysis of each of the lines/changes/shifts, all the charts indicate that indeed ALL the processes are stable and in control. Breyfogle suggests developing an aggregate chart to represent a single, rolled-up view of control. This simple behavior chart portrays the 1,000-word snapshot of the process and conveys a more complete picture. Hence, the aggregate is intended to show a better view of normal variation that many can relate to -- rather than Red/Yellow/Green colors, which provide no depth.
Unfortunately, the subtle differences due to the different mfg lines/change types, etc., did cause an inadvertent 'lognormal' relationship and distort the chart. (A lognormal-looking shape can be expected when underlying processes with widely differing means and variances are merged.) So, Breyfogle chose to transform to convey stability, and hence to help convey that the variations are indeed normal random fluctuations.
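That mixing effect is easy to reproduce. A minimal simulation (with made-up stream means and spreads, not Breyfogle's actual data) shows how merging stable streams running at very different levels produces a right-skewed, lognormal-looking aggregate even though each stream is well behaved on its own:

```python
# Sketch: merging process streams with very different means and spreads.
# Hypothetical figures for illustration only -- not Breyfogle's data.
import numpy as np

rng = np.random.default_rng(1)

# Three stable streams, each roughly normal on its own.
streams = [
    rng.normal(loc=10.0, scale=2.0, size=600),    # high-volume line
    rng.normal(loc=50.0, scale=10.0, size=300),   # mid-volume line
    rng.normal(loc=200.0, scale=40.0, size=100),  # low-volume line
]
merged = np.concatenate(streams)

def skewness(x):
    """Plain moment-based skewness coefficient."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

for i, s in enumerate(streams, 1):
    print(f"stream {i}: skewness = {skewness(s):+.2f}")   # each close to 0
print(f"merged  : skewness = {skewness(merged):+.2f}")    # strongly positive (long right tail)
```

The long right tail comes from the mixing itself, not from any single line misbehaving, which is why transforming the merged chart into apparent calm says little about the individual processes.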
What you lose are the known problems: transforms distort the data; does the transformed chart of the merged data (now with the "appearance" of control) really mean that the processes are in control? How much of a change will be required before we see that action has to be taken (how sensitive is it to alarm conditions)? Are we missing problems? And the subtle information that is usually seen in the raw data is lost. BUT he proposes that the DIR/VP exec gets a short 'summary' that says that all is GOOD or BAD.
--- Wheeler proposes to look at the raw data and use a simple, nonparametric 3-sigma rule for control. However, this has a known false-alarm rate of 2-3%. Should we tell operations that they will be responding to 2-3% non-issue alarms due to the '3-sigma' rule? Touting that <5% is robust may not answer the question. He just needs to build that into his cost model. Secondly, the question is strategy: should this 'rule' be applied to any chart of data, including an aggregated chart? (How does Wheeler propose we summarize to execs?) Again, would this rule also result in a 2-3% (potential) false-alarm rate that becomes visible at the exec level? I wonder what they would say.
Wheeler does point out adequately all the shortcomings of transform-and-chart, etc. No doubt there. But I wonder if he would share (or link us to a prior article we missed) a strategy for taking this mess of detail charts and aggregating it into a good visual summary that tells all "GOOD/BAD" -- without an inherent 2-3% risk. Perhaps Wheeler can clarify this aspect in a 'Comment' on this paper.
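For context on where a per-point alpha risk in that range can come from, here is a rough simulation of my own (not from either author): the empirical false-alarm rate of 3-sigma XmR limits on in-control data, once for a normal process and once for a heavily skewed (exponential) one:

```python
# Sketch: empirical per-point false-alarm rate of 3-sigma (XmR) limits
# for in-control data.  My own illustration, not either author's analysis.
import numpy as np

rng = np.random.default_rng(7)

def false_alarm_rate(draw, n_points=100, n_series=2000):
    """Fraction of in-control points falling outside XmR natural process limits."""
    alarms = 0
    for _ in range(n_series):
        x = draw(n_points)
        mr_bar = np.mean(np.abs(np.diff(x)))             # average moving range
        center = x.mean()
        lo, hi = center - 2.66 * mr_bar, center + 2.66 * mr_bar
        alarms += np.sum((x < lo) | (x > hi))
    return alarms / (n_points * n_series)

print("normal data     :", round(false_alarm_rate(lambda n: rng.normal(size=n)), 4))
print("exponential data:", round(false_alarm_rate(lambda n: rng.exponential(size=n)), 4))
```

With normal data the rate lands well under 1% per point; with the exponential example it lands in the low single digits, which seems to be the worst-case neighborhood the 2-3% figure above refers to.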
Understanding Variation
In Dr. Wheeler's book "Understanding Variation: The Key to Managing Chaos," he takes a "typical" management report that looks for large percent differences (as indications of special causes) and re-analyzes the data in context using XmR charts. He demonstrates that by using this method the real message behind the data can be discovered, and how misguided traditional management reports are.
I don't think rolling up operational data onto a control chart can solve the inherent problem of mixing apples with oranges. Let's not present things so simply for management that the analysis loses credibility.
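For readers who haven't seen that approach, the XmR calculation behind it is short. Here is a bare-bones sketch with made-up monthly figures (the 2.66 and 3.268 factors are the standard XmR constants):

```python
# Sketch: XmR (individuals and moving range) limits for a monthly report metric.
# The monthly values are invented for illustration.
monthly = [52, 47, 55, 60, 49, 53, 58, 51, 56, 62, 54, 50]

moving_ranges = [abs(b - a) for a, b in zip(monthly, monthly[1:])]
x_bar = sum(monthly) / len(monthly)            # center line
mr_bar = sum(moving_ranges) / len(moving_ranges)

print(f"center line           : {x_bar:.1f}")
print(f"natural process limits: {x_bar - 2.66 * mr_bar:.1f} to {x_bar + 2.66 * mr_bar:.1f}")
print(f"upper range limit     : {3.268 * mr_bar:.1f}")
```

Months that fall outside the natural process limits are the ones worth a management question; month-to-month percent swings inside the limits are routine variation dressed up as news.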
Rich
It's an issue of X or Y. Are they both important?
Y = f(X); i.e., the output of a process is a function of its inputs and the process itself. Wheeler has been focusing on the SPC tracking of the X's, while in my articles I have been focusing on the Y's. In these articles I also provided a viable way to track so-called "report-card" data sets from high-level performance areas that aggregate many processes. To ignore this type of situation seems shortsighted relative to true business needs. Wheeler appears not to see the value in evaluating this type of data. I wonder if his errant conclusion that you will never see a signal in that type of data comes from opinion and hope without a lot of actual experience.

Also, what about process capability statements for a process's "Y" where common-cause variability is unsatisfactory relative to customer needs? I do not understand why Wheeler did not address this important point of mine in his articles. Wheeler is an SPC icon, but I am not talking about SPC; I am talking about BPC, or business performance charting. In my articles I used both randomly generated and real data sets, which were picked apart but still fundamentally describe how an individuals chart of aggregated data can be part of a business's decision-making process. I have seen these tools identify clear business process signals when used on aggregated data -- it works. These tools can be used to examine the business as a whole and to identify where data drill-down can provide additional understanding for improvement efforts so that the business as a whole benefits; i.e., they go beyond Lean Six Sigma and the Balanced Scorecard.
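To make the capability point concrete, here is a minimal sketch (hypothetical spec limits and data, not from either author's examples) of the kind of statement in question: a perfectly stable "Y" whose common-cause spread is still not good enough for the customer.

```python
# Sketch: a process-capability statement for a "Y" against customer specs.
# Spec limits and data are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=10.4, scale=0.35, size=250)   # stable but off-target output
lsl, usl = 9.0, 11.0                             # assumed customer spec limits

mean, sd = y.mean(), y.std(ddof=1)
ppk = min(usl - mean, mean - lsl) / (3 * sd)     # standard Ppk calculation
frac_out = np.mean((y < lsl) | (y > usl))

print(f"mean = {mean:.2f}, sd = {sd:.2f}, Ppk = {ppk:.2f}")
print(f"observed fraction outside spec = {frac_out:.3%}")
```

A process behavior chart would show this output as predictable, yet the capability figure says the routine variation itself is the problem, which appears to be the distinction being pressed here.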
Understanding of variation
The key point is that the conclusions drawn from the raw data x using SPC must be consistent with those drawn from f(x) (BPC).
The question is whether it is possible to get the same information from the variation of x and the variation of f(x)?
From the foregoing articles it can be concluded that by transforming the data to f(x), quite a lot of information about the raw-data x variation can be lost, so finally coming to a consistent variation assessment between x and f(x) will be difficult if not impossible...
Also, just as correct sampling is crucial for statistics, correct rational subgrouping is of crucial (!!) importance for a correct SPC analysis of the raw data x. I fully agree with a comment below that in the literature more focus should be given to rules for correct rational subgrouping. SPC is impossible without a correct estimate of the background noise; with an incorrect noise estimate the SPC charts will yield wrong signals, just as bad sampling does in statistics.
Rational subgrouping is not an easy exercise; it requires considerable experience, excellent process knowledge, cross-functional input, and repeated in-depth discussions with the process improvement team.
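To make the noise-estimation point concrete, here is a small sketch with invented numbers: a process whose mean shifts halfway through, where the within-subgroup range estimate of sigma recovers the true background noise while the lumped-together standard deviation does not:

```python
# Sketch: why rational subgrouping matters for estimating background noise.
# Invented data: a stable process whose mean shifts halfway through.
import numpy as np

rng = np.random.default_rng(11)
first_half = rng.normal(loc=100.0, scale=2.0, size=(10, 4))   # 10 subgroups of 4
second_half = rng.normal(loc=106.0, scale=2.0, size=(10, 4))  # same noise, shifted mean
subgroups = np.vstack([first_half, second_half])

d2 = 2.059  # bias-correction constant for subgroups of size n = 4

within_sigma = np.ptp(subgroups, axis=1).mean() / d2   # sigma from the average subgroup range
overall_sigma = subgroups.ravel().std(ddof=1)          # sigma from lumping all 80 values together

print(f"within-subgroup sigma estimate: {within_sigma:.2f}")  # close to the real noise of 2
print(f"overall standard deviation    : {overall_sigma:.2f}") # inflated by the mid-stream shift
```

Limits built from the within-subgroup estimate (about 2 here) would flag the shift; limits built from the inflated overall figure would be wide enough to hide it, which is one form of the 'wrong signals' problem described above.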
There are two types of error
There are two types of error: alpha and beta. Alpha error is treating common-cause variation as if it were special cause and reacting to it. This leads to "tampering" with the system. Beta error is treating special-cause variation as if it were common cause and not responding to it. YOU CANNOT MINIMIZE BOTH TYPES OF ERROR. As you minimize one error type, the other increases. A good analogy is our system of criminal justice. We want to minimize the chance of convicting innocent people, so the laws are written to protect them. As a result, the chance of not convicting guilty people is increased. You cannot minimize both.
The bottom line: A 2-3% false signal (Alpha error) risk is very acceptable.
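As a rough illustration of that trade-off (my own numbers, assuming normally distributed values and a sustained one-sigma shift), tightening or widening the limits simply moves risk from one error to the other:

```python
# Sketch: alpha/beta trade-off for Shewhart-style limits on a normal process.
# Assumes normally distributed individual values and a sustained 1-sigma shift.
from scipy.stats import norm

shift = 1.0  # size of the special-cause shift, in sigma units

for k in (2.0, 2.5, 3.0):                              # limit width in sigma units
    alpha = 2 * norm.sf(k)                             # false-alarm chance per in-control point
    beta = norm.cdf(k - shift) - norm.cdf(-k - shift)  # chance a shifted point stays inside the limits
    print(f"{k:.1f}-sigma limits: alpha = {alpha:.4f}, beta = {beta:.4f}")
```

Going from 2-sigma to 3-sigma limits drops alpha from about 4.6% to about 0.3% per point, while beta for a one-sigma shift climbs from roughly 84% to about 98% -- exactly the see-saw described above.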
S. Moore
Sorting out false alarms
By further sorting out false signals, operations need not necessarily be bothered with this 2-3% false-alarm rate.
Every time a new data point shows up on the control chart, signals should be assessed by a cross-functional team (technical department, operations, engineering...). This way false alarms are filtered out and the rate is strongly reduced, to below 1%.
It is better to live with some 1% false alarms than to have a >5% chance of rejecting real signals; in the end, like a boomerang, they will come back to you...