Quality Digest
December 21, 2024
Departments: SPC for the Real World

Davis Balestracci

Statistics Don’t “Prove” Anything...

… but PARC analysis generally does


When will academia (and “belts” and “consultants”) wake up and realize that, try as they might with their “captive” audiences from whom they are extracting their revenge for being beaten up on the playground 25 years ago, most people (sometimes including the aforementioned teachers) will not correctly use the statistics they’re taught? People don’t need statistics… they need to solve their problems.

The famous applied-science statistician J. Stuart Hunter coined the term PARC to characterize much of what is being taught: a “practical accumulated records compilation” on which one performs a “passive analysis by regressions and correlations” and then, to get it published (the data having been tortured until they’ve confessed), one does the “planning after the research is already completed.” With the current plethora of friendly computer packages that have delighted their customers, I have also coined the characterization “profound analysis relying on computers.”

Figure 1 displays plots of four famous data sets developed by F. J. Anscombe. All four yield the identical regression equation and identical summary statistics, yet the plots tell four very different stories.
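To make the point concrete, here is a minimal sketch in Python (using numpy, an assumed tool rather than anything from the original column) that fits a least-squares line to each of the four commonly published Anscombe data sets. The summary numbers come out essentially the same for all four; only the plots reveal how different the situations really are.

import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line
    r = np.corrcoef(x, y)[0, 1]              # correlation coefficient
    print(f"{name}: y = {intercept:.2f} + {slope:.3f}x, "
          f"r^2 = {r**2:.2f}, mean(y) = {y.mean():.2f}")

# Every set prints roughly y = 3.00 + 0.500x with r^2 near 0.67 --
# the tabulated numbers cannot distinguish the four plots; only the graphs can.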

There is a brilliant unpublished article titled “Statistics and Reality” by the Deming scholar and Scottish academic David Kerridge, in which he quotes Walter A. Shewhart from his classic book Economic Control of Quality of Manufactured Product (D. Van Nostrand Co. Inc., 1931):

“You go to your tailor for a suit of clothes and the first thing that he does is to make some measurements; you go to your physician because you are ill and the first thing he does is to make some measurements. The objects of making measurements in these two cases are different. They typify the two general objects of making measurements. They are:

To obtain quantitative information

To obtain a causal explanation of observed phenomena”

 

These are two entirely different purposes. For example, when I’m being fitted for a suit, I don’t expect my tailor to take my waist measurement and then ask, “So… does your mother have diabetes?” The tailor doesn’t care about the genetic process that produced my body; he or she measures it and makes my suit.

Here is an example that I remember vividly from a newspaper article more than 10 years ago. I was so angry that my newspaper was shaking (my daughter noticed it from across the room). This is exactly the kind of (alleged) research and attitude that I’m trying to debunk. The article was titled “Whites May Sway TV Ratings,” and it read:

“… [An] associate professor and Chicago-based economist reviewed TV ratings of 259 basketball games…. They attempted to factor out all other variables such as the win-loss records of teams and the times games were aired…. The economists concluded that every additional 10 minutes of playing time by a white player increases a team’s local ratings by, on average, 5,800 homes.”

I’m guessing that this is a spurious result similar to the one in the lower right-hand graph of the figure. Also, did you know that there are three diagnostics that need to be done with any regression… even if it has a good R-squared and a statistically significant t-test?
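The column doesn’t name the three diagnostics, so the sketch below is only an illustration under an assumption: it shows three residual checks that are commonly taught alongside regression (residuals vs. fitted values, a normal probability plot of the residuals, and the residuals in observation order), written in Python with numpy, matplotlib, and scipy.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def residual_diagnostics(x, y):
    # Fit the least-squares line and compute residuals.
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    fitted = intercept + slope * x
    resid = y - fitted

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))

    # 1. Residuals vs. fitted values: look for curvature or funnel shapes.
    axes[0].scatter(fitted, resid)
    axes[0].axhline(0, linestyle="--")
    axes[0].set(title="Residuals vs. fitted", xlabel="fitted value", ylabel="residual")

    # 2. Normal probability plot: look for heavy tails or outliers.
    stats.probplot(resid, plot=axes[1])
    axes[1].set_title("Normal plot of residuals")

    # 3. Residuals in observation order: look for drift or time-related patterns.
    axes[2].plot(resid, marker="o")
    axes[2].axhline(0, linestyle="--")
    axes[2].set(title="Residuals in observation order", xlabel="observation")

    fig.tight_layout()
    plt.show()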

The objective of TV ratings is to find out how many people watched a particular show (i.e., “making a suit”). Is Nielsen trying to determine racial viewing patterns of basketball games (i.e., causal explanation)? I didn’t think so. Interestingly enough, how would one design such a study if one wanted to test this theory suggested by the regression? (Answer: Not very easily.)

When “data for a suit” (i.e., most tabulated statistics) are used to make a causal inference, that’s asking for trouble. I highly recommend J. L. Mills’ article in the Oct. 14, 1993, issue of The New England Journal of Medicine, titled “Data Torturing.” This is an insightful explanation of why a lot of published research is, in essence, PARC spelled backwards (which was Hunter’s point).

When are academics (and all the “black belts” and “consultants”) going to stop torturing their students as well?

I’ll be sharing more of David Kerridge’s ideas in my next column.

About the author
Davis Balestracci is a member of the American Society for Quality and past chair of its statistics division. He would love to wake up your conferences with his dynamic style and unique, entertaining insights into the places where process, statistics, and quality meet. Visit his Web site at www.dbharmony.com.