Quality Digest      
Departments: SPC for the Real World

  
   
Davis Balestracci

The Wisdom of David Kerridge--Part 2

It’s dangerous to pretend to be more certain than we are.

 

 

Analytic statistical methods contrast strongly with what’s normally taught in most statistics textbooks, where the problem is described as one of “accepting” or “rejecting” hypotheses. In the real world, we must look for repeatability over many different populations. During the 1920s, Walter A. Shewhart added the new concept of statistical control, which defines repeatability over time--i.e., sampling from a process, rather than a population.
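To make "sampling from a process, rather than a population" concrete, here is a minimal sketch in Python of the kind of calculation behind an individuals (XmR) chart, one common way of operationalizing Shewhart's idea of statistical control. The measurements are invented, and the variable names are illustrative; 2.66 is the standard scaling constant applied to the average moving range of successive values.

# Minimal sketch: judging "repeatability over time" with an XmR chart.
# The data are invented; 2.66 is the standard constant for converting
# the average moving range into natural process limits.

values = [12.1, 11.8, 12.4, 12.0, 11.7, 12.3, 12.6, 11.9, 12.2, 12.5]

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper = mean + 2.66 * avg_mr   # natural process limits
lower = mean - 2.66 * avg_mr

signals = [v for v in values if v > upper or v < lower]
print(f"limits: {lower:.2f} to {upper:.2f}; signals: {signals}")

A process that stays within such limits, with no unusual patterns, is exhibiting the repeatability over time that Shewhart had in mind; the question of prediction then becomes whether that behavior will continue.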

For example, a drug’s effectiveness may depend on the age of the patient, previous treatment, or the stage of the disease being treated. Ideally, we want one treatment that works well in all foreseeable circumstances, but we might not be able to get it. Once we recognize that the aim of a study is to predict, we can see which range of possibilities is most important. We design studies not only to cover a wide range of circumstances, but also to make the “inference gap” as small as possible.

By the inference gap, we mean the gap between the circumstances under which the observations were collected and the situation in which the treatment will be used. This gap must be bridged by assumptions--in this case, based on theoretical medical knowledge--about the importance of the differences.

Suppose that we compare two antibiotics in the treatment of an infection. We conclude that one did better in our tests. How does that help us? Well-planned and well-designed experiments are rarely possible in emergencies, so the gap may be large.

Suppose that all of our testing was done in one hospital in New York in 1993, but we want to use the antibiotic in Africa in 1997. It’s quite possible that the best antibiotic in New York isn’t the best in a refugee camp in Zaire. The strains of bacteria may be different, and the problems of transport and storage certainly are. If the antibiotic is freshly made and stored in efficient refrigerators, it may be excellent. It may not work at all if transported to a camp with poor storage facilities.

Even if the same antibiotic works in both places, how long will it go on working? This will depend on how carefully it’s used and how quickly resistant strains of bacteria build up. There are two sampling issues involved here:

Scenario 1. We often use random sampling in analytic studies, but it’s not the same as random sampling in an enumerative study. For example, let’s take a group of patients who attend a particular clinic and suffer from the same chronic condition. We then choose at random, or in some more complicated way involving random numbers, who is to get which treatment. The resulting sample isn’t a random sample of the patients who will be treated in the future at that same clinic. Still less is it a random sample of patients who will be treated in any other clinic.

In fact, the patients who will be treated in the future will depend on choices that we haven’t yet made. Those choices will depend on the results of the study that we’re doing, and on studies by other people carried out in the future.
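As a small Python sketch of the randomization described in Scenario 1 (the patient list and treatment labels are invented), the random numbers decide only who among the enrolled patients gets which treatment; they say nothing about anyone treated later or elsewhere.

# Illustrative sketch of Scenario 1: randomly assigning two treatments
# among the patients actually enrolled at one clinic. Names are made up.
import random

enrolled = ["patient_%02d" % i for i in range(1, 21)]
random.shuffle(enrolled)
half = len(enrolled) // 2

assignment = {p: "A" for p in enrolled[:half]}
assignment.update({p: "B" for p in enrolled[half:]})

# The randomization covers only these 20 enrolled patients; it is not
# a random sample of the patients who will be treated in the future.
print(assignment)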

Scenario 2. Suppose that we want to know which of two antibiotics is better in treating typhoid. We can’t take a random sample of all the people who will be treated in the future. There’s no readily available “bead box” of people waiting to be sampled because we don’t know who will get typhoid in the future. We have no choice but to use the mathematics of random sampling, but this is a different kind of problem: sampling from an imaginary population. The statistician R.A. Fisher called it “a hypothetical infinite population.”

The practical difference, as Fisher saw it, is that we must not rely on what happens in any one experiment; we must repeat the experiment under as many different circumstances as we can. If the results under different circumstances are consistent, believe them. If they disagree, think again.
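A toy illustration of that advice, with invented cure rates from three hypothetical sites: compute the same comparison in each circumstance and ask whether the direction of the difference holds up.

# Hypothetical sketch of Fisher's advice: look at the same comparison
# under several circumstances and ask whether it is consistent.
# The sites and cure rates are invented for illustration.

results = {
    "site_1": {"A": 0.82, "B": 0.74},
    "site_2": {"A": 0.79, "B": 0.70},
    "site_3": {"A": 0.66, "B": 0.71},   # here B did better
}

differences = {site: r["A"] - r["B"] for site, r in results.items()}
consistent = (all(d > 0 for d in differences.values())
              or all(d < 0 for d in differences.values()))

print(differences)
print("consistent direction:", consistent)   # if False, "think again"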

So, with an analytic study, there are two distinct sources of uncertainty:

1. Uncertainty due to sampling, just as in an enumerative study. This can be expressed numerically by standard statistical theory.

2. Uncertainty because we’re predicting what will happen at some time in the future to some group that’s different from our original sample. This uncertainty is “unknown and unknowable.” We rarely know how the results we produce will be used, so all we can do is to warn the potential user of the range of uncertainties that will affect different actions.

 

The latter uncertainty, especially in management circumstances, will usually be an order of magnitude greater than the uncertainty due to sampling.
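To see the contrast, here is a brief sketch (with invented counts) of the first kind of uncertainty only: a conventional 95-percent interval for a difference in cure rates. It quantifies sampling variation and nothing else.

# Minimal sketch of uncertainty due to sampling alone: a conventional
# 95% interval for a difference in two cure rates. Counts are invented.
import math

cured_a, n_a = 41, 50
cured_b, n_b = 33, 50

p_a, p_b = cured_a / n_a, cured_b / n_b
diff = p_a - p_b
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"difference: {diff:.2f}, 95% interval: ({low:.2f}, {high:.2f})")

# Nothing in this interval speaks to the second kind of uncertainty:
# whether the same difference will hold next year, in a different
# hospital, or under different storage conditions.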

People want tidy solutions and feel uncomfortable with the “unknown and unknowable.” Of course, we’d rather be certain if we can, and it’s dangerous to pretend to be more certain than we are. The result in most statistics courses has been a theory in which the unmeasured uncertainty has just been ignored.

About the author
Davis Balestracci is a member of the American Society for Quality and past chair of its statistics division. He would love to wake up your conferences with his dynamic style and unique, entertaining insights into the places where process, statistics, and quality meet. Visit his web site at www.dbharmony.com.