Time to Declare War on Surveys: Part 1
I once did some work for a medical center that had, at every one of its 20-plus clinics, a table with a sign stating, “Tell Us How Your Visit Was.” As patients left the clinic, they could, if they chose, rate the medical center on a one-to-five (worst to best) scale in nine categories.
The cards were collected at the end of each month, sent to an outside firm for objective processing (a serious expense), and the monthly stats were then given to the clinic administrators, who were held accountable for the results. Any clinic with a 3-percent or larger drop from the previous month had to explain why and subsequently submit an action plan for improving the score the next month.
Figure 1 shows one clinic’s “overall satisfaction” results for 19 months, and figure 2 shows the resulting control chart, which the outside firm didn’t supply; the firm merely compared each month with the previous one and looked for “short-term trends.”
Using standard control chart calculations (with the median moving range), one month could differ from the preceding month by as much as 0.48, or 11 percent of the mean. Notice that all the data lie within the limits of 3.92 to 4.71. This is a stable process, at least for the people who choose to fill out surveys. And what percentage of patients is that?
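If you’d like to check the arithmetic yourself, here is a minimal sketch in Python. The 19 scores are placeholders, since the actual figure 1 data aren’t reproduced here; 3.14 and 3.87 are the standard scaling factors for individuals-chart limits based on the median moving range.

```python
from statistics import mean, median

# Hypothetical monthly "overall satisfaction" scores; the actual
# figure 1 data aren't reproduced here, so these are placeholders.
scores = [4.3, 4.4, 4.2, 4.5, 4.1, 4.3, 4.6, 4.2, 4.4, 4.3,
          4.5, 4.1, 4.4, 4.3, 4.2, 4.6, 4.3, 4.4, 4.2]

xbar = mean(scores)

# Moving ranges: absolute differences between consecutive months.
moving_ranges = [abs(b - a) for a, b in zip(scores, scores[1:])]
mr_median = median(moving_ranges)

# Individuals-chart limits from the median moving range
# (standard scaling factor 3.14).
lower, upper = xbar - 3.14 * mr_median, xbar + 3.14 * mr_median

# Largest month-to-month difference still consistent with common
# cause: the upper range limit, 3.87 times the median moving range.
max_consecutive_diff = 3.87 * mr_median

print(f"limits: {lower:.2f} to {upper:.2f}; "
      f"max month-to-month difference: {max_consecutive_diff:.2f} "
      f"({100 * max_consecutive_diff / xbar:.0f}% of the mean)")
```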
Think about it. Why should anything have changed? The current process treats common cause as special cause: it processes a biased sample of data and exhorts people to improve based on arbitrary interpretations.
Figure 3 shows a control chart of the month-to-month percentage change. Notice the average. What if management felt that things shouldn’t change by more than 5 to 10 percent?
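Continuing the sketch above, here is one plausible way to compute figure 3’s statistic, assuming the plotted value is the absolute percentage change from one month to the next:

```python
# Month-to-month percentage change, using the placeholder scores
# from the previous sketch.
pct_change = [100 * abs(b - a) / a for a, b in zip(scores, scores[1:])]

# The average of these changes, i.e., the centerline that a
# "shouldn't change more than 5 to 10 percent" rule would be
# arbitrarily second-guessing.
avg_change = sum(pct_change) / len(pct_change)
print(f"average month-to-month change: {avg_change:.1f}%")
```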
This kind of effort is totally worthless, and it is rampant.
The title of one of my favorite survey articles says it all: “Are your surveys only suitable for wrapping fish?” (K. Miller, Quality Progress, December 1998). The author concludes that many times:
• The wrong people are surveyed.
• The wrong questions are asked.
• The questions are asked the wrong way.
• The questions are asked at the wrong time.
• Satisfaction and dissatisfaction are assumed to be equally important.
• Those who didn’t buy or use the product or service aren’t surveyed.
• Survey results don’t direct improvement activity.
Taking data simply because they are easily available is Deming’s “simple… obvious… wrong” approach, and an expensive one at that.
Don’t get me wrong: surveys can be useful. After interviews and focus groups, a survey can test the more specific conclusions drawn from the data gathered face to face. Even that has its issues.
To be continued.
Davis Balestracci is a member of the American Society for Quality and past chair of its statistics division. He would love to wake up your conferences with his dynamic style and unique, entertaining insights into the places where process, statistics, and quality meet. Visit his web site at www.dbharmony.com.