Some time back, I was working as a Six Sigma Black Belt within the customer service department of a credit card organization. Our processes included responding to customer emails and mail correspondence.
One fine day, the general manager of my process called to say we were getting additional business, and that she was about to sign the statement of work (SOW) in the next five minutes. She added that one of the clauses in the SOW required us to meet a quality score of 90 percent. This was a critical service-level agreement because missing the metric would mean a heavy penalty each month. She wasn't sure whether the 90-percent target was achievable, and she didn't know whether the previous vendor had met this threshold. Meanwhile, the business owner was pushing her to sign because the SOW also stated that the previous vendor had met the 90-percent quality score comfortably.
I understood the scenario and its criticality. I asked if she could get access to the previous vendor's quality scores, month on month. She did have the data, and I received it in about two minutes.
…
Comments
A couple of questions
Interesting story...You say you had several months of data that you used for this test. Did you put it into a control chart, to see if the data exhibited any significant non-homogeneity? Without that, there's certainly no point in testing for normality, since you can't make any distributional assumptions; and there's not much point in checking for normality before doing a t-test anyway; t-tests are very robust. Joel Smith from Minitab recently demonstrated that, very convincingly, in a webinar. He tested 10,000 samples of sizes from 10-100 drawn from each of many distributions, including some very skewed distributions. The very skewed distributions had somewhat higher Type I error rates, but not by much...and once sample sizes got to about 30, the error rates were only .07, or .02 above expected.
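A minimal sketch of the kind of robustness simulation described above, for anyone who wants to try it: draw many samples from a skewed distribution whose true mean equals the null value, run a one-sample t-test on each, and count false rejections at alpha = 0.05. The exponential distribution, sample sizes, and seed here are illustrative choices, not details from the webinar.

```python
# Simulate the Type I error rate of the one-sample t-test under a
# strongly right-skewed distribution (exponential, true mean = 1.0).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
alpha = 0.05
n_trials = 10_000

for n in (10, 30, 100):
    rejections = 0
    for _ in range(n_trials):
        sample = rng.exponential(scale=1.0, size=n)  # H0 is actually true
        _, p = stats.ttest_1samp(sample, popmean=1.0)
        rejections += p < alpha
    print(f"n={n:>3}: observed Type I error rate = {rejections / n_trials:.3f}")
```

If the test were perfectly calibrated, every printed rate would be near 0.05; skewness inflates it somewhat at small n, and the inflation shrinks as n grows.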
XmR Chart
Why not track the monthly data on an XmR chart and see what you get for the LCL? Is it greater than .75?
Rich DeRoeck
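A minimal sketch of the XmR (individuals and moving range) calculation Rich is suggesting, assuming continuous monthly scores. The data below are made-up placeholders, since the article's actual figures aren't shown; substitute the previous vendor's month-on-month scores.

```python
# Compute natural process limits for an XmR chart of monthly quality scores.
import numpy as np

scores = np.array([0.82, 0.79, 0.85, 0.81, 0.78, 0.84, 0.80, 0.83])  # hypothetical

x_bar = scores.mean()                      # center line for the X chart
moving_ranges = np.abs(np.diff(scores))    # |difference| of consecutive months
mr_bar = moving_ranges.mean()

# Standard XmR constant: 2.66 = 3 / d2, with d2 = 1.128 for subgroups of 2
lcl = x_bar - 2.66 * mr_bar
ucl = x_bar + 2.66 * mr_bar

print(f"X-bar = {x_bar:.3f}, mR-bar = {mr_bar:.3f}")
print(f"Natural process limits: LCL = {lcl:.3f}, UCL = {ucl:.3f}")
# Rich's question: is the LCL above 0.75? And is the UCL anywhere near 0.90?
```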
p value and interpretation
I noticed you said the p value was less than 5% and that the upper bound of the quality score was 73.8%. So you rejected the null hypothesis in favor of the alternate. If I'm understanding this example correctly, was the null (the status quo) that the quality level was >= 90% and the alternate (what you hypothesize) that the quality was then <90%? Also, was the quality score a measure of a proportion of successes or failures, or actual variable data with a unit of measure? I love this stuff, and I'm just trying to understand. Thanks, Mike White
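Reading the test the way Mike describes it, the setup might look something like this sketch: H0 that the mean quality score is at least 90 percent, H1 that it is below, with a one-sided upper confidence bound on the mean. This assumes the scores were treated as continuous data; the values below are invented stand-ins, not the article's data.

```python
# One-sided, one-sample t-test: H0: mean >= 0.90 vs. H1: mean < 0.90.
import numpy as np
from scipy import stats

scores = np.array([0.71, 0.74, 0.69, 0.73, 0.72, 0.70, 0.75, 0.68])  # hypothetical
target = 0.90
alpha = 0.05

t_stat, p_value = stats.ttest_1samp(scores, popmean=target, alternative='less')

# One-sided 95% upper confidence bound on the mean quality score
n = len(scores)
upper = scores.mean() + stats.t.ppf(1 - alpha, df=n - 1) * scores.std(ddof=1) / np.sqrt(n)

print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
print(f"95% upper confidence bound on mean quality score = {upper:.3f}")
if p_value < alpha:
    print("Reject H0: the mean quality score is credibly below 0.90")
```

On this reading, an upper bound like the article's 73.8 percent would say that even the most optimistic plausible value of the mean falls well short of the 90-percent target.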
Agree with control chart approach
I do agree with the other two posters: If you have enough data, then a control chart is a much better approach, and the upper control limit is a much better estimator of the maximum likely value without process improvement. If you only have a few months of data, then a t-test might be equivalent. But plotting time-series data in its native time order is always more insightful than a summary statistic.
Another thought is that quality scores (not knowing how this specific one is calculated) are typically not continuous data but some kind of ranking score...ordinal data at best. This is why the IMR chart is a good choice...
Score
What is "a quality score of 90 percent" ?
How is this calculated ?
What does it mean ... or does it have meaning ?