Zero defects, error-free performance, Six Sigma -- these represent three popular ways to set performance standards for organizations and individuals. Each concept has its advantages and disadvantages.
The zero defects concept became popular in 1962 when Philip Crosby, quality manager at Martin Marietta, developed a concept based on his belief that products should be defect-free when delivered to the customer. This concept, which spread rapidly through the U.S. Department of Defense, was neither a technique nor a methodology; it embodied an attitude that Crosby sought to instill in every individual. The concept focused on an individual's commitment to always meet the engineering specification.
Zero defects proved very popular, but after a brief time in the spotlight, the concept faded away. It regained popularity during the 1980s, after Crosby published Quality Is Free in 1979. The concept then extended into support areas, where it was applied not only to products but also to internal services.
Error-free performance started in the mid-1980s. From the beginning, it covered service and support groups as well as manufacturing areas. Everyone in an organization makes errors -- in manufacturing, management, engineering and personnel alike. However, only the manufacturing worker makes defects. Thus, the term "error" better met the quality needs of the late '80s and '90s, whereas "defects" reflected the '60s concentration on product quality.
The error-free concept used a very different implementation approach because it applied to everyone. Error-free performance accepted the reality that perfect work all the time wasn't practical. For one thing, many management processes weren't defined, and often the risks associated with them were considerable. Frequently, management can't afford to wait until it reaches a high confidence level -- 95 percent or greater -- before making a decision. And when it comes to personnel considerations, being right 99.9 percent of the time isn't a challenge; it's a near impossibility.
As a result, the error-free concept focused instead on extending the time interval between error occurrences. Everyone is considered an error-free performer, but all individuals operate at different error-free intervals. Some operate at an error-free interval of eight hours, others at 80 hours, still others at 800 hours.
Under the error-free concept, people strive to understand their average time interval between errors and set goals to improve those intervals. If an individual's present interval is 20 hours average between errors, he or she would set a goal to extend this average to 30 hours. When that goal is met, the individual sets another goal, creating a never-ending improvement driver with many positive milestones, or successes, along the way.
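The interval arithmetic behind this goal-setting is simple enough to sketch. In this illustration, the hours worked, the error count and the helper function are my own assumptions -- the article specifies only the 20-hour and 30-hour averages:

```python
# Sketch of the error-free-interval measurement described above.
# The sample figures (400 hours, 20 errors) are illustrative assumptions.

def mean_error_free_interval(hours_worked, error_count):
    """Average hours of error-free work between errors."""
    if error_count == 0:
        return float("inf")  # no errors observed yet
    return hours_worked / error_count

# An individual who worked 400 hours and made 20 errors averages
# 20 hours between errors.
current = mean_error_free_interval(400, 20)  # 20.0 hours

# Per the article's example, the next goal extends that average
# from 20 hours to 30 hours between errors.
goal = 30.0
```

Once the 30-hour goal is met, the same measurement repeats with a longer target interval, which is what makes the driver never-ending.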
Motorola originated the Six Sigma program during the late 1980s. The company applied process capability study concepts to set acceptable error levels throughout the organization. Six Sigma sets acceptable performance levels similar to acceptable quality levels, but with a much more stringent requirement of 3.4 errors per million entities. Obviously, this presents very difficult, if not impossible, criteria for the management team and staff personnel to comply with. Most of Motorola's support and service areas remain in a continuous improvement mode. Compliance with the Six Sigma requirement isn't mandatory, but continuous improvement is.
Basically, Motorola's Six Sigma approach requires that the ±6σ spread at any point in time be equal to or less than the tolerance for the activity performed. This represents a Cp of 2 (±3σ divided into the spec tolerance).
Don't be confused -- six sigma (0.002 defects per million) is much better than 3.4 defects per million. Motorola approaches Six Sigma by evaluating the item or individual at one point in time and allowing ±1.5σ for process drift over time. As a result, the company's objective of 3.4 errors per million is much less stringent than a Cpk of 2 (long-term 3σ divided into the specifications).
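These figures fall out of the normal distribution directly. A quick check using only the standard library (the function name is my own, not Motorola's):

```python
import math

def upper_tail_ppm(z):
    """Parts per million beyond z standard deviations, one-sided:
    1e6 * P(Z > z) for a standard normal Z."""
    return 1e6 * 0.5 * math.erfc(z / math.sqrt(2))

# A process centered in its tolerance at +/- 6 sigma: count both tails.
ppm_centered = 2 * upper_tail_ppm(6)   # about 0.002 per million

# Motorola's 1.5 sigma drift allowance moves the mean toward one
# spec limit, leaving 6 - 1.5 = 4.5 sigma to that limit; the far
# tail becomes negligible.
ppm_shifted = upper_tail_ppm(6 - 1.5)  # about 3.4 per million

# Cp compares the spec tolerance to the +/- 3 sigma process spread:
# a tolerance spanning +/- 6 sigma (12 sigma wide) gives Cp = 12/6 = 2.
cp = 12 / 6
```

The gap between 0.002 and 3.4 defects per million is exactly the cost of the ±1.5σ drift allowance, which is why the 3.4 objective is less stringent than a long-term Cpk of 2.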
What is good enough? I don't believe there is a single answer to that question. "Good enough" is defined by many factors, including what the customer wants, what it costs to improve, when the customer wants the product delivered, what the process is capable of producing and how complex the output is.
When it gets right down to it, these three standards aren't standards at all but rather open-ended objectives.
Which of the three quality objectives is best? They all have their good and bad points. Zero defects focuses each employee on an individual commitment to defect-free work. Error-free performance includes management as well as employees and applies a continuous improvement measurement to each individual's job. Six Sigma sets a specific goal that every employee perceives as a success point.
However, none of the three processes can stand alone. They must work in partnership with comprehensive improvement processes such as total quality management or total improvement management. I've used all three processes and had success with each. How many have you used, and what did you like or dislike about them? I've set up a Web site where we can exchange ideas on the subject. Please send in your thoughts. The address is www.hjharrington.com.
About the author
H. James Harrington is a principal at Ernst & Young and serves as its international quality advisor. He can be reached at 55 Almaden Blvd., San Jose, CA 95113; telephone (408) 947-6587, fax (408) 947-4971, e-mail jharrington@qualitydigest.com.