Whenever I write about continuous improvement and lean Six Sigma, without fail I get a comment about Toyota and its quality issues. So I decided to investigate this matter further, present the facts, and let the data be the voice of reason. I do expect the proverbial “Yabut and Costello” comments—i.e., “But what about this and what about that?” My goal, though, is to compare trends over time and across the automotive industry.
In summary, when compared to the other automobile manufacturers, Toyota's quality issues are far less severe than what we are led to believe. However, from a lean Six Sigma perspective, there is much work to be done, and here are the data to demonstrate it.
…
Comments
Toyota Quality
While I agree that from a Six Sigma perspective the measure of defects per 100 vehicles is nowhere near 3.4 PPM, what you are not considering is the incredible complexity of an automobile and the number of opportunities for defects. I have managed supplier quality from both an OEM and a supplier perspective, and many suppliers delivering parts that go into these automobiles routinely achieve PPM levels of less than 100. The OEMs also have internal manufacturing operations making literally thousands of complex, defect-free parts per day (e.g., crankshafts, cylinder blocks, transmission cases, valve bodies) with extremely tight tolerances. A typical crankshaft has over 200 individual tolerances, and automakers strive for at least 4 sigma (1.33 Cpk) for all of them and 5 sigma (1.67 Cpk) for critical features. In the event of lower capability, 100% inspection is used to ensure defects are not accepted.
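The arithmetic linking Cpk to sigma levels and defect rates is easy to reproduce. The sketch below uses made-up tolerance numbers, not actual crankshaft specs, and assumes a stable normal process:

```python
from math import erfc, sqrt

def norm_sf(z):
    # Upper-tail probability of the standard normal distribution
    return 0.5 * erfc(z / sqrt(2))

def cpk(mean, sigma, lsl, usl):
    # Capability index: distance from the mean to the nearer spec
    # limit, in units of three standard deviations
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

def expected_ppm(mean, sigma, lsl, usl):
    # Expected defective parts per million for a stable normal process
    p = norm_sf((usl - mean) / sigma) + norm_sf((mean - lsl) / sigma)
    return p * 1e6

# A centered process with spec limits 4 sigma away: Cpk = 1.33
print(cpk(10.0, 0.5, 8.0, 12.0))                  # 1.333...
print(round(expected_ppm(10.0, 0.5, 8.0, 12.0)))  # 63 PPM
```

A centered Cpk of 1.33 (4 sigma) corresponds to roughly 63 defective PPM; raising it to 1.67 (5 sigma) brings the rate under 1 PPM, which is why the critical features get the tighter target.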
Customer complaints can be for simple, subjective items like the appearance of interior trim, or for assembly defects leading to squeaks, rattles, or other NVH issues. It is not an easy task to make a perfect vehicle for all tastes. I think all automakers are doing a pretty good job, and Toyota is better than most.
I agree with Mitchell's comments
3.4 DPMO is a compelling target, but citing it puts your (Kyle's) credibility at risk. Another column in this issue, referring to the 30,000-foot view, asked if executives were asking the correct questions. You haven't shown how focusing on a single aggregate quality metric would advance any automaker's cause, and you omitted perhaps the most meaningful point you could make, which is that Toyota itself has recognized that it has grown beyond its ability to maintain its aggressive (my word) learning culture and that it has a sensei shortage which it is, presumably, addressing. Lean and Six Sigma are just shadows of TPS.
A trend?
I have to agree with almost all these comments...we'd all love to be defect-free. However, if your 100 defects are all typos in your manual and Bluetooth sync complaints, and 100 of your competition's wheels fell off or brakes locked up, I think it's reasonable to conclude that you have fewer quality problems than your competition. While I believe there's nothing wrong with Six Sigma as a goal, I also believe that the 3.4 per million is nonsense, based on the mistaken idea that you could somehow sustain an undetected 1.5-sigma shift in a controlled process. While it made sense for Mikel Harry to have his engineers shift all their component means 1.5 sigma in the worst-case direction in simulations, to test designs for robustness, it was never intended to be a metric, much less a production metric. Six Sigma Black Belts and Master Black Belts (full disclosure: I hold both certifications) will gain some credibility when they learn enough to abandon that 1.5-sigma shift in favor of the lessons of the Taguchi loss function. What we have to be more concerned with is getting on target and constantly reducing variation. DPMO can be a very useful and intuitive estimate for one aspect of quality (in context); turning it into a process sigma is a victory of computation over common sense.
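For readers who want to see where the famous 3.4 comes from, the arithmetic is easy to reproduce. This sketch simply computes the normal tail areas for spec limits at six sigma with the mean shifted 1.5 sigma toward one limit; it takes no position on whether that shift is justified:

```python
from math import erfc, sqrt

def norm_sf(z):
    # Upper-tail probability of the standard normal distribution
    return 0.5 * erfc(z / sqrt(2))

def dpmo(sigma_level, shift=1.5):
    # Two-sided defect rate, per million, for spec limits at
    # +/- sigma_level with the mean shifted `shift` sigma toward one limit
    p = norm_sf(sigma_level - shift) + norm_sf(sigma_level + shift)
    return p * 1e6

print(round(dpmo(6.0), 1))       # 3.4, the canonical figure
print(round(dpmo(6.0, 0.0), 4))  # about 0.002 for a truly centered process
```

Without the assumed shift, a genuinely centered six-sigma process would produce about two defects per billion, which shows how much of the "3.4" is the shift rather than the process.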
My larger concern is the idea that somehow these numbers represent a "trend." The old saying still holds: "Any fool can make a trend out of two numbers." There's nothing in the data to suggest a trend. Some numbers were higher this year, and some were lower...that's to be expected; it will happen most years. Unless we have enough of those numbers to see if one has shifted or trended significantly (special cause), there's no "trend." For more on this see Don Wheeler's "Lies, Damned Lies, and Teens Who Smoke." It's one of the best articles ever published in Quality Digest.
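Wheeler's warning is easy to demonstrate: a perfectly stable process still moves up or down every single year, so any two consecutive numbers will always look like a "trend" to someone determined to find one. A small illustrative simulation (made-up data):

```python
import random

random.seed(42)  # arbitrary seed; any stable process shows the same effect
# Twenty yearly "scores" drawn from one unchanging distribution:
years = [random.gauss(100, 5) for _ in range(20)]

# How many year-over-year moves would a two-point "trend" reader see?
ups = sum(1 for a, b in zip(years, years[1:]) if b > a)
downs = len(years) - 1 - ups
print(ups, downs)  # every pair moves up or down, yet nothing has changed
```

All nineteen year-over-year comparisons go one way or the other even though the underlying process never changed, which is exactly why a pair of IQS results cannot establish a trend.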
What is truly significant is the fact that the numbers are as low as they are. As a survivor of the low-quality days of the '70s in the auto industry, I can tell you that Toyota led the way, and everyone else has been playing catch-up ever since. The fact that Ford and GM are in the same bar chart as Toyota points to a major improvement over the past 30 years. In the '70s, it was a GIVEN in the US auto industry that you didn't even try to start a new car when it rolled off of final assembly. You pushed it to a bay where a rework mechanic would work on it to get it to run.
Thanks for your comments
Trends
The author's response to the comment regarding trends is confusing. The author stated "The comment I made about trend you are correct in saying that would be premature with only two data points and I do recognize that."
This conflicts with:
1. the author's objective of the article as stated in the first paragraph, "My goal, though, is to compare trends over time and across the automotive industry."
2. the section heading "Trend in Initial Quality Study", which was included directly from the original article as published on the author's website. (the link was provided at the end of the Quality Digest article)
3. the author’s “Concluding thoughts” that state “Although more years of data would be required to see for certain if the quality trends observed in 2011 and 2012 are likely to continue,” which clearly claim a trend from two data points of 2011 and 2012 and furthers this thought by stating that additional data would be needed to improve upon this claim with more certainty. This misses the point that with two data points there is no certainty of a trend, as there is no trend.
In spite of the author's ultimate acknowledgement that two data points do not indicate a trend, there is no doubt from the article that drawing a conclusion of a trend in the data was the author's intent.
However, this is not the main problem with the article. The main problem stems from the statement "from a lean Six Sigma perspective, there is much work to be done, and here are the data to demonstrate this." The primary fallacy lies with the premise that one can use data to demonstrate the need for improvement activity. Continuous improvement is a philosophical belief. You either believe this as a fundamental principle or you do not. The premise of using data to support the need for improvement leads to the converse: that there is a point where the data would indicate that improvement is no longer needed. I am confident that Toyota does not use data to demonstrate the need for improvement. Anyone following this logic would ultimately be out of business. Data, however, can be used to help prioritize where resources are to be directed for improvement, not whether they will be directed for improvement.
Thank you
I am not certain of your name; however, thank you for your comments. My intention in the concluding remarks was to highlight that more data would be needed to conclude whether there is a trend. Perhaps the article could have been worded that way and made more precise.
Regarding the logic you present for continuous improvement: would this not hold true only if there were a point when no data existed? What happens if that point never exists?
In addition, if you read the report "Toyota's Secret: The A3 Report" by John Shook in the MIT Sloan Management Review, summer 2009, the methodology outlined there clearly shows a data-driven approach. The A3 report is a tool used for constant improvement (p. 31 of the article I cited). According to the article, the author worked with Toyota.
Your quote
Dear Rib,
you said "However, if your 100 defects are all typos in your manual and bluetooth sync process complaints, and your 100 of your competition's wheels fell off or brakes locked up, I think it's reasonable to conclude that you have fewer quality problems than your competition"
I disagree. It does not mean you have FEWER problems. It just means the IMPACT of your quality issues is less.
If you make the perfect car with only one mistake, namely that it blows up when you start it, then you actually have the fewest quality mistakes. However, since 100 percent of the cars blowing up means no one can use the product, the impact of that quality issue will be high, even though your IQS score may still be top 10 (as no one whose car blew up will send back a report, giving you a score of 0 on the IQS).
The problem in this discussion is the difference in interpretation of the word "QUALITY." Some use quality here as part of "customer satisfaction"; some use it as a reference to the size of the problem (a wheel falling off being a bigger issue than a button not working), which makes this article and the subsequent comments/discussions go on endlessly.
Thank you for your comment
Thank you Mitchell for your comments
I agree that defects per million opportunities (DPMO) is a more useful measure; unfortunately, the quality survey (IQS) used for this article does not measure it.
The points, or limitations, that you raise with respect to the survey (i.e., that it includes simple customer complaints) I have read before, and I am not certain I completely agree. Although not every defect is the same or should be treated the same (a good point to bring up), I think that any deviation from customer requirements and/or specifications should be included in your DPMO or PPM measure if you want to capture the whole of the customer experience. I would, however, agree with applying different weightings to different defect types as part of measuring DPMO or PPM.
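The weighting idea can be sketched in a few lines. The defect categories and weights below are entirely hypothetical; the IQS publishes no such weighting scheme, and this only illustrates scoring severe defects more heavily than cosmetic ones:

```python
# Hypothetical defect categories with illustrative weights
defects = {
    "squeaks_and_rattles": (40, 0.2),   # (count, weight)
    "bluetooth_sync":      (55, 0.5),
    "brake_fault":         (2, 10.0),
}
opportunities = 1_000_000  # assumed defect opportunities in the sample

raw = sum(count for count, _ in defects.values())
weighted = sum(count * w for count, w in defects.values())

raw_dpmo = raw / opportunities * 1e6            # every defect counts once
weighted_dpmo = weighted / opportunities * 1e6  # severity-adjusted
print(round(raw_dpmo, 1), round(weighted_dpmo, 1))
```

With these made-up numbers the raw count (97 defects) and the weighted score (55.5) tell different stories, which is the whole argument for weighting: two rare brake faults outweigh dozens of squeaks.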
Finally, the article presents trends that show significant quality improvements for Toyota and other manufacturers, and that amongst those manufacturers Toyota fares well with respect to quality. Although not covered in the article, the IQS (from 2011 to 2012) showed an increase in defects per 100 vehicles in areas of new technology such as Bluetooth. So is there opportunity for improvement? Sure there is.
Thanks very much for your comment, you raise great points.
Kyle
Dear Kyle,
I think you need to separate two things:
1) Customer Experience or satisfaction is NOT the same as Quality
2) Defects per Million Opportunities is not the same as Defects per Million products
It's apples and oranges, and in my opinion this entire article has been nothing but that. I fail to see the relation between the IQS and the Toyota quality problems, as the IQS is subjective and has no scaling or separation between actual quality problems and human perception/opinion of a problem.
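Mark's second distinction is worth putting in numbers. With assumed, purely illustrative figures (not real IQS data), the same defect count yields wildly different "per million" values depending on the denominator:

```python
# Illustrative numbers only: the same defect count gives very
# different "per million" figures depending on the denominator.
vehicles = 100_000
opportunities_per_vehicle = 2_000  # assumed complexity of one vehicle
defects = 11_000                   # i.e., 110 defects per 100 vehicles

# Defects per million products (vehicles)
dpm_products = defects / vehicles * 1e6
# Defects per million opportunities
dpmo = defects / (vehicles * opportunities_per_vehicle) * 1e6

print(round(dpm_products))  # 110000, which sounds terrible
print(round(dpmo, 1))       # 55.0, which sounds excellent; same data
```

Neither figure is wrong; they simply answer different questions, which is why quoting a "sigma level" without stating the opportunity count invites exactly the apples-and-oranges comparison Mark objects to.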
Thank you
Thank you Mark for your comments.
For a definition of quality I will refer you to the ISO 8402-1986 standard. It defines quality as "the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs." Would customer satisfaction therefore not be a basic need for most customers?
With respect to number 2): yes, you are correct, DPMO and defects per million products are different.
In terms of your comments to Rip (not Rib), I am not certain that was his point. The way I read his statement (and this is only my view), it aligns with your comment that the IQS is subjective. That point I would agree with. The majority of the article spells out how, "according to the IQS," for the last two years (not a trend) Toyota fares well compared to other companies. I think Rip brings up some very valuable points about the Six Sigma target, and I would think that would be a topic of discussion. The Taguchi loss function does have its limitations and critics.
The reason I wrote the article was because often when I write about continuous improvement I hear about Toyota and its quality problems (which takes away from the need to always improve). Is there room for improvement? I think there is always room for improvement for any company in any industry. Does this take away from the fact that Toyota ranks well? Absolutely not, they do rank well.
On a side note, I do own a Toyota and am happy with it. Does it have issues? Yes; many times when I am speaking on the hands-free Bluetooth, people say they catch only every third word and that my voice fades in and out. This is not an issue I have in our other vehicle, so I know it is not my iPhone. I thank you for your comments.
Quality
Hi Kyle,
To first give the quick summary:
Using the IQS of two years, and concluding from this small amount of subjective information that Toyota has no quality issue, is not a proper conclusion or analysis and has nothing to do with Six Sigma. I think that's the point Rip (and maybe I?) are trying to make.
Now for the longer stuff:
I am aware of ISO's definition of quality; however, in the real world and with real working people, quality has a different meaning. Quality exists at every level and every stage of a product, and at each stage it means something different.
For quality engineers, for instance, quality could simply be the conformity of a product to its drawings/specifications. The production manager might care about the stability of his process rather than the product itself. Everyone has his own opinion or idea of quality. So when discussing "Toyota's quality issues," one needs to be clear which quality one means.
The current quality issues people are talking about are non-functional products (i.e., wheel and brake issues).
As for the IQS:
The IQS measures customer opinion, by definition a subjective matter. People buying a Ford have expectations different from those of people buying a Mercedes, so the feedback coming from a Ford customer will reflect a different set of tolerances/requirements than that from a Mercedes customer. Hence, in my opinion, the IQS cannot be used to properly compare the quality of one brand versus another.
Using tools like statistics in any situation means you always have to think back to the basics: is this tool suitable for this job, and what are its limitations? I could have the most perfect electric drill in the world, but when trying to get a nail into the wall, I want a hammer; hitting the nail in with the grip of my fancy drill is not the right way to go forward.
Regards,Mark
Mark I think you may want to
Not really
Dear Kyle,
Not really.
What I bring up with Rip is that his example is, in my opinion, incorrect. He is looking at the "quality" (i.e., severity) of the problem as a measure of how big someone's quality problem is, instead of looking at the quantity of problems.
By using the IQS, you are looking at the quantity of the problems, without really looking at the quality (severity) of the problems.
Both are wrong and right at the same time. The reason it is wrong for this article is that you have to clearly define which quality we are talking about. When you talk about quality problems in the context of Six Sigma and/or the Toyota Way, you can talk about product quality, process quality, or the quality/proper functioning of your quality system.
In your article, it seems you are taking the product quality issues (the severe, high-impact ones), linking them to the IQS (which is a quantity measurement of problems), and concluding that the problems are not big.
However, when you have a low number of big-impact problems, you have a real product quality problem (the wheels-come-off or car-blows-up examples).
When you have a high number of low- (or high-) impact problems, you also have a real product quality problem (even though, if the customers find it acceptable, it will not show up in the IQS).
But neither of these gives any feedback or proof on the quality of the quality system, the functioning of the Toyota Way, or the functioning of Six Sigma within Toyota.
And with the way the article is written, it seems as if you are trying to conclude something about this last point. That, in my opinion, cannot be done with the information in your article.
So to conclude:
1) You are taking product quality, process quality, and quality of the quality system and confusing them a bit for the readers, through your choice of subject and your undefined or shifting definition of the word "quality" throughout the article (comparing apples and oranges).
2) You draw a statistical conclusion that cannot be drawn from the data you have. Rip already gave a good reference to the brilliant article on this: Don Wheeler's "Lies, Damned Lies, and Teens Who Smoke."
Hope this reply clarifies my answers a bit better.
Regards,Mark
If Toyota was stupid enough
If Toyota were stupid enough to have a Six Sigma program, all they would have to do to reduce defects to any level they wished would be to follow the advice of Bill Smith, the founder of Six Sigma, and broaden specification limits.
Anyone who doubts this mind-numbing fact should obtain a copy of Mr. Smith's paper and read his nonsense for themselves.
The rubbish behind the "3.4" is an even greater farce.
Toyota quality
I agree with Rip. Although I have not been in the auto business as long, I remember working in a GM engine plant in the early '80s where very few of the TPS lean methods were being implemented (TPS was a paradigm shift that had not taken hold yet). We had an automatic transfer assembly line with a 15-second cycle time, and it never stopped. Any misbuild was removed to be repaired later (usually weekend work for overtime pay; how's that for an incentive to build quality!). The machining departments created huge buffers of inventory to prevent line stoppages, and quality issues were sorted by end-of-line 100% automatic inspection gauges.
A lot has been learned since then, but mistakes are still being made. I have also been Black Belt trained, and I also believe the 3.4 PPM and the 1.5-sigma mean shift are rubbish. I also see a misguided focus on indices (Cpk, Ppk, etc.) instead of striving to understand processes and stability. An example: people use software tools to fit distribution models to unstable data in order to report indices to management. This practice has even found its way into various ISO/DIN standards.
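The point about checking stability before trusting any capability index can be illustrated with a minimal XmR (individuals) chart calculation. The data are made up, with one out-of-control value slipped in:

```python
# Made-up measurements; the 12.6 was slipped in to show the point.
data = [10.1, 10.3, 9.8, 10.2, 10.0, 10.4, 9.9, 10.1, 12.6, 10.2]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
x_bar = sum(data) / len(data)

# XmR (individuals) chart limits: x_bar +/- 2.66 * average moving range
ucl = x_bar + 2.66 * mr_bar
lcl = x_bar - 2.66 * mr_bar

out_of_control = [x for x in data if x < lcl or x > ucl]
print(out_of_control)  # the 12.6 signals instability
```

A Cpk fitted to these same numbers would quietly absorb the 12.6 into a wider "distribution"; the chart instead flags it as a special cause to be investigated before any index means anything.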
I appreciate Quality Digest for publishing articles such as this; they provoke thoughtful comments from people with more expertise than I have.
Kaizen culture
It all depends upon "what" the specification limits call a problem and "how" it's defined.