Control charts provide an ongoing statistical test to determine if a recent reading or set of readings represents convincing evidence that a process has changed from an established stable average. The test also checks sample-to-sample variation to determine if the variation is within the established stable range. A stable process is predictable, and a control chart provides the evidence that a process is stable—or not.
Some control charts use a sample of items for each measurement. The sample average values tend to be normally distributed, allowing straightforward construction and interpretation of the control charts. The center line of a chart is the process average. The control limits are generally set at plus-or-minus three standard deviations from the mean.
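To make the construction concrete, here is a minimal sketch of the usual X-bar chart calculation, using the conventional range-based estimate of variation. The measurements, subgroup size, and tabulated constant are illustrative assumptions, not values from the article:

    # Minimal X-bar chart limit calculation (illustrative data).
    # For subgroups of size n = 5, the tabulated factor A2 = 0.577, so
    # grand mean +/- A2 * (mean range) corresponds to plus-or-minus
    # three standard deviations of the subgroup averages.
    subgroups = [
        [10.2, 9.9, 10.1, 10.0, 9.8],
        [10.1, 10.3, 9.9, 10.0, 10.2],
        [9.8, 10.0, 10.1, 9.9, 10.2],
        [10.0, 10.1, 9.7, 10.2, 10.0],
    ]
    means = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    grand_mean = sum(means) / len(means)    # center line
    mean_range = sum(ranges) / len(ranges)
    A2 = 0.577                              # constant for n = 5
    ucl = grand_mean + A2 * mean_range
    lcl = grand_mean - A2 * mean_range
    print(f"CL={grand_mean:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")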
…
Comments
A few points of clarification
Fred - nice article, I look forward to the subsequent articles. The emphasis on plotting the data in production order was particularly nice to see. There are a few items that I feel could benefit from some clarification, as they come close to several of the myths of control charts that we just can't seem to squash.
1. While you are absolutely correct that the sample averages (for many processes) will tend to be roughly Normally distributed (Central Limit Theorem), control charts and their associated 'out-of-control' rules do not rely on the distribution of either the subgroup values or the underlying population of individual values. Too often we hear people 'freeze' when their subgroup values aren't Normally distributed...
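The robustness described in point 1 can be checked with a quick simulation; this hypothetical sketch draws deliberately skewed (exponential) individual values from a stable process and counts how often a subgroup average breaches the three-sigma limits:

    # Sketch: three-sigma limits still behave sensibly when the
    # individuals are heavily skewed, because the subgroup averages
    # are far closer to Normal than the individuals are.
    import random
    random.seed(1)
    n, k = 4, 10000                  # subgroup size, number of subgroups
    d2 = 2.059                       # bias-correction constant for n = 4
    subgroups = [[random.expovariate(1.0) for _ in range(n)] for _ in range(k)]
    means = [sum(s) / n for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    grand_mean = sum(means) / k
    sigma_hat = (sum(ranges) / k) / d2        # estimate of population sigma
    ucl = grand_mean + 3 * sigma_hat / n ** 0.5
    lcl = grand_mean - 3 * sigma_hat / n ** 0.5
    out = sum(m > ucl or m < lcl for m in means)
    print(f"false-alarm rate: {out / k:.4f}")

In runs like this the false-alarm rate lands around a couple of percent, above the Normal-theory 0.27 percent but still small; the chart remains usable even though the individuals are far from Normal.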
2. The samples within a subgroup should be homogeneous because they are intended to be taken from a homogeneous process stream. This is probably beyond the scope of your article, but it does touch on the need to subgroup rationally. When our processes are not homogeneous – and many aren't – we must come up with rational subgrouping schemes to handle the non-homogeneity.
3. I believe you may have mis-communicated the relationship between the within-subgroup variation and the subgroup averages ("The intent is to detect shifts between samples because the variation within a sample is relatively small"). It is actually the opposite: the variation within a subgroup is larger than the variation of the subgroup averages when the process stream is homogeneous and 'in control'. The control limit formulas estimate the variation of the subgroup averages by using the within-subgroup standard deviation to estimate the population standard deviation. The standard deviation of the subgroup averages is then the population standard deviation divided by the square root of the subgroup size, n.
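In symbols, the relationship in point 3 is, in standard Shewhart notation (with R-bar the average subgroup range and d2 the tabulated bias-correction constant):

    \sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}},
    \qquad
    \mathrm{UCL},\ \mathrm{LCL} = \bar{\bar{X}} \pm 3\,\frac{\hat{\sigma}}{\sqrt{n}},
    \qquad
    \hat{\sigma} = \frac{\bar{R}}{d_2}

so the three-sigma limits for the averages chart are built from the within-subgroup variation, scaled down by the square root of the subgroup size.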
4. Random samples across a period of time or within a batch may be a rational subgrouping approach, but if you have a homogeneous process their variation will not be greater than that of sequential parts. If you have autocorrelation, systematic drifts such as from tool wear or decay, or a non-homogeneous process, then a random sample will have greater variation than sequential parts. However, this isn't a signal of a lack of control; it is a signal that you need to adjust your subgrouping scheme…
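A short simulation can illustrate point 4; the drift rate, noise level, and subgroup size here are invented for illustration:

    # Sketch: under a steady drift (e.g., tool wear), subgroups of
    # consecutive parts capture only the short-term noise, while
    # subgroups drawn at random across the whole run also absorb the
    # drift, inflating their within-subgroup variation.
    import random, statistics
    random.seed(2)
    drift, noise = 0.01, 1.0
    parts = [drift * t + random.gauss(0, noise) for t in range(1000)]
    n = 5
    sequential = [parts[i:i + n] for i in range(0, len(parts), n)]
    scattered = [random.sample(parts, n) for _ in sequential]

    def avg_within_sd(groups):
        return statistics.mean(statistics.stdev(g) for g in groups)

    print(f"within-subgroup sd, sequential parts:  {avg_within_sd(sequential):.2f}")
    print(f"within-subgroup sd, random across run: {avg_within_sd(scattered):.2f}")

The random subgroups come out noticeably wider because they absorb the long-run trend; that is exactly the situation where the subgrouping scheme, not the process, needs rethinking.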
5. And lastly, the title. Again, I think you may have simply mis-stated the situation? Processes can be improved even when they are not in a state of statistical control, or are 'un-stable'. Certainly 'tampering' with a process – whether it is in control or not – won't 'fix' anything. Tampering will only make the variation larger. There are many diagnostic approaches to determining the causal mechanism and improving 'unstable' processes. Indeed, when a process goes out of control, we must use these methods to determine the cause of the excursion and return the process to its 'stable' baseline or better.