All Features
Donald J. Wheeler
Many articles and some textbooks describe process behavior charts as a manual technique for keeping a process on target. For example, in Norway the words used for SPC (statistical process control) translate as “statistical process steering.” Here, we’ll look at using a process behavior chart to…
Donald J. Wheeler
As we learned last month, the precision to tolerance ratio is a trigonometric function multiplied by a scalar constant. This means that it should never be interpreted as a proportion or percentage. Yet the simple P/T ratio is being used, and misunderstood, all over the world. So how can we properly…
Harish Jose
The success run theorem is one of the most common statistical rationales for sample sizes used for attribute data.
It is typically stated in the form:
Having observed zero failures in 22 samples, we can be 90% confident that the process is at least 90% reliable (i.e., that at least 90% of the population is conforming).
Or…
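The arithmetic behind that statement can be sketched in a few lines of Python. This is my own illustration of the success run theorem, not code from the post; the function name is hypothetical:

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Success run theorem: smallest n such that zero failures in n samples
    supports the claim of at least `reliability` at the given `confidence`.
    Derived from 1 - reliability**n >= confidence, i.e.
    n >= ln(1 - confidence) / ln(reliability)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# 90% confidence that the process is at least 90% reliable:
print(success_run_sample_size(0.90, 0.90))  # → 22
```

Both logarithms are negative, so the ratio is positive; rounding up gives the familiar n = 22 for the 90/90 case.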
Donald J. Wheeler
A simple approach for quantifying measurement error that has been around for over 200 years has recently been packaged as a “Type 1 repeatability study.” This column considers various questions surrounding this technique.
A Type 1 repeatability study starts with a “standard” item. This standard…
Kari Miller
Since 2010, citations for insufficient corrective and preventive action (CAPA) procedures have topped the list of the most common issues found in U.S. Food and Drug Administration (FDA) inspections, particularly in the medical device industry. Issues can occur while…
Donald J. Wheeler
Chunky data can distort your computations and lead to erroneous interpretations of your data. This column explains the signs of chunky data, outlines the nature of the underlying problem, and suggests what to do when it occurs.
When the measurement increments used are too large for the job…
Donald J. Wheeler
The keys to effective process behavior charts are rational sampling and rational subgrouping. As implied by the word rational, we must use our knowledge of the context to collect and organize data in a way that answers the interesting questions. This column will show the role that sample frequency…
Harish Jose
I’m looking at a topic in statistics. I’ve had a lot of feedback on one of my earlier posts on OC curves and how one can use them to generate a reliability/confidence statement based on sample size (n), and rejects (c). I provided an Excel spreadsheet that calculates the reliability/confidence…
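For readers without the spreadsheet, the binomial calculation behind such reliability/confidence statements can be sketched in Python. This is my own rendering of the standard formula, not code from the post, and the function name is hypothetical:

```python
from math import comb

def confidence(n: int, c: int, reliability: float) -> float:
    """Confidence that at least `reliability` of the population conforms,
    given at most c rejects in a sample of n (binomial model):
    C = 1 - sum_{k=0..c} C(n,k) * p^k * (1-p)^(n-k), with p = 1 - reliability."""
    p = 1.0 - reliability
    accept = sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(c + 1))
    return 1.0 - accept

# Zero rejects in 22 samples, claiming 90% reliability:
print(round(confidence(22, 0, 0.90), 3))  # → 0.902
```

With c = 0 this reduces to C = 1 - R^n, which is the success run theorem as a special case.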
Donald J. Wheeler
Ever since 1935, people have been trying to fine-tune Walter Shewhart’s simple but sophisticated process behavior chart. One of these embellishments is the use of two-sigma “warning” limits. This column will consider the theoretical and practical consequences of using two-sigma warning limits.…
Donald J. Wheeler
As the foundations of modern science were being laid, the need for a model for the uncertainty in a measurement became apparent. Here we look at the development of the theory of measurement error and discover its consequences.
The problem may be expressed as follows: Repeated measurements of one…
Donald J. Wheeler, Al Pfadt
In memory of Al Pfadt, Ph.D.
This article is a reprint of a paper Al and I presented several years ago. It illustrates how the interpretation and visual display of data in their context can facilitate discovery. Al’s integrated approach is a classic example not only for clinical practitioners but…
Donald J. Wheeler
The shape parameters for a probability model are called skewness and kurtosis. While skewness at least sounds like something we might understand, kurtosis simply sounds like jargon. Here we’ll use some examples to visualize just what happens to a probability model as kurtosis increases. Then we’ll…
Alan Metzel
Almost seven years ago, Quality Digest presented a short article by Matthew Barsalou titled “A Worksheet for Ishikawa Diagrams.” At the time, I commented about enhancements that would provide greater granularity. Indicating that he would probably have little time to devote to such a project,…
Donald J. Wheeler
The computation for skewness does not fully describe everything that happens as a distribution becomes more skewed. Here we shall use some examples to visualize just what skewness does—and does not—involve.
The mean for a probability model describes the balance point. The standard deviation…
Tony Boobier
Does your use of probabilities confuse your audience? Sometimes even using numbers can be misleading. The notion of a 1-in-100-year flood doesn’t rule out flooding in consecutive years. This description is no more than a statistical device for explaining the likelihood…
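The consecutive-years point can be made concrete with a short calculation. This sketch is mine, not from the article, and assumes the usual reading of “1-in-100-year” as a 1% chance in any given year, with years independent:

```python
def prob_at_least_one(annual_prob: float, years: int) -> float:
    """Chance of at least one exceedance event over the given number of
    years, assuming independent years."""
    return 1.0 - (1.0 - annual_prob) ** years

# A "1-in-100-year" flood has a 1% chance in any single year, so two
# consecutive flood years are unlikely but entirely possible:
print(round(prob_at_least_one(0.01, 2), 4))   # → 0.0199
print(round(prob_at_least_one(0.01, 30), 3))  # → 0.26 over a 30-year span
```

The label describes a long-run annual likelihood, not a guaranteed spacing between events.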
Donald J. Wheeler
There are four major questions in statistics. These can be listed under the headings of description, probability, inference, and homogeneity. An appreciation of the relationships between these four areas is essential for successful data analysis. This column outlines these relationships and…
Donald J. Wheeler
The cumulative sum (or Cusum) technique is occasionally offered as an alternative to process behavior charts, even though the two have completely different objectives. Process behavior charts characterize whether a process has been operated predictably. Cusums assume that the process is already being…
Donald J. Wheeler
Last month we found that capability and performance indexes have no inherent preference for one probability model over another. However, whenever we seek to convert these indexes into fractions of nonconforming product, we have to make use of some probability model. Here, we’ll look at the role…
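The role a probability model plays in such conversions can be illustrated with the simplest case: a centered process under a normal model, where each specification limit sits 3·Cp standard deviations from the mean. This sketch is my own illustration, not the author’s method:

```python
from math import erf, sqrt

def fraction_nonconforming(cp: float) -> float:
    """Fraction outside the specs for a centered process under a normal
    probability model: total tail area beyond +/- 3*Cp sigma."""
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi(3.0 * cp))

print(round(fraction_nonconforming(1.00), 4))  # → 0.0027
print(fraction_nonconforming(1.33))            # a few parts per hundred thousand
```

A different probability model would map the same index value to a different nonconforming fraction, which is exactly why the choice of model matters for these conversions.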
Donald J. Wheeler
Many people have been taught that capability indexes only apply to “normally distributed data.” This article will consider the various components of this idea to shed some light on what has, all too often, been based on superstition.
Capability indexes are statistics
Capability and performance…
Donald J. Wheeler
Walter Shewhart made a distinction between common causes and assignable causes based on the effects they have upon the process outcomes. While Shewhart’s distinction predated the arrival of chaos theory by 40 years, chaos theory provides a way to understand what Shewhart was talking about.…
William A. Levinson
Quality-related data collection is useful, but statistics can also deliver misleading and even dysfunctional results when the data are incomplete. This is often the case when information is collected only from surviving people or products, extremely satisfied or dissatisfied customers, or propagators of bad…
Donald J. Wheeler
Many different approaches to process improvement are on offer today. An appreciation of the way each approach works is crucial to selecting an approach that will be effective. Here we look at the problem of production and consider how the different improvement approaches deal with this problem.…
Paul Laughlin
As I started reading The Book of Why: The New Science of Cause and Effect, by Judea Pearl and Dana Mackenzie (Basic Books, 2018), I was reminded how often analysts trot out the bromide “correlation is not causation.” It’s a well-known warning. Indeed, I often encourage those learning data…
Donald J. Wheeler
Students are told that they need to check their data for normality before doing virtually any data analysis. And today’s software encourages this by automatically providing normal probability plots and lack-of-fit statistics as part of the output. So it’s not surprising that many think this is the…
Donald J. Wheeler
Acceptance sampling uses the observed properties of a sample drawn from a lot or batch to make a decision about whether to accept or reject that lot or batch. Although the textbooks are full of complex descriptions of various acceptance sampling plans, there are some very important aspects of…