Client A came to me for a consultation and told me upfront his manager would allow him to run only 12 experiments. I asked for his objective. When I informed him that it would take more than 300 experiments to test his objective, he replied, “All right, I’ll run 20.”
Sigh. No, he needed either to redefine his objectives or not run the experiment at all.
I never saw him again.
Client B came to me with what he felt was a clearly defined objective. He thought he just needed a 10-minute consult for a design template recommendation. It actually took three consults with me totaling 2 1/2 hours because I asked questions similar to those required for planning the experiment I wrote about in my September 2016 column.
During the first two consults, Client B would often say, “Oh... I didn’t think of that. I’ll need to check it out.” He eventually ran the experiment, came to me with the data, and asked, “Could you have the analysis next week?” I asked him to sit down and was able to finish the analysis (including contour plots) in about 20 minutes.
…
Comments
What does DOE stand for in this instance?
Nice article, but I'm not sure what DOE stands for in this instance.
I've found it's a good practice to include what the acronym means the first time it's used and then use the acronym later, especially for any professional document.
DOE does not appear to mean Department of Energy or Education... maybe "Depending on Experience"... but not real sure. Maybe something related to operations.
Thanks again for the nice article, but I spent too much time trying to figure out what the acronym might stand for.
Darrel
Good catch
Using Standard Deviation of Process to estimate sample size
I believe the relevant standard deviation to be concerned with (when calculating the number of replicates) is that due to experimental error, that is, the standard deviation among replicate measurements. This is often much less than the typical process variation we might see, assuming we are following good experimental practices (one operator, a single lot of material, or using blocking and covariates to manage variation due to these nuisance sources in the experiment). Of course, it's possible that experimental error could be more than normal process variation if going through the various setups results in unintended variation.
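To make the contrast concrete, here is a minimal sample-size sketch (hypothetical numbers, and a generic two-sample power calculation rather than anything from the column): the smaller the standard deviation you plug in, the fewer replicates the calculation demands.

```python
# Minimal sketch: how the choice of standard deviation drives the required
# number of replicates. All numbers below are hypothetical.
from statsmodels.stats.power import TTestIndPower

delta = 2.0          # smallest shift in the response worth detecting
sigma_error = 0.8    # std. dev. among replicate runs (experimental error)
sigma_process = 2.5  # typical long-run process std. dev.

solver = TTestIndPower()
for label, sigma in [("experimental error", sigma_error),
                     ("process variation", sigma_process)]:
    # solve_power returns the runs per condition needed for 80% power at alpha = 0.05
    n = solver.solve_power(effect_size=delta / sigma, alpha=0.05, power=0.80)
    print(f"Using {label}: about {n:.1f} runs per condition")
```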
Which variation to use?
Thanks for commenting, Steven. I see what you mean, BUT...what happens when you try to take your results from a tightly controlled experiment into the real world environment, i.e., multiple operators and lots of material that combine at random? That's the REALITY of implementation and cannot be controlled.
A more robust approach might be judicious blocking of these nuisance (random, but very real) factors. The resulting variation would be larger than with your approach but more realistic, and not as inflated as it would be without leveraging the power to block them -- control charts managing the process would detect special causes among those factors with the appropriate variation "yardstick." This variation is also not as naively low as what you get by controlling factors that are realistically uncontrollable. (A sketch of such a blocked analysis appears after this comment.)
The result you get from such a design is good only for your specific designed conditions, i.e., you have the result for THIS specific operator for THIS specific lot (enumerative). How does that help you? What is your theory about putting this result in the real world and not a lab?
As W. Edwards Deming always asked, "What can you predict?" How robust is your result? And this gets into the question: How is variation going to manifest in your results (analytic), i.e., multiple operators, multiple lots, and factors you didn't even envision affecting your result? Control charts are a way to shed light on these factors and increase your degree of belief in your study's validity.
Davis
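As a rough illustration of the blocking described above (hypothetical factor names and simulated data, not from this discussion), the sketch below includes operator and material lot as block terms so the residual standard deviation reflects experimental error rather than those nuisance sources.

```python
# Minimal sketch with simulated data: treat operator and lot as blocks so their
# contribution to variation is separated from the factor effect of interest.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
runs = pd.DataFrame({
    "temp":     np.tile([-1, 1], 12),                    # coded factor level
    "operator": np.repeat(["A", "B", "C"], 8),           # nuisance (block) factor
    "lot":      np.tile(np.repeat(["L1", "L2"], 4), 3),  # nuisance (block) factor
})
# Simulated response: a real temperature effect plus operator/lot offsets and noise
runs["y"] = (5.0 * runs["temp"]
             + runs["operator"].map({"A": 0.0, "B": 2.0, "C": -1.0})
             + runs["lot"].map({"L1": 0.0, "L2": 1.5})
             + rng.normal(0, 1, len(runs)))

# With the blocks in the model, the residual std. dev. estimates experimental
# error rather than operator-to-operator or lot-to-lot differences.
fit = smf.ols("y ~ temp + C(operator) + C(lot)", data=runs).fit()
print(fit.summary().tables[1])
print("Residual std. dev.:", round(float(np.sqrt(fit.mse_resid)), 2))
```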
Reply to Steven Wachs's Comment
See below...