Hypothesis testing is the basis of everything we do in statistics, including not only design of experiments (DOE) but also statistical process control (SPC) and acceptance sampling. These activities begin with a null hypothesis that is similar to the presumption of innocence in a criminal trial: we assume that there is no difference between the experiment and the control, that the process is in control, or that a production lot is acceptable.
We must then prove, beyond a quantifiable reasonable doubt (the significance level, or Type I risk), that the alternative hypothesis is true instead. The alternative hypothesis is that the experiment is different from the control, that the process is out of control, or that the production lot must be rejected. There is also a Type II risk of failing to reject the null hypothesis when it should be rejected (similar to acquitting a guilty defendant), and this risk can be reduced by obtaining more data. Selection of an adequate sample size is therefore always important in statistical work; it is why statisticians always want more data.
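As a minimal sketch of these ideas, the example below runs a two-sample t test at a 5% significance level (Type I risk) and rejects the null hypothesis of "no difference" only when the p-value falls below it. The data values are invented purely for illustration.

```python
# Sketch: hypothesis testing with a two-sample t test.
# Data are invented for illustration only.
from scipy import stats

control = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
experiment = [10.4, 10.6, 10.3, 10.7, 10.5, 10.6]

alpha = 0.05  # significance level (Type I risk)
t_stat, p_value = stats.ttest_ind(control, experiment)

# Reject the null hypothesis (no difference) only if p < alpha;
# otherwise we fail to reject it (we do not "prove" it true).
reject_null = p_value < alpha
```

Note that a larger sample would lower the Type II risk (failing to detect a real difference) without changing the significance level.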
Day 1 Schedule
- Benefits of designed experiments
- Hypothesis testing
- Factors, levels, and interactions
- Basics of randomization, blocking, and replication
- Box-and-whisker plots: an excellent way to visualize data and differences between treatments
- Tests for the difference in means between two or more populations
- t test and paired-comparison t test
- Analysis of Variance (ANOVA)
- The assumption of normality (a bell-curve distribution) behind traditional statistical methods
- Test this assumption as a mandatory follow-up to a designed experiment
- Chi-square test for goodness of fit
- Probability plot
- Anderson-Darling test
- Apply remedies and alternatives when the assumption is not met.
- Nonparametric method: Kruskal-Wallis test
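Two of the Day 1 topics above can be sketched in a few lines: checking the normality assumption with the Anderson-Darling test, and falling back to the nonparametric Kruskal-Wallis test when that assumption is in doubt. The data values are invented for illustration.

```python
# Sketch: testing the normality assumption, then comparing groups
# with a nonparametric method. Data are invented for illustration.
from scipy import stats

group_a = [1.2, 1.5, 1.1, 1.4, 1.3, 1.6]
group_b = [1.8, 2.0, 1.9, 2.1, 1.7, 2.2]

# Anderson-Darling: reject normality if the statistic exceeds the
# critical value at the chosen significance level (index 2 = 5%).
ad = stats.anderson(group_a, dist='norm')
normal_ok = ad.statistic < ad.critical_values[2]

# Kruskal-Wallis compares groups via ranks, without assuming normality.
h_stat, p_value = stats.kruskal(group_a, group_b)
```

With these clearly separated groups, the Kruskal-Wallis test reports a significant difference even without the normality assumption.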
Day 2 Schedule
- Use two-way Analysis of Variance (ANOVA) to determine:
- Whether both, one, or neither of two factors (e.g., pressure and temperature) has a significant effect on the outcome of an experiment
- Whether interactions between the factors are present
- Introduction to Factorial Designs
- Understand the basis of linear regression:
- Taylor series and why it is dangerous to extrapolate beyond the region of an experiment
- Develop and test linear regression models of the form y = f(x)
- Perform residual analysis
- Model building: test terms for significance and remove unimportant ones to simplify the model
- Relationship between regression and ANOVA
- Apply linear regression to two or more factors
- Use power laws and dimensional analysis to develop physical models
- Be aware of some pitfalls in regression, and the possible need for transformations:
- Design outliers (points outside the region of interest that exert excessive leverage on a regression model)
- Response outliers (responses that could have been affected by special or assignable causes, and that will unduly influence the model)
- Multicollinearity, or dependence of one design variable on another
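The regression topics in the Day 2 schedule can be sketched with a one-factor least-squares fit followed by a basic residual check. The data are invented for illustration, with a known slope of 2 plus small noise.

```python
# Sketch: fitting a linear model y = f(x) and examining residuals.
# Data are invented for illustration (true slope 2, intercept 1).
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
noise = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 0.0])
y = 2.0 * x + 1.0 + noise

fit = stats.linregress(x, y)

# Residual analysis: residuals from a least-squares fit with an
# intercept sum to zero and should show no systematic pattern.
residuals = y - (fit.intercept + fit.slope * x)
```

Residual plots against x (or against fitted values) are where response outliers and the need for transformations typically reveal themselves.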
William A. Levinson
Levinson Productivity Systems
For more information about this conference visit http://www.researchandmarkets.com/research/zz79jg/introduction_to