STA 6166 UNIT 3 Section 2

Section 2

Multiple Comparisons

Readings Ott and Longnecker, Chapter 9, pages 427-468.
Instructor Guidance

In the previous unit, we developed tests of the hypothesis of no mean differences among more than two populations under the assumption of a common variance. What happens when we conclude that not all the means are equal? Do we leave the issue there? Of course not! Now we want to know which populations have means that differ from which other populations. We want to separate groups of populations with common means from other groups of populations with common, but different, means. The collection of procedures we use to accomplish this task is called multiple comparison or mean separation procedures.

If there are t populations, there are t(t-1)/2 possible pair-wise comparisons. It is tempting to simply compare every population to every other population using something familiar, like the two independent sample t-test. Of course this can be done, but unfortunately it will often lead us to wrong conclusions: from the results of such an analysis we tend to declare populations different more often than we should. Part of the problem is that each separate t-test does not use all the available information to compute the best estimate of the common underlying variance parameter. The other part is that running many tests at once does not account for how often sample means will appear different by chance alone. We need a better approach to this problem.
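To see why unadjusted pairwise t-tests get us into trouble, here is a minimal simulation sketch (in Python, which is not required for the course; the group sizes, seed, and number of replications are arbitrary choices for illustration):

```python
# A minimal simulation sketch (not from the text): all t = 5 population means are
# truly equal, yet running all t(t-1)/2 = 10 pairwise t-tests at alpha = 0.05
# flags at least one "significant" difference far more often than 5% of the time.
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(0)
t_groups, n, alpha, reps = 5, 10, 0.05, 2000

false_alarms = 0
for _ in range(reps):
    samples = [rng.normal(loc=0.0, scale=1.0, size=n) for _ in range(t_groups)]
    pvals = [stats.ttest_ind(samples[i], samples[j]).pvalue
             for i, j in combinations(range(t_groups), 2)]
    false_alarms += any(p < alpha for p in pvals)

print(f"Proportion of experiments with at least one false difference: "
      f"{false_alarms / reps:.3f}")   # typically around 0.25-0.30, not 0.05
```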

Before we develop multiple comparison procedures, we need an underlying general theory for comparing means from populations. The theory of linear contrasts provides this foundation. Linear contrasts are simply weighted sums of means (Def 9.1, page 432) in which the weights sum to zero. We can easily compute an estimate of the variance of a linear contrast, and with this ability develop an appropriate hypothesis test based on the F-distribution (page 436).
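To fix ideas (a standard summary; the symbols may differ slightly from the text's notation), a linear contrast among t sample means, its estimated variance, and the corresponding F test can be written as

$$
\hat{L} = \sum_{i=1}^{t} a_i \bar{y}_{i\cdot}, \qquad \sum_{i=1}^{t} a_i = 0, \qquad
\widehat{\mathrm{Var}}(\hat{L}) = s_W^2 \sum_{i=1}^{t} \frac{a_i^2}{n_i},
$$

$$
SSC = \frac{\hat{L}^2}{\sum_{i=1}^{t} a_i^2 / n_i}, \qquad
F = \frac{SSC}{s_W^2} \quad \text{with } (1,\, N-t) \text{ degrees of freedom under } H_0\!: L = 0,
$$

where $s_W^2$ is the pooled within-sample mean square (the MSE from the analysis of variance) and $N = \sum_i n_i$.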

In section three we discuss the differences between experimentwise and comparisonwise error rates. This section illustrates how relying on comparisonwise error rates alone can get us into trouble when we make inferences about multiple populations. By controlling the experimentwise error rate we can be more confident that when significant differences between two means are found, they are truly important and not simply an artifact of random sampling. Note that the Bonferroni inequality is often referred to in the research literature in conjunction with multiple comparisons, so it is best that you really understand what it says.
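As a reminder (a standard statement of the inequality, not quoted from the text), if m comparisons are each made at comparisonwise level $\alpha_C$, the Bonferroni inequality bounds the experimentwise error rate by

$$
\alpha_E \;\le\; m\,\alpha_C ,
$$

so running each of the m comparisons at level $\alpha/m$ guarantees an experimentwise error rate of at most $\alpha$.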

The text discusses a number of multiple comparison procedures: Fisher's LSD, Tukey's W procedure, the Student-Newman-Keuls (SNK) procedure, Dunnett's procedure, and Scheffé's S method. In the course notes I also discuss two other important procedures, the Duncan and Waller-Duncan procedures. In the notes I have tried to simplify the form and application of the multiple comparison procedures to clarify the computations performed in making the comparisons. These simple-to-perform comparisons turn out to be an extremely powerful adjunct to the analysis of variance.
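As one illustration (a sketch only, not part of the course materials, and assuming the Python statsmodels package is available), Tukey's W (HSD) procedure can be applied to a small made-up data set as follows:

```python
# A minimal sketch of Tukey's W (HSD) procedure using statsmodels;
# the data below are made up purely for illustration.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Three treatment groups with (made-up) true means 10, 10, and 12.
y = np.concatenate([rng.normal(10, 2, 15),
                    rng.normal(10, 2, 15),
                    rng.normal(12, 2, 15)])
groups = np.repeat(["A", "B", "C"], 15)

# All pairwise comparisons with the experimentwise error rate held at 0.05.
result = pairwise_tukeyhsd(endog=y, groups=groups, alpha=0.05)
print(result)   # table of pairwise mean differences, adjusted intervals, reject flags
```

The printed summary reports, for each pair of groups, the estimated mean difference, a confidence interval adjusted for all comparisons, and whether the difference is declared significant.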

PPT Lecture

Linear Contrasts and Multiple Comparisons (PowerPoint) and (PDF)

Optional Activities None
Exercises To check your understanding of the readings and to practice these concepts and methods, go to the Unit 3 Section 2 Exercises, work the exercises, and then check your answers against the page provided. Afterward, continue on to Unit 3 Section 3.