t-Test vs ANOVA

Using inappropriate tests of significance for the circumstances, such as testing differences between more than two groups by means of repeated t-tests rather than by ANOVA, is exceedingly common [35]. Don't do it! Repeating pairwise comparisons ignores the experimental effects in all but the two groups being compared and increases the expected number of false-positive "significant" results; multiple means comparisons following ANOVA correct for the increasing chance of false positives by adjusting the acceptable hypothesis rejection level.
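
To see the inflation concretely, here is a minimal simulation sketch in Python with NumPy and SciPy (the group sizes, simulation count, and seed are arbitrary choices for illustration, not from the original text). All four groups are drawn from the same population, so any "significant" result is a false positive; the uncorrected pairwise t-tests flag one far more often than a single ANOVA does.

    import numpy as np
    from itertools import combinations
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_per_group, n_groups = 2000, 10, 4
    false_pos_ttests = 0
    false_pos_anova = 0

    for _ in range(n_sims):
        # All groups come from the same normal population (the null is true)
        groups = [rng.normal(0, 1, n_per_group) for _ in range(n_groups)]
        # Uncorrected approach: declare "significance" if ANY of the six
        # pairwise t-tests reaches p < 0.05
        if any(stats.ttest_ind(g1, g2).pvalue < 0.05
               for g1, g2 in combinations(groups, 2)):
            false_pos_ttests += 1
        # Single omnibus ANOVA across all four groups
        if stats.f_oneway(*groups).pvalue < 0.05:
            false_pos_anova += 1

    # Expect roughly 20% for the pairwise t-tests vs. about 5% for ANOVA
    print(f"Pairwise t-tests: {false_pos_ttests / n_sims:.1%} false positives")
    print(f"One-way ANOVA:    {false_pos_anova / n_sims:.1%} false positives")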

If you have only two groups to compare and they meet the assumptions of parametric tests, use Student's t-test. If you are testing more than two groups, however, use ANOVA. A standard ANOVA will tell you whether a statistically significant difference exists somewhere in your data sets, but not where, so you need to apply a multiple means comparison test after the ANOVA to pinpoint the culprits. Many such tests are available, each suited to different types of data, so consult the help menu of your statistical program to determine which one best fits your experimental design. As a rule of thumb, do not make any more comparisons than the goals of the experiment require, or you risk losing the ability to detect significant differences that truly exist. This is because multiple means comparison tests apply corrections that tighten the rejection level in proportion to the number of comparisons made (i.e., they effectively lower the p-value required to call a difference significant). In general, you can make one fewer comparison than the total number of groups without affecting this rejection level; exceed this, and the rejection level is adjusted. Dunnett's test, for instance, is suited to designs such as a dose-response curve, which include a matched control sample and several different doses; it compares each group only to the control (not to the other groups), answering the question "Which dose(s) give(s) a response?" but not "Was the response to one dose different from the response to another?" Example 6 shows how a significant difference between the control and treatment groups could be missed if more comparisons were made than necessary.

Example 6. You wish to test whether any of several doses of your treatment has a significant effect on your outcome. You have collected samples a, b, c, d (i.e., four groups), where a is the control and b, c, d represent responses to three different concentrations of your treatment. Assuming that your highest dose, d, shows a p-value of 0.02 with respect to control, this difference would be reported as significant by Dunnett's test; that is, three comparisons would be made (b vs. a, c vs. a, d vs. a), one less than the total number of groups, so no adjustment to the rejection level would be necessary to compensate for multiple comparisons, and the d vs. a calculated p-value of 0.02 (<0.05) would be interpreted as statistically significant.
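
As a rough sketch of the two-step workflow behind Example 6 (an omnibus ANOVA followed by Dunnett's test), the code below assumes SciPy 1.11 or later, which provides scipy.stats.dunnett; the measurements are invented for illustration and do not come from the text.

    import numpy as np
    from scipy import stats

    # Hypothetical measurements for the four groups in Example 6
    a_control = np.array([5.1, 4.9, 5.0, 5.2, 4.8])  # control (group a)
    b_low     = np.array([5.2, 5.0, 5.3, 5.1, 5.4])  # lowest dose
    c_mid     = np.array([5.4, 5.2, 5.5, 5.3, 5.6])  # middle dose
    d_high    = np.array([5.8, 5.6, 5.9, 5.7, 6.0])  # highest dose

    # Step 1: omnibus ANOVA tells you whether a difference exists somewhere
    f_stat, p_anova = stats.f_oneway(a_control, b_low, c_mid, d_high)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

    # Step 2: Dunnett's test compares each dose only to the control, so only
    # three comparisons are made (b vs. a, c vs. a, d vs. a)
    result = stats.dunnett(b_low, c_mid, d_high, control=a_control)
    for name, p in zip(["b vs. a", "c vs. a", "d vs. a"], result.pvalue):
        print(f"{name}: p = {p:.4f}")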

If instead of Dunnett's test, the Bonferroni test were chosen, six comparisons would be made (the same three as above, plus b vs. c, b vs. d, and c vs. d). Since this would be two more comparisons than the number of groups, the significance threshold of 0.05 would be adjusted to compensate for multiple comparisons (0.05/6 ≈ 0.008), such that a computed p-value of <0.008 would be required for significance at the so-called 0.05 rejection level (i.e., p = 0.02 would not be considered significant).
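
The Bonferroni arithmetic in this example is simple enough to verify directly; the sketch below merely restates it in code, with 0.02 standing in for the d vs. a p-value from Example 6.

    alpha = 0.05
    n_comparisons = 6                  # all pairwise comparisons among four groups
    threshold = alpha / n_comparisons  # 0.05 / 6, roughly 0.0083
    p_d_vs_a = 0.02                    # the d vs. a p-value from Example 6

    print(f"Bonferroni-adjusted threshold: {threshold:.4f}")
    print(f"Is p = {p_d_vs_a} significant after correction? {p_d_vs_a < threshold}")  # False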
