where SD1 is the standard deviation of group 1 and SD2 that of group 2, the two groups to be compared with each other (for a minimum difference from 10 to 15 units, these would be 3 and 4.5, respectively); the z-values are standard normal deviates corresponding to the selected significance criterion and statistical power (zcrit for two-tailed p < 0.05 is 1.960; zpwr for 0.80 power is 0.842; these and other values are found in standard statistical tables and software packages); and D is the minimum difference that one wishes to detect (5 units).

Round up to 10 rats in each group.
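As a sketch, the arithmetic above can be reproduced in a few lines, assuming the standard two-sample formula n = (SD1² + SD2²)(zcrit + zpwr)² / D² (the function name is mine, not from the text):

```python
import math

def sample_size_per_group(sd1, sd2, d, z_crit=1.960, z_pwr=0.842):
    """Minimum n per group for comparing two independent means.

    Defaults correspond to two-tailed p < 0.05 and 0.80 power,
    the z-values quoted in the text.
    """
    n = (sd1**2 + sd2**2) * (z_crit + z_pwr)**2 / d**2
    return math.ceil(n)  # always round up to a whole animal

# The worked example: SDs of 3 and 4.5, minimum difference of 5 units.
print(sample_size_per_group(3, 4.5, 5))  # 9.19 rounds up to 10 rats per group
```

Swapping in other z-values (e.g., 1.282 for 0.90 power) shows how quickly the required group size grows with more stringent criteria.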

Numerous Web page calculators and statistical programs are available to perform this type of calculation for you; however, it is difficult to use them properly without a working knowledge of the factors affecting power. This subject is well explained by Eng [25,26].

If your study is already complete and you found a significant difference, you do not have to be concerned as to whether the study was sufficiently powered. This is akin to wondering whether you put enough dynamite under the bridge after you have blown it to smithereens. If, however, you found no significant difference and you are feeling uncomfortable that you did not consider calculating the power before starting the study, you may be tempted to do so after the fact, by plugging the standard deviation and between-group difference actually observed in your study into the power formula. This practice, dubbed retrospective power analysis, is problematic: it does not tell you what the power was to find the smallest meaningful difference, only what the power was to find statistically significant the difference that actually existed between the groups in your analysis. Furthermore, this "observed" power will be inversely related to the p-value observed, so it does not add meaningfully to the information already obtained with the p-value [25,26]. Instead, approaches involving confidence intervals or chi-square tests are recommended to guide the interpretation of negative results [27]. Some practical considerations about power and sample size are discussed further on page 264.
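To illustrate the confidence-interval approach to a negative result, here is a minimal sketch using the normal approximation and entirely hypothetical summary statistics (the function and all numbers are illustrative, not from the text):

```python
import math

def diff_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.960):
    """95% confidence interval for the difference between two group means
    (normal approximation; summary statistics are hypothetical)."""
    diff = mean1 - mean2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return diff - z * se, diff + z * se

# Hypothetical "negative" study: p > 0.05, yet the interval still
# contains differences larger than the 5-unit minimum of interest.
low, high = diff_ci(52.0, 3.0, 10, 49.0, 4.5, 10)
print(f"95% CI for the difference: {low:.2f} to {high:.2f}")
```

Because the interval spans zero, the test is nonsignificant; but because it also extends beyond the minimum meaningful difference, the study cannot rule that difference out. That is exactly the information retrospective power analysis fails to convey.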

4. Have you planned in advance which groups you are going to compare to which, for significant differences? Planned comparisons allow the researcher a greater rejection level α, or p-value, than do unplanned comparisons. This is because if you first look at your data and then choose your comparisons, you are introducing bias into your analysis, and the rejection level will have to be tightened to compensate for that [28]. Planning ahead also allows you clear-headed time to consider which comparisons you really need when your design includes many different groups. The more comparisons you make, the more the p-value will have to be adjusted against false positives, in effect making it harder for you to see a true positive. Therefore, it is important not to compare groups that are irrelevant to your experimental goals. This is discussed further on page 272 and in Example 6.
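One common (if conservative) way to tighten the rejection level for multiple comparisons is the Bonferroni correction, sketched below; the helper function is my own naming, not a method named in the text:

```python
from math import comb

def bonferroni_alpha(alpha, n_comparisons):
    """Per-comparison rejection level under a simple Bonferroni correction."""
    return alpha / n_comparisons

# Comparing all pairs among four groups means C(4, 2) = 6 comparisons,
# so each test must meet a stricter threshold than the overall 0.05.
n_pairs = comb(4, 2)
print(n_pairs)                          # 6 pairwise comparisons
print(bonferroni_alpha(0.05, n_pairs))  # roughly 0.0083 per comparison
```

Dropping irrelevant comparisons (say, planning only 3 of the 6 pairs) relaxes the per-comparison threshold, which is the practical payoff of deciding the comparisons in advance.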
