The RCT methodology was originally developed to establish the efficacy or effectiveness of a treatment, that is, to determine whether T is better than C. In recent years attention has also been paid to issues of clinical significance, as well as to noninferiority and equivalence. Misconceptions concerning statistical significance, clinical significance, equivalence, and noninferiority have unnecessarily confused the issue of the placebo control.
The basic requirements of an RCT still apply to equivalence and noninferiority trials: the necessity of an appropriate control group, randomization to T versus C groups, blinded assessment of response, and analysis by intention to treat. In the study population, there is an unknown effect size, delta (δ), that is zero when there is absolutely no differential response between T and C, is positive when T tends to be better than C, and is negative when C tends to be better than T. One way of stating the purpose of an RCT is that it is meant to estimate that unknown effect size (Borenstein 1994, 1997, 1998).
The usual two-tailed null hypothesis significance test seeks to prove beyond reasonable doubt that δ is not zero. In designing such a study, a value of δ representing the threshold of clinical significance is designated, say δ*. The test is then structured so that when δ = 0, the probability of a significant result is less than, say, 5% (the significance level), and when the magnitude of δ is greater than δ*, the probability of a significant result is greater than, say, 80% (adequate power). Given that when there is sufficient rationale and justification for proposing an RCT there is almost no realistic chance that the effect size is exactly zero, achieving statistical significance in an RCT is generally a matter of having a large enough sample size in a well-designed study with reliable outcome measures. To indicate the possible clinical significance of a statistically significant finding, the effect size and its confidence interval should be reported (as per CONSORT guidelines).
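The confidence-interval reporting recommended above can be sketched in a few lines. The following is a minimal illustration, not any specific trial's analysis: it computes a large-sample two-sided confidence interval for the raw difference in means (T minus C) from hypothetical summary statistics. The function name and all numbers are assumptions for illustration only.

```python
from statistics import NormalDist

def effect_size_ci(mean_t, mean_c, sd_t, sd_c, n_t, n_c, level=0.95):
    """Two-sided CI for the difference in means (T - C),
    large-sample z approximation with independent groups."""
    diff = mean_t - mean_c
    se = (sd_t**2 / n_t + sd_c**2 / n_c) ** 0.5   # standard error of the difference
    z = NormalDist().inv_cdf(0.5 + level / 2)     # e.g., 1.96 for a 95% CI
    return diff - z * se, diff + z * se

# Hypothetical trial: 100 subjects per arm, means 12 vs. 10, SD 5 in each arm.
lo, hi = effect_size_ci(12.0, 10.0, 5.0, 5.0, 100, 100)
# The whole interval lies above zero, so the result is statistically
# significant at the 5% level; whether it is clinically significant
# depends on where the interval falls relative to the threshold.
```

Reporting the interval (lo, hi) alongside the p-value lets the reader judge clinical as well as statistical significance.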
Another way of saying the same thing: with the study design described above, there is a better than 80% probability that a two-tailed 95% confidence interval for the unknown true effect size δ will not include δ = 0, whenever the true effect size is greater than the critical value δ*.
In contrast, to show that T is clinically superior to C, one needs to show that the entire confidence interval for that effect size is greater than δ*. To show that T and C are clinically equivalent, one needs to show that the confidence interval lies completely between −δ* and δ*. To show the noninferiority of T to C, one needs to show that the entire confidence interval is greater than −δ*.
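These three confidence-interval criteria can be expressed as a small decision rule. The sketch below is illustrative only (the function name and labels are assumptions); it takes the two-sided CI endpoints for δ (positive meaning T better than C, as defined above) and the clinical threshold δ*, and applies the criteria in order of strength, since clinical equivalence implies noninferiority.

```python
def classify(ci_low, ci_high, delta_star):
    """Interpret a two-sided CI for the effect size delta against the
    clinical-significance threshold delta_star (positive = T better than C)."""
    if ci_low > delta_star:
        # Entire CI above delta*: T clinically superior to C.
        return "clinically superior"
    if -delta_star < ci_low and ci_high < delta_star:
        # Entire CI strictly inside (-delta*, delta*): clinical equivalence.
        return "clinically equivalent"
    if ci_low > -delta_star:
        # Entire CI above -delta*: T noninferior to C.
        return "noninferior"
    return "inconclusive or inferior"
```

For example, with δ* = 0.5, a CI of (0.6, 1.4) shows superiority, (−0.2, 0.3) shows equivalence, and (−0.3, 0.8) shows only noninferiority, because the interval extends past δ* without lying entirely above it.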
One can always demonstrate either noninferiority or equivalence, simply by using unreliable outcome measures, allowing deviations from treatment and measurement protocols, and so on. That is, a badly conducted trial will produce an attenuated effect size between any two drugs, an effect size closer to zero, which can almost always be labeled a noninferior or equivalent result.
This issue is highly relevant to the valid interpretation of study results because of a common confusion between a result being non-statistically significant and two drugs showing equivalence. To report a non-statistically significant result is only to admit that the sample size was not large enough, the design not powerful enough, or the measures not reliable enough to demonstrate beyond reasonable doubt that δ ≠ 0. That is nowhere near the same thing as reporting a demonstration beyond reasonable doubt that −δ* < δ < δ* (i.e., equivalence). As the old saying goes: "Absence of proof is not proof of absence."
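A small numerical example, with entirely hypothetical figures, shows why nonsignificance does not imply equivalence: in an underpowered trial the confidence interval is so wide that it includes zero (no significance) while also spilling far beyond the equivalence margins, so neither conclusion is supported.

```python
from statistics import NormalDist

# Hypothetical underpowered trial: 10 subjects per arm,
# observed mean difference 1.5, SD 5 in each arm, delta* = 2.
diff, sd, n, delta_star = 1.5, 5.0, 10, 2.0
z = NormalDist().inv_cdf(0.975)
se = (2 * sd**2 / n) ** 0.5          # SE of the difference in means
lo, hi = diff - z * se, diff + z * se

nonsignificant = lo <= 0 <= hi                       # CI includes zero
equivalent = -delta_star < lo and hi < delta_star    # CI inside margins?
# nonsignificant is True, but equivalent is False: the wide CI
# covers zero AND extends past both equivalence margins.
```

The result is not statistically significant, yet equivalence has in no way been demonstrated; only a larger, better-powered study could distinguish the two.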
In particular, if one randomly assigned subjects in an RCT to one of two treatment groups (T1 or T2) or to a placebo control group (C) and found no statistically significant difference between T1 and T2 but found statistically significant differences both between T1 and C and between T2 and C, that tells nothing about the possible clinical equivalence of T1 and T2. All one would know is that both T1 and T2 were shown to be better than (not necessarily even clinically superior to) placebo; the sample was not large enough to detect whatever difference there might be between T1 and T2. Any conclusion comparing T1 and T2 would be exactly the same whether or not the placebo control group were included in the design. Yet many arguments for the use of a placebo control group inappropriately reflect an effort to use the placebo control as a decoy to interpret results comparing T1 and T2.
But finally, why would it be important to establish beyond reasonable doubt the clinical equivalence of two treatments, particularly when such a result can be obtained through poor study design (e.g., choice of measurement) and execution of the RCT? When this question is asked, whether of a drug company representative or of an academic researcher, a complete answer often contains an implicit reinterpretation of equivalence as superiority. For example: Two treatments might have equivalent effects in reducing symptoms, but one might have a better side-effect profile. Or the two treatments might have equivalent effects in terms of both symptom reduction and side effects, but one may be far less costly or have greater ease of use than the other. In all such cases, the goal of the RCT should be to establish the clinical preference for one treatment or the other over the control condition by using an outcome measure sensitive to the specific ways one drug might be clinically preferable to another. But that would then be not an equivalence study but rather the usual type of RCT, with a primary outcome reflecting the particular way in which T is hypothesized to be superior to C.