When did you last hear a speaker claim that there was no difference between two groups because the difference was "statistically non-significant"?
If your experience matches ours, there is a good chance that this happened at the last talk you attended. We hope that at least someone in the audience was baffled if, as is often the case, a plot or table showed that there actually was a difference. For several generations, researchers have been warned that a statistically non-significant result does not "prove" the null hypothesis (the hypothesis that there is no difference between groups, or no effect of a treatment on some measured outcome)1.
We have some suggestions to keep scientists from falling victim to these mistakes.
Let's be clear about what must stop: we should never conclude that there is "no difference" or "no association" just because a P value is larger than a threshold such as 0.05 or, equivalently, because a confidence interval includes zero. Nor should we conclude that two studies conflict because one had a statistically significant result and the other did not. These errors waste research efforts and misinform policy decisions.
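One reason a large P value cannot establish "no effect" is low statistical power. A minimal sketch, using hypothetical numbers (a true effect of 0.5 standard deviations and 20 participants per group, chosen for illustration and not taken from any study mentioned here), shows that a study of a perfectly real effect can still return "P > 0.05" most of the time:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical scenario: true effect of 0.5 standard deviations,
# n = 20 participants per group, two-sided test at alpha = 0.05.
d, n, z_crit = 0.5, 20, 1.96

# Approximate power of a two-sample z-test: under this alternative,
# the test statistic is roughly Normal(d * sqrt(n / 2), 1).
shift = d * sqrt(n / 2)
power = phi(shift - z_crit) + phi(-shift - z_crit)

print(f"power = {power:.2f}")  # roughly 0.35
print(f"chance of P > 0.05 despite a real effect: {1 - power:.0%}")
```

With only about 35% power, roughly two out of three such studies would be "statistically non-significant" even though the effect exists.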
For example, consider a series of analyses of unintended effects of anti-inflammatory drugs2. Because their results were statistically non-significant, one set of researchers concluded that exposure to the drugs was "not associated" with new-onset atrial fibrillation (the most common disturbance to heart rhythm) and that the results stood in contrast to those from an earlier study with a statistically significant outcome.
Now let's look at the actual data. The researchers describing their statistically non-significant results found a risk ratio of 1.2 (that is, a 20% greater risk in exposed patients relative to unexposed ones). They also found a 95% confidence interval that spanned everything from a trivial risk decrease of 3% to a considerable risk increase of 48% (P = 0.091; our calculation). The researchers from the earlier, statistically significant study had found the exact same risk ratio of 1.2. That study was simply more precise, with an interval spanning from 9% to 33% greater risk (P = 0.0003; our calculation). It is ludicrous to conclude that the statistically non-significant results showed "no association" when the interval estimate included serious risk increases; it is equally absurd to claim that these results were in contrast with the earlier results showing an identical observed effect. Yet these common practices show how reliance on thresholds of statistical significance can mislead us (see "Beware false conclusions").
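The two P values labelled "our calculation" can be reproduced from nothing but the published risk ratios and their 95% confidence intervals. A sketch, assuming (as is standard for risk ratios) approximate normality on the log scale, so that the standard error can be recovered from the interval's width:

```python
from math import erf, log, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_from_ci(rr, lo, hi, z95=1.959964):
    """Two-sided P value for the null 'risk ratio = 1', recovered
    from a 95% CI, assuming normality of log(rr)."""
    se = (log(hi) - log(lo)) / (2 * z95)  # standard error of log(rr)
    z = log(rr) / se                      # test statistic
    return 2 * (1 - phi(abs(z)))

# Newer study: RR = 1.2, 95% CI from a 3% decrease to a 48% increase.
print(round(p_from_ci(1.2, 0.97, 1.48), 3))  # 0.091
# Earlier study: RR = 1.2, 95% CI from a 9% to a 33% increase.
print(round(p_from_ci(1.2, 1.09, 1.33), 4))  # 0.0003
```

The identical point estimates with different interval widths make the point directly: the studies agree about the effect; only their precision differs.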