Statistical significance is one of the most widely used and probably most misunderstood statistical concepts. Many authors, reviewers, editors and readers take statistical significance as an indication of a real, clinically meaningful effect of exposure to some factor, and statistical non-significance as an indication that no such effect exists.

However, while statistical significance can be thought of as evidence of an effect (possibly irrelevant and not necessarily causal), statistical non-significance indicates nothing but absence of evidence.

The authors of a recent Dutch paper (1) in Nature Neuroscience came across several statements like:

> The percentage of neurons showing cue-related activity increased with training in the mutant mice (p<0.05), but not in the control mice (p>0.05)

and recognized this as a particularly common error in the neuroscience literature. They reviewed 513 behavioural, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience). Almost as many of the published articles presented findings based on erroneous statistical comparisons (79) as on correct ones (78).

Since a difference in statistical significance (one effect with p<0.05, the other with p>0.05) requires a smaller difference between the effects, and smaller sample sizes, than a statistically significant effect difference does, the consequence of this statistical mistake is that the uncertainty of published neuroscience findings is underestimated in a large number of scientific publications. The underestimation may be substantial.
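A minimal numerical sketch makes the point concrete. The effect estimates and standard errors below are invented for illustration (they are not taken from any of the reviewed articles): one effect is just significant, the other just non-significant, yet a direct test of the difference between the two effects shows no evidence of a difference at all.

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical effect estimates and standard errors (illustrative only).
effect_a, se_a = 0.20, 0.10   # e.g. "mutant mice"
effect_b, se_b = 0.10, 0.10   # e.g. "control mice"

p_a = two_sided_p(effect_a / se_a)   # ~0.046: "significant"
p_b = two_sided_p(effect_b / se_b)   # ~0.32:  "not significant"

# The correct comparison: test the difference between the two effects.
diff = effect_a - effect_b
se_diff = sqrt(se_a**2 + se_b**2)    # SEs combine in quadrature
p_diff = two_sided_p(diff / se_diff) # ~0.48: no evidence of a difference

print(f"p_a={p_a:.3f}, p_b={p_b:.3f}, p_diff={p_diff:.3f}")
```

Reporting "significant in one group but not the other" here would suggest a group difference that the data do not support.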

The review ends with a call for greater awareness of the phenomenon among researchers and reviewers, and with the suggestion to present uncertainty with confidence intervals instead of p-values.

**References**

1. Nieuwenhuis S, Forstmann BU, Wagenmakers EJ. Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci 2011;14:1105-1107.