Analysis unit errors are not only common in clinical research but also in experimental research. Hurlbert (1) coined the term pseudoreplication in 1984, when he described the results of a review of 176 papers published since 1960. He defined pseudoreplication as the use of statistical inference to test for treatment effects where treatments are either not replicated or replicates are not statistically independent. Pseudoreplication occurred in as many as 48% of all studies that used statistical tests.

The problem with pseudoreplication arises when correlated observations are analysed as if they were independent. The number of degrees of freedom is then overestimated and the variance underestimated. The consequence is that statistical precision is overestimated and the uncertainty underestimated.
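
As an illustration of this mechanism, the following sketch, written in Python, simulates an experiment with no true treatment effect in which several correlated measurements (for example cells) are taken from each animal. All numbers (8 animals per group, 50 cells per animal, equal between- and within-animal standard deviations) are arbitrary assumptions chosen for the illustration and do not come from the studies cited here. Treating every cell as an independent observation inflates the false positive rate far above the nominal 5%, whereas analysing one summary value per animal keeps it close to 5%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_animals = 8        # true experimental units per group (assumed)
n_cells = 50         # correlated sub-samples per animal (assumed)
sigma_animal = 1.0   # between-animal standard deviation (assumed)
sigma_cell = 1.0     # within-animal standard deviation (assumed)
n_sim = 2000
alpha = 0.05

false_pos_naive = 0
false_pos_correct = 0

for _ in range(n_sim):
    # Two groups with no true treatment effect
    groups = []
    for _group in range(2):
        animal_means = rng.normal(0.0, sigma_animal, n_animals)
        cells = animal_means[:, None] + rng.normal(0.0, sigma_cell, (n_animals, n_cells))
        groups.append(cells)

    # Pseudoreplicated analysis: every cell treated as an independent observation
    _, p_naive = stats.ttest_ind(groups[0].ravel(), groups[1].ravel())

    # Correct analysis: one summary value (the mean) per animal
    _, p_correct = stats.ttest_ind(groups[0].mean(axis=1), groups[1].mean(axis=1))

    false_pos_naive += p_naive < alpha
    false_pos_correct += p_correct < alpha

print(f"Type I error with cells as analysis units:   {false_pos_naive / n_sim:.2f}")
print(f"Type I error with animals as analysis units: {false_pos_correct / n_sim:.2f}")

In this sketch the naive test rejects the null hypothesis in a large fraction of simulations even though no effect exists, because the between-animal variation is ignored and the degrees of freedom are counted per cell rather than per animal.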

The term pseudoreplication has since been used frequently to describe analysis unit errors in biomedical laboratory experiments. Lazic (2), for example, reviewed one issue of Nature Neuroscience (Volume 11, Number 8, 2008).

Of the 19 published articles, 17 included statistical tests. However, only 3 of these presented sufficient information to assess whether pseudoreplication had taken place, and 2 of the 3 contained pseudoreplication. Of the 14 articles that used statistical tests but did not present sufficient information, 5 (36%) were suspected of pseudoreplication, although this could not be determined with certainty.

Inadequate description of data and results thus seems to be a common reporting problem in such studies.


1. Hurlbert SH. Pseudoreplication and the design of ecological field experiments. Ecol Monogr 1984;54:187-211.

2. Lazic SE. The problem of pseudoreplication in neuroscientific studies: is it affecting your analysis? BMC Neuroscience 2010;11:5.