Planning the size of a study is typically premised on the idea of statistical power. Power indicates the probability of finding an effect to be “statistically significant.” This reliance on statistical significance, and the related statistic, the P-value, is a drawback. Significance testing reduces all study outcomes to one of two possibilities, statistically significant or not. This degradation of information commonly leads to misinterpretations of results.
In an article published in Epidemiology, epidemiologists Kenneth Rothman of RTI Health Solutions and Sander Greenland of the University of California, Los Angeles address the pitfalls of using statistical power and suggest how to plan study size based on precision goals instead. Rothman says that “as we move away from using significance testing for interpretation of study results, we should likewise be moving away from planning a study’s size on statistical power.”
Moving Toward Precision
The article encourages researchers to think of a study as a measurement exercise, with the study size being a determinant of the measurement precision. Precision is generally expressed using a range of values called a confidence interval.
Authors Rothman and Greenland present formulas for planning the size of a study to yield a desired confidence-interval width for the study result. They also include formulas for choosing study size based on the probability that the upper confidence limit will fall below a specified level of concern.
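To illustrate the general idea (not the authors’ exact formulas), the sketch below uses the standard Wald normal approximation for a proportion to answer both planning questions: how many subjects are needed for a confidence interval of a given half-width, and how many are needed for the upper confidence limit to fall below a level of concern with a chosen probability. The anticipated proportion, half-width, and level of concern are all assumed inputs the planner must supply.

```python
from math import ceil
from statistics import NormalDist

def n_for_ci_width(p, half_width, conf=0.95):
    """Subjects needed so the two-sided Wald CI for a proportion
    has the given half-width, at an anticipated proportion p."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # e.g. 1.96 for 95%
    return ceil((z / half_width) ** 2 * p * (1 - p))

def n_for_upper_limit(p, limit, conf=0.95, assurance=0.90):
    """Subjects needed so the upper confidence limit falls below
    `limit` with probability `assurance`, when the true proportion is p.
    Derived from P(p_hat + z_a*se < limit) >= assurance under normality."""
    z_a = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    z_g = NormalDist().inv_cdf(assurance)
    return ceil((z_a + z_g) ** 2 * p * (1 - p) / (limit - p) ** 2)

# A 95% CI with half-width 0.05 at an anticipated p of 0.5:
print(n_for_ci_width(0.5, 0.05))          # 385
# Keep the upper 95% limit below 0.05 with 90% probability, true p = 0.02:
print(n_for_upper_limit(0.02, 0.05))      # 229
```

Note how neither calculation references a null hypothesis or a power target; the study size follows directly from the precision goal.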
In 2016, the American Statistical Association (ASA) issued a statement aimed at providing guidance on the conduct and interpretation of quantitative science. In it, they specifically address statistical significance and P-values. “The p-value was never intended to be a substitute for scientific reasoning,” said Ron Wasserstein, the ASA’s executive director. The ASA statement urges statisticians and others to adopt methods that “emphasize estimation over testing such as confidence, credibility, or prediction intervals; Bayesian methods; alternative measures of evidence such as likelihood ratios or Bayes factors; and other approaches such as decision-theoretic modeling and false discovery rates.”
The formulas and discussion offered in the Epidemiology article may help researchers along the road toward improving study design and interpreting study results. You can find the article here: Rothman KJ, Greenland S. Planning study size based on precision rather than power. Epidemiology. 2018 Sep;29(5):599-603.
American Statistical Association. Releases Statement on Statistical Significance and P-Values: Provides Principles to Improve the Conduct and Interpretation of Quantitative Science. March 7, 2016.
More about Dr. Rothman
Kenneth Rothman, DrPH, is a Distinguished Fellow and Vice President of Epidemiology Research at RTI Health Solutions. See more of his work in his publications and professional achievements and activities.