Editor's Note
December 2016

Do Not Over (P) Value Your Research Article

JAMA Cardiol. 2016;1(9):1055. doi:10.1001/jamacardio.2016.3827

The P value is by far the most prevalent statistic in the medical literature, but it is also one that attracts considerable controversy. Recently, the American Statistical Association1 released a policy statement on P values, noting that misunderstanding and misuse of P values contribute substantially to the common problem of scientific conclusions that fail to be reproducible. Furthermore, reliance on P values may distract from the good scientific principles that are needed for high-quality research. In this issue of JAMA Cardiology, Mark et al2 delve deeper into the history and interpretation of the P value. Herein, we take the opportunity to state a few principles to help guide authors in the use and reporting of P values in the journal.

When the limitations surrounding P values are emphasized, a common question is, “What should we do instead?” Ron Wasserstein of the American Statistical Association explained: “In the post p<0.05 era, scientific argumentation is not based on whether a p-value is small enough or not. Attention is paid to effect sizes and confidence intervals. Evidence is thought of as being continuous rather than some sort of dichotomy…. Instead, journals [should evaluate] papers based on clear and detailed description of the study design, execution, and analysis, having conclusions that are based on valid statistical interpretations and scientific arguments, and reported transparently and thoroughly enough to be rigorously scrutinized by others.”3

We suggest that researchers submitting manuscripts to JAMA Cardiology also consider the following:

  1. Data that are descriptive of the sample (ie, indicating imbalances between observed groups but not making inference to a population) should not be associated with P values. Appropriate language, in this case, would describe numerical differences and sample summary statistics and focus on differences of clinical importance.

  2. In addition to summary statistics and confidence intervals, standardized differences (rather than P values) are a preferred way to quantify imbalances between groups (see the sketch following this list).

  3. P values are most meaningful in the context of clear, a priori hypotheses that support the main conclusions of a manuscript.

  4. Reporting stand-alone P values is discouraged, and preference should be given to presentation and interpretation of effect sizes and their uncertainty (confidence intervals) in the scientific context and in light of other evidence. Crossing a threshold (eg, P < .05) by itself constitutes only weak evidence.

  5. Researchers should define and interpret effect measures that are clinically relevant. For example, clinical importance is often difficult to establish on the odds ratio scale but is clearer on the risk ratio or absolute risk difference scale, as the sketch below illustrates.

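To make points 2, 4, and 5 concrete, here is a minimal sketch in Python using only the standard library. All counts and summary statistics are invented for illustration (they are not drawn from any study), and the confidence intervals use the conventional large-sample Wald approximations.

```python
import math

# Hypothetical two-arm study (all numbers invented for illustration).
n_a, n_b = 400, 400

# --- Point 2: standardized difference for a baseline covariate ---
# (mean_a - mean_b) / pooled SD; values below ~0.10 are conventionally
# taken to indicate acceptable balance between groups.
mean_age_a, sd_age_a = 63.4, 10.2
mean_age_b, sd_age_b = 64.1, 10.8
pooled_sd = math.sqrt((sd_age_a**2 + sd_age_b**2) / 2)
std_diff = (mean_age_a - mean_age_b) / pooled_sd
print(f"Standardized difference in age: {std_diff:.3f}")

# --- Points 4 and 5: effect sizes with 95% confidence intervals ---
events_a, events_b = 120, 96                # binary outcome events per arm
risk_a, risk_b = events_a / n_a, events_b / n_b
z = 1.96                                    # approximate 95% normal quantile

# Absolute risk difference with a Wald 95% CI.
rd = risk_a - risk_b
se_rd = math.sqrt(risk_a * (1 - risk_a) / n_a + risk_b * (1 - risk_b) / n_b)
print(f"Risk difference: {rd:.3f} "
      f"(95% CI, {rd - z * se_rd:.3f} to {rd + z * se_rd:.3f})")

# Risk ratio with a 95% CI computed on the log scale.
rr = risk_a / risk_b
se_log_rr = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
lo = math.exp(math.log(rr) - z * se_log_rr)
hi = math.exp(math.log(rr) + z * se_log_rr)
print(f"Risk ratio: {rr:.2f} (95% CI, {lo:.2f} to {hi:.2f})")

# Odds ratio, for comparison: with a common outcome (~25%-30% here) the
# odds ratio lies farther from 1 than the risk ratio, which is why
# clinical importance is often easier to judge on the risk scale.
odds_ratio = (risk_a / (1 - risk_a)) / (risk_b / (1 - risk_b))
print(f"Odds ratio: {odds_ratio:.2f} vs risk ratio: {rr:.2f}")
```

Reporting the estimate together with its interval, as in the output above, keeps the evidence continuous in the sense of the quote above, rather than collapsing it into a significant/nonsignificant dichotomy.
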
In summary, following Mark et al,2 we encourage researchers to focus on interpreting clinical research data in terms of the magnitude and precision of the treatment “effect,” using the P value as only one of many complementary tools in the statistical toolbox.

Article Information

Conflict of Interest Disclosures: Both authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and none were reported.

References
1.
Wasserstein  RL, Lazar  NA. The ASA’s statement on P-values: context, process, and purpose. http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108. Published 2016. Accessed August 19, 2016.
2.
Mark DB, Lee KL, Harrell FE Jr. Understanding the role of P values and hypothesis tests in clinical research [published online October 12, 2016]. JAMA Cardiol. doi:10.1001/jamacardio.2016.3312
3.
McCook  A. We’re using a common statistical test all wrong: statisticians want to fix that. http://retractionwatch.com/2016/03/07/were-using-a-common-statistical-test-all-wrong-statisticians-want-to-fix-that/. Accessed August 19, 2016.