The P value is by far the most prevalent statistic in the medical literature, but it is also one that attracts considerable controversy. Recently, the American Statistical Association1 released a policy statement on P values, noting that misunderstanding and misuse of P values are important contributors to the common problem of scientific conclusions that fail to be reproducible. Furthermore, reliance on P values may distract from the good scientific principles needed for high-quality research. In this issue of JAMA Cardiology, Mark et al2 delve deeper into the history and interpretation of the P value. Herein, we take the opportunity to state a few principles to help guide authors in the use and reporting of P values in the journal.
When the limitations surrounding P values are emphasized, a common question is, “What should we do instead?” Ron Wasserstein of the American Statistical Association explained: “In the post p<0.05 era, scientific argumentation is not based on whether a p-value is small enough or not. Attention is paid to effect sizes and confidence intervals. Evidence is thought of as being continuous rather than some sort of dichotomy…. Instead, journals [should evaluate] papers based on clear and detailed description of the study design, execution, and analysis, having conclusions that are based on valid statistical interpretations and scientific arguments, and reported transparently and thoroughly enough to be rigorously scrutinized by others.”3
We suggest that researchers submitting manuscripts to JAMA Cardiology should also consider the following:
Data that are descriptive of the sample (ie, indicating imbalances between observed groups but not making inference to a population) should not be associated with P values. Appropriate language, in this case, would describe numerical differences and sample summary statistics and focus on differences of clinical importance.
In addition to summary statistics and confidence intervals, standardized differences (rather than P values) are a preferred way to exhibit imbalances between groups.
P values are most meaningful in the context of clear, a priori hypotheses that support the main conclusions of a manuscript.
Reporting stand-alone P values is discouraged, and preference should be given to presentation and interpretation of effect sizes and their uncertainty (confidence intervals) in the scientific context and in light of other evidence. Crossing a threshold (eg, P < .05) by itself constitutes only weak evidence.
Researchers should define and interpret effect measures that are clinically relevant. For example, clinical importance is often difficult to establish on the odds ratio scale but is clearer on the risk ratio or absolute risk difference scale.
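To make the standardized-difference recommendation concrete, here is a minimal sketch in Python. The formula shown (difference in means divided by the pooled standard deviation) is the one commonly used in the propensity-score literature for continuous covariates; the group means and SDs below are hypothetical values invented for illustration.

```python
import math

def standardized_difference(mean1, sd1, mean2, sd2):
    """Standardized difference for a continuous covariate between two groups.

    Uses d = (mean1 - mean2) / sqrt((sd1^2 + sd2^2) / 2).
    Unlike a P value, d does not shrink or grow with sample size;
    |d| > 0.1 is often taken to flag a meaningful imbalance.
    """
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)
    return (mean1 - mean2) / pooled_sd

# Hypothetical baseline age (years) in two treatment groups:
d = standardized_difference(64.2, 10.5, 62.8, 11.1)
print(round(d, 3))  # 0.13
```

Because the standardized difference is independent of sample size, it describes the imbalance itself, whereas a baseline-table P value mostly reflects how large the study is.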
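The point about effect scales can be illustrated with a short numerical sketch (the outcome proportions are hypothetical): for rare outcomes the odds ratio approximates the risk ratio, but for common outcomes it can substantially overstate the relative risk, which is one reason clinical importance is easier to judge on the risk ratio or absolute risk difference scale.

```python
def odds_ratio(p1, p0):
    """Odds ratio comparing outcome probability p1 vs p0."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

def risk_ratio(p1, p0):
    """Risk (probability) ratio comparing p1 vs p0."""
    return p1 / p0

def risk_difference(p1, p0):
    """Absolute risk difference, the scale closest to clinical impact."""
    return p1 - p0

# Rare outcome (1% vs 2%): OR and RR nearly agree.
print(round(odds_ratio(0.02, 0.01), 2), risk_ratio(0.02, 0.01))   # 2.02 2.0

# Common outcome (40% vs 60%): OR = 2.25 overstates RR = 1.5.
print(odds_ratio(0.60, 0.40), risk_ratio(0.60, 0.40))             # 2.25 1.5
print(risk_difference(0.60, 0.40))                                # 0.2
```

The absolute risk difference (here 0.2, or 20 percentage points) also maps directly onto quantities such as the number needed to treat, which readers can weigh clinically.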
In summary, following Mark et al,2 we encourage researchers to focus on interpreting clinical research data in terms of the magnitude and precision of the treatment “effect,” using the P value as only one of many complementary tools in the statistical toolbox.
Conflict of Interest Disclosures: Both authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and none were reported.
Thomas LE, Pencina MJ. Do Not Over (P) Value Your Research Article. JAMA Cardiol. 2016;1(9):1055. doi:10.1001/jamacardio.2016.3827