The widespread misinterpretation of <italic>p</italic>-values as error probabilities
The anonymous mixing of Fisherian (p-values) and Neyman–Pearsonian (α levels) ideas about testing, distilled in the customary but misleading p < α criterion of statistical significance, has led researchers in the social and management sciences (and elsewhere) to commonly misinterpret the p-value as a 'data-adjusted' Type I error rate. Evidence substantiating this claim is provided from a number of fronts, including comments by statisticians, articles judging the value of significance testing, textbooks, surveys of scholars, and the statistical reporting behaviours of applied researchers. That many investigators do not know the difference between p's and α's indicates much bewilderment over what those most ardently sought research outcomes, statistically significant results, actually mean. Statisticians can play a leading role in clearing up this confusion. A good starting point would be to abolish the p < α criterion of statistical significance.
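A minimal simulation sketch (not from the article) of the distinction the abstract draws: α is a pre-specified long-run Type I error rate that holds across repeated experiments, whereas the p-value is a data-dependent statistic that varies from study to study (uniformly under a true null), so an individual p-value of, say, 0.03 is not the probability that a Type I error has been made. Sample sizes and the number of simulations below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05            # Neyman-Pearson: fixed, pre-specified Type I error rate
n_sims, n = 10_000, 30  # number of simulated studies, observations per group

p_values = np.empty(n_sims)
for i in range(n_sims):
    # Both samples come from the same distribution: the null hypothesis is true.
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=0.0, scale=1.0, size=n)
    p_values[i] = ttest_ind(a, b).pvalue

# Long-run behaviour: rejecting whenever p <= alpha produces a Type I error
# rate close to alpha -- the property that alpha, not any single p, guarantees.
print(f"Rejection rate under the null: {np.mean(p_values <= alpha):.3f}")

# Single-study behaviour: individual p-values scatter widely under the null,
# so a particular observed p is not a 'data-adjusted' error probability.
print(f"Example p-values from five null studies: {np.round(p_values[:5], 3)}")
```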
Year of publication: 2011
Authors: Hubbard, Raymond
Published in: Journal of Applied Statistics. - Taylor & Francis Journals, ISSN 0266-4763. - Vol. 38.2011, 11, p. 2617-2626
Publisher: Taylor & Francis Journals
Similar items by person
- Hubbard, Raymond, (2017)
- Hubbard, Raymond T., (1988)
- Replication research's disturbing trend / Evanschitzky, Heiner, (2007)