

Article Id: WHEBN0000160995

Title: Statistical significance  
Author: World Heritage Encyclopedia
Language: English
Subject: Stephen Ziliak, Volcano plot (statistics), P-value, Statistical hypothesis testing, Statistics
Collection: Hypothesis Testing
Publisher: World Heritage Encyclopedia

Statistical significance

In statistics, statistical significance (or a statistically significant result) is attained when a p-value is less than the significance level.[1][2][3][4][5][6][7] The p-value is the probability of obtaining results at least as extreme as those observed, given that the null hypothesis is true, whereas the significance level, or alpha (α) level, is the probability of rejecting the null hypothesis given that it is true.[8] As a matter of good scientific practice, the significance level is chosen before data collection and is usually set to 0.05 (5%).[9] Other significance levels (e.g., 0.01) may be used, depending on the field of study.[10]
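As an illustrative sketch, this decision rule can be written in a few lines of Python. The test statistic here is hypothetical and assumed to be standard-normal under the null hypothesis, in which case the two-sided p-value follows from the complementary error function:

```python
import math

def two_sided_p_value(z):
    """Two-sided p-value for a standard-normal test statistic z:
    the probability, under the null hypothesis, of a result at
    least as extreme as the one observed."""
    return math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05             # significance level, fixed before data collection
z = 2.3                  # hypothetical observed test statistic
p = two_sided_p_value(z)
print(round(p, 4), p < alpha)   # p ≈ 0.0214, so significant at the 5% level
```

The comparison `p < alpha` is the entire significance decision; everything before it is just computing the p-value.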

Statistical significance is fundamental to statistical hypothesis testing.[11][12] In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.[13][14] But if the p-value is less than the significance level (e.g., p < 0.05), then an investigator may conclude that the observed effect actually reflects the characteristics of the population rather than just sampling error.[11] Investigators may then report that the result attains statistical significance, thereby rejecting the null hypothesis.[15]
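The role of sampling error can be made concrete with a small simulation. Under assumptions chosen purely for illustration (a normal population whose mean really does satisfy the null hypothesis, known standard deviation, and the usual two-sided 5% cutoff), the test rejects the true null hypothesis in roughly 5% of repeated samples:

```python
import math
import random

random.seed(1)
n, sigma = 30, 1.0
crit = 1.96 * sigma / math.sqrt(n)   # two-sided 5% cutoff for the sample mean

trials = 10_000
false_rejections = 0
for _ in range(trials):
    # draw a sample from a population where the null hypothesis is true
    sample_mean = sum(random.gauss(0, sigma) for _ in range(n)) / n
    if abs(sample_mean) > crit:
        false_rejections += 1

# close to 0.05: every rejection here is due to sampling error alone
print(false_rejections / trials)
```

Each rejection in this simulation is a type I error, which is why the long-run rejection rate matches the chosen α.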

The present-day concept of statistical significance originated with Ronald Fisher when he developed statistical hypothesis testing based on p-values in the early 20th century.[16][17][18] It was Jerzy Neyman and Egon Pearson who later recommended that the significance level be set ahead of time, prior to any data collection.[19][20]

The term significance does not imply importance, and statistical significance is not the same as research, theoretical, or practical significance.[11][12][21] For example, the term clinical significance refers to the practical importance of a treatment effect.


  • History
  • Role in statistical hypothesis testing
  • Stringent significance thresholds in specific fields
  • Effect size
  • See also
  • References
  • Further reading
  • External links


History

The concept of statistical significance originated with Ronald Fisher when he developed statistical hypothesis testing, which he described as "tests of significance", in his 1925 publication Statistical Methods for Research Workers.[18][16][17] Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level for rejecting the null hypothesis.[19] In their 1933 paper, Jerzy Neyman and Egon Pearson recommended that the significance level (e.g., 0.05), which they called α, be set ahead of time, prior to any data collection.[19][20]

Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed, and in his 1956 publication Statistical Methods and Scientific Inference he recommended that significance levels be set according to specific circumstances.[19]

Role in statistical hypothesis testing

Figure: In a two-tailed test, the rejection region for a significance level of α = 0.05 is split between the two tails of the sampling distribution and makes up 5% of the area under the curve.

Statistical significance plays a pivotal role in statistical hypothesis testing, where it is used to determine whether a null hypothesis should be rejected or retained. A null hypothesis is the general or default statement that nothing happened or changed.[22] For a null hypothesis to be rejected as false, the result has to be identified as being statistically significant, i.e. unlikely to have occurred due to sampling error alone.

To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect at least as extreme as the one observed, given that the null hypothesis is true.[7] The null hypothesis is rejected if the p-value is less than the significance (α) level. The α level is the probability of rejecting the null hypothesis given that it is true (a type I error) and is most often set at 0.05 (5%). If the α level is 0.05, the conditional probability of a type I error, given that the null hypothesis is true, is 5%.[23] A statistically significant result is then one in which the observed p-value is less than the α level, formally written as p < 0.05.[24]
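The calculation can be sketched end to end with made-up data. This example tests whether a population mean equals a hypothesized value, using a large-sample z approximation for simplicity (in practice, a t-test would be used for a sample this small):

```python
import math
from statistics import mean, stdev

def one_sample_z_pvalue(sample, mu0):
    """Approximate two-sided p-value for H0: population mean == mu0,
    using a large-sample z approximation."""
    n = len(sample)
    z = (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

data = [5.1, 4.9, 5.3, 5.2, 5.0, 5.4, 5.1, 5.2]   # hypothetical measurements
p = one_sample_z_pvalue(data, mu0=5.0)
print(p < 0.05)   # True: reject H0 at the 5% level
```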

If the α level is set at 0.05, it means that the rejection region comprises 5% of the sampling distribution.[25] This 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution. One-tailed tests are more powerful than two-tailed tests, as the null hypothesis can be rejected with a less extreme result. However, using a one-tailed test requires that the research question specify a direction, such as whether a group of objects is heavier or whether students perform better on an assessment.[26]
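The power difference can be seen numerically: for the same hypothetical standard-normal test statistic, the one-tailed p-value is exactly half the two-tailed one, so a moderately extreme result can clear the 5% threshold in a one-tailed test while failing it in a two-tailed test.

```python
import math

def p_one_tailed(z):
    # tail probability in the predicted direction only
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_two_tailed(z):
    # tail probability in either direction
    return math.erfc(abs(z) / math.sqrt(2))

z = 1.8   # hypothetical test statistic
print(round(p_one_tailed(z), 3))   # 0.036 — significant at α = 0.05
print(round(p_two_tailed(z), 3))   # 0.072 — not significant at α = 0.05
```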

Stringent significance thresholds in specific fields

In specific fields such as particle physics and manufacturing, statistical significance is often expressed in multiples of the standard deviation or sigma (σ) of a normal distribution, with significance thresholds set at a much stricter level (e.g. 5σ).[27][28] For instance, the certainty of the Higgs boson's existence was based on the 5σ criterion, which corresponds to a p-value of about 1 in 3.5 million.[28][29]
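The correspondence between sigma thresholds and p-values can be checked directly. This sketch uses the one-tailed convention commonly cited for the 5σ figure:

```python
import math

def sigma_to_p(n_sigma):
    # one-tailed probability of exceeding n_sigma standard deviations
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p = sigma_to_p(5)
print(p)         # ≈ 2.87e-7
print(1 / p)     # ≈ 3.5 million, i.e. about 1 in 3.5 million
```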

In other fields of scientific research, such as genome-wide association studies, significance levels as low as 5×10⁻⁸ are not uncommon.[30][31]

Effect size

Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive[32] and not replicable.[33] To gauge the research significance of their result, researchers are therefore encouraged to always report an effect size along with p-values. An effect size measure quantifies the strength of an effect, such as the distance between two means in units of standard deviation (cf. Cohen's d), the correlation between two variables or its square, and other measures.[34]
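A minimal sketch of one such measure, Cohen's d, with hypothetical scores for two groups (the common pooled-standard-deviation form is assumed):

```python
import math
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: the distance between two means in units of the
    pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled

treated = [23, 25, 28, 30, 27, 26]   # hypothetical treatment-group scores
control = [20, 22, 24, 23, 21, 22]   # hypothetical control-group scores
print(round(cohens_d(treated, control), 2))   # 2.26 — a large effect
```

Unlike a p-value, this quantity does not shrink toward zero as the sample size grows; it estimates how big the effect is, not how detectable it is.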

See also

References


  1. ^ Redmond, Carol; Colton, Theodore (2001). "Clinical significance versus statistical significance". Biostatistics in Clinical Trials. Wiley Reference Series in Biostatistics (3rd ed.). West Sussex, United Kingdom: John Wiley & Sons Ltd. pp. 35–36.  
  2. ^ Cumming, Geoff (2012). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York, USA: Routledge. pp. 27–28. 
  3. ^ Krzywinski, Martin; Altman, Naomi (30 October 2013). "Points of significance: Significance, P values and t-tests". Nature Methods (Nature Publishing Group) 10 (11): 1041–1042.  
  4. ^ Sham, Pak C.; Purcell, Shaun M (17 April 2014). "Statistical power and significance testing in large-scale genetic studies". Nature Reviews Genetics (Nature Publishing Group) 15 (5): 335–346.  
  5. ^ Johnson, Valen E. (October 9, 2013). "Revised standards for statistical evidence". Proceedings of the National Academy of Sciences (National Academies of Science).  
  6. ^ Altman, Douglas G. (1999). Practical Statistics for Medical Research. New York, USA: Chapman & Hall/CRC. p. 167.  
  7. ^ a b Devore, Jay L. (2011). Probability and Statistics for Engineering and the Sciences (8th ed.). Boston, MA: Cengage Learning. pp. 300–344.  
  8. ^ Schlotzhauer, Sandra (2007). Elementary Statistics Using JMP (SAS Press) (PAP/CDR ed.). Cary, NC: SAS Institute. pp. 166–169.  
  9. ^ Craparo, Robert M. (2007). "Significance level". In Salkind, Neil J. Encyclopedia of Measurement and Statistics 3. Thousand Oaks, CA: SAGE Publications. pp. 889–891.  
  10. ^ Sproull, Natalie L. (2002). "Hypothesis testing". Handbook of Research Methods: A Guide for Practitioners and Students in the Social Science (2nd ed.). Lanham, MD: Scarecrow Press, Inc. pp. 49–64.  
  11. ^ a b c Sirkin, R. Mark (2005). "Two-sample t tests". Statistics for the Social Sciences (3rd ed.). Thousand Oaks, CA: SAGE Publications, Inc. pp. 271–316.  
  12. ^ a b Borror, Connie M. (2009). "Statistical decision making". The Certified Quality Engineer Handbook (3rd ed.). Milwaukee, WI: ASQ Quality Press. pp. 418–472.  
  13. ^ Babbie, Earl R. (2013). "The logic of sampling". The Practice of Social Research (13th ed.). Belmont, CA: Cengage Learning. pp. 185–226.  
  14. ^ Faherty, Vincent (2008). "Probability and statistical significance". Compassionate Statistics: Applied Quantitative Analysis for Social Services (With exercises and instructions in SPSS) (1st ed.). Thousand Oaks, CA: SAGE Publications, Inc. pp. 127–138.  
  15. ^ McKillup, Steve (2006). "Probability helps you make a decision about your results". Statistics Explained: An Introductory Guide for Life Scientists (1st ed.). Cambridge, United Kingdom: Cambridge University Press. pp. 44–56.  
  16. ^ a b Cumming, Geoff (2011). "From null hypothesis significance to testing effect sizes". Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Multivariate Applications Series. East Sussex, United Kingdom: Routledge. pp. 21–52.  
  17. ^ a b Poletiek, Fenna H. (2001). "Formal theories of testing". Hypothesis-testing Behaviour. Essays in Cognitive Psychology (1st ed.). East Sussex, United Kingdom: Psychology Press. pp. 29–48.  
  18. ^ a b Fisher, Ronald A. (1925). Statistical Methods for Research Workers. Edinburgh, UK: Oliver and Boyd. p. 43.  
  19. ^ a b c d Quinn, Geoffrey R.; Keough, Michael J. (2002). Experimental Design and Data Analysis for Biologists (1st ed.). Cambridge, UK: Cambridge University Press. pp. 46–69.  
  20. ^ a b Neyman, J.; Pearson, E.S. (1933). "The testing of statistical hypotheses in relation to probabilities a priori". Mathematical Proceedings of the Cambridge Philosophical Society 29: 492–510.  
  21. ^ Myers, Jerome L.; Well, Arnold D.; Lorch Jr, Robert F. (2010). "The t distribution and its applications". Research Design and Statistical Analysis: Third Edition (3rd ed.). New York, NY: Routledge. pp. 124–153.  
  22. ^ Meier, Kenneth J.; Brudney, Jeffrey L.; Bohte, John (2011). Applied Statistics for Public and Nonprofit Administration (3rd ed.). Boston, MA: Cengage Learning. pp. 189–209.  
  23. ^ Healy, Joseph F. (2009). The Essentials of Statistics: A Tool for Social Research (2nd ed.). Belmont, CA: Cengage Learning. pp. 177–205.  
  24. ^ McKillup, Steve (2006). Statistics Explained: An Introductory Guide for Life Scientists (1st ed.). Cambridge, UK: Cambridge University Press. pp. 32–38.  
  25. ^ Health, David (1995). An Introduction To Experimental Design And Statistics For Biology (1st ed.). Boston, MA: CRC press. pp. 123–154.  
  26. ^ Myers, Jerome L.; Well, Arnold D.; Lorch, Jr., Robert F. (2010). "Developing fundamentals of hypothesis testing using the binomial distribution". Research design and statistical analysis (3rd ed.). New York, NY: Routledge. pp. 65–90.  
  27. ^ Vaughan, Simon (2013). Scientific Inference: Learning from Data (1st ed.). Cambridge, UK: Cambridge University Press. pp. 146–152.  
  28. ^ a b Bracken, Michael B. (2013). Risk, Chance, and Causation: Investigating the Origins and Treatment of Disease (1st ed.). New Haven, CT: Yale University Press. pp. 260–276.  
  29. ^ Franklin, Allan (2013). "Prologue: The rise of the sigmas". Shifting Standards: Experiments in Particle Physics in the Twentieth Century (1st ed.). Pittsburgh, PA: University of Pittsburgh Press. pp. Ii–Iii.  
  30. ^ Clarke, GM; Anderson, CA; Pettersson, FH; Cardon, LR; Morris, AP; Zondervan, KT (February 6, 2011). "Basic statistical analysis in genetic case-control studies". Nature Protocols 6 (2): 121–33.  
  31. ^ Barsh, GS; Copenhaver, GP; Gibson, G; Williams, SM (July 5, 2012). "Guidelines for Genome-Wide Association Studies". PLoS Genetics 8 (7): e1002812.  
  32. ^ Carver, Ronald P. (1978). "The Case Against Statistical Significance Testing". Harvard Educational Review 48: 378–399. 
  33. ^ Ioannidis, John P. A. (2005). "Why most published research findings are false". PLoS Medicine 2: e124. 
  34. ^ Pedhazur, Elazar J.; Schmelkin, Liora P. (1991). Measurement, Design, and Analysis: An Integrated Approach (Student ed.). New York, NY: Psychology Press. pp. 180–210.  

Further reading

  • Ziliak, Stephen; McCloskey, Deirdre (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor: University of Michigan Press. ISBN 978-0-472-07007-7.
  • Thompson, Bruce (2004). "The "significance" crisis in psychology and education". Journal of Socio-Economics 33: 607–613.  
  • Chow, Siu L., (1996). Statistical Significance: Rationale, Validity and Utility, Volume 1 of series Introducing Statistical Methods, Sage Publications Ltd, ISBN 978-0-7619-5205-3 – argues that statistical significance is useful in certain circumstances.
  • Kline, Rex, (2004). Beyond Significance Testing: Reforming Data Analysis Methods in Behavioral Research Washington, DC: American Psychological Association.
  • Nuzzo, Regina (2014). "Scientific method: Statistical errors". Nature 506: 150–152 (open access). Highlights common misunderstandings about the p-value.

External links

  • The article "Earliest Known Uses of Some of the Words of Mathematics (S)" contains an entry on Significance that provides some historical information.
  • "The Concept of Statistical Significance Testing" (February 1994): article by Bruce Thompson hosted by the ERIC Clearinghouse on Assessment and Evaluation, Washington, D.C.
  • "What does it mean for a result to be "statistically significant"?" (no date): an article from the Statistical Assessment Service at George Mason University, Washington, D.C.
This article incorporates content available under the Creative Commons Attribution-ShareAlike License; additional terms may apply.