What is Statistical Significance?

In statistical hypothesis testing, a result has statistical significance when it would be very unlikely to have occurred if the null hypothesis were true.

More precisely, a study's defined significance level, α, is the probability of the study rejecting the null hypothesis given that the null hypothesis is true; and the p-value of a result, p, is the probability of obtaining a result at least as extreme given that the null hypothesis is true. The result is statistically significant, by the standards of the study, when p < α.
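As a minimal illustration of this decision rule (the numbers below are hypothetical):

```python
# Minimal sketch of the decision rule above, with hypothetical numbers.
alpha = 0.05          # significance level, chosen before data collection
p = 0.03              # p-value obtained from some test
significant = p < alpha
print(significant)    # True: the result is statistically significant at the 5% level
```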

The significance level for a study is chosen before data collection, and typically set to 5% or much lower, depending on the field of study.

In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.

But if the p-value of an observed effect is less than the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population, thereby rejecting the null hypothesis.

This technique for testing the statistical significance of results was applied as far back as the 18th century, in analyses of the human sex ratio at birth, and entered widespread use in other fields in the early 20th century.

The term significance does not imply importance here, and the term statistical significance is not the same as research, theoretical, or practical significance. 

For example, the term clinical significance refers to the practical importance of a treatment effect.

Role in statistical hypothesis testing

Statistical significance plays a pivotal role in statistical hypothesis testing.

It is used to determine whether the null hypothesis should be rejected or retained. The null hypothesis is the default assumption that nothing happened or changed. For the null hypothesis to be rejected, an observed result has to be statistically significant, i.e. the observed p-value is less than the pre-specified significance level.

To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true. 

The null hypothesis is rejected if the p-value is less than a predetermined level α, called the significance level, which is the probability of rejecting the null hypothesis given that it is true (a type I error). It is usually set at or below 5%. For example, when α is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%, and a statistically significant result is one where the observed p-value is less than 5%.
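As a concrete sketch of this procedure, the example below runs a two-sided one-sample t-test with SciPy on simulated data and compares the resulting p-value with α = 0.05; the sample, the null value of zero, and the random seed are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical sample; the null hypothesis is that the population mean equals 0.
sample = rng.normal(loc=0.4, scale=1.0, size=30)

alpha = 0.05                                               # pre-specified significance level
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)   # two-sided p-value

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis (statistically significant at the 5% level).")
else:
    print("Fail to reject the null hypothesis.")
```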

When drawing data from a sample, an α of 5% means that the rejection region comprises 5% of the sampling distribution. These 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution.
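To make the allocation of the rejection region concrete, the following sketch computes the critical values of a standard normal sampling distribution at α = 0.05; assuming a z-test here is a simplification for illustration.

```python
from scipy.stats import norm

alpha = 0.05

# One-tailed (upper-tail) test: the entire 5% rejection region sits in one tail.
z_one_tailed = norm.ppf(1 - alpha)        # ≈ 1.645

# Two-tailed test: 2.5% of the rejection region in each tail.
z_two_tailed = norm.ppf(1 - alpha / 2)    # ≈ 1.960

print(f"one-tailed critical z: {z_one_tailed:.3f}")
print(f"two-tailed critical z: ±{z_two_tailed:.3f}")
```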

The use of a one-tailed test is dependent on whether the research question or alternative hypothesis specifies a direction such as whether a group of objects is heavier or the performance of students on an assessment is better. 

A two-tailed test may still be used, but it will be less powerful than a one-tailed test because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test is used.

The one-tailed test is only more powerful than a two-tailed test if the specified direction of the alternative hypothesis is correct. If it is wrong, however, then the one-tailed test has no power.
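A rough numerical sketch of this power comparison, assuming a simple z-test and a hypothetical true shift of the test statistic expressed in standard-error units:

```python
from scipy.stats import norm

alpha = 0.05
delta = 2.0   # hypothetical true shift, in standard-error units

# One-tailed (upper) test: reject when z > z_{1-alpha}.
power_one_tailed = 1 - norm.cdf(norm.ppf(1 - alpha) - delta)

# Two-tailed test: reject when |z| > z_{1-alpha/2}.
z_crit = norm.ppf(1 - alpha / 2)
power_two_tailed = (1 - norm.cdf(z_crit - delta)) + norm.cdf(-z_crit - delta)

# One-tailed test pointed in the WRONG direction (alternative says "greater",
# but the true shift is negative): essentially no power.
power_wrong_direction = 1 - norm.cdf(norm.ppf(1 - alpha) + delta)

print(f"one-tailed (correct direction): {power_one_tailed:.3f}")      # ≈ 0.64
print(f"two-tailed:                     {power_two_tailed:.3f}")      # ≈ 0.52
print(f"one-tailed (wrong direction):   {power_wrong_direction:.4f}") # ≈ 0.0001
```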

Limitations

Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive and not replicable. 

There is also a difference between statistical significance and practical significance.

A study that is found to be statistically significant may not necessarily be practically significant.

Effect size

Effect size is a measure of a study's practical significance. 

A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report an effect size along with p-values.

An effect size measure quantifies the strength of an effect, such as the distance between two means in units of standard deviation (cf. Cohen's d), the correlation coefficient between two variables or its square, and other measures.
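For instance, Cohen's d for two independent groups is the difference between their means divided by a pooled standard deviation; the sketch below computes it for hypothetical data.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d: difference between two means in units of pooled standard deviation."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    # Pooled variance using the unbiased (n - 1) variance of each group.
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical measurements for two groups.
treatment = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9]
control   = [4.6, 4.4, 5.0, 4.7, 4.5, 4.8]
print(f"Cohen's d ≈ {cohens_d(treatment, control):.2f}")
```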

Reproducibility

A statistically significant result may not be easy to reproduce. In particular, some statistically significant results will in fact be false positives. Each failed attempt to reproduce a result increases the likelihood that the result was a false positive.
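A minimal simulation sketch of this point: if the null hypothesis is true in every experiment, roughly a fraction α of them will still come out statistically significant, and those are exactly the false positives that tend not to reproduce (the sample size and number of experiments below are arbitrary choices for illustration).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_experiments, n = 0.05, 10_000, 30

false_positives = 0
for _ in range(n_experiments):
    # The null hypothesis is true by construction: the population mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:
        false_positives += 1

# Expect roughly alpha * n_experiments significant results, all of them false positives.
print(f"false positive rate ≈ {false_positives / n_experiments:.3f}")
```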

Challenges

Overuse in some journals

Starting in the 2010s, some journals began questioning whether significance testing, and particularly using a threshold of α = 5%, was being relied on too heavily as the primary measure of validity of a hypothesis.

Some journals encouraged authors to do more detailed analysis than just a statistical significance test. In social psychology, the Journal of Basic and Applied Social Psychology banned the use of significance testing altogether from papers it published, requiring authors to use other measures to evaluate hypotheses and impact.

Other editors, commenting on this ban, have noted: "Banning the reporting of p-values, as Basic and Applied Social Psychology recently did, is not going to solve the problem because it is merely treating a symptom of the problem.

There is nothing wrong with hypothesis testing and p-values per se as long as authors, reviewers, and action editors use them correctly." Using Bayesian statistics can improve confidence levels but also requires making additional assumptions, and may not necessarily improve practice regarding statistical testing.

Redefining significance

In 2016, the American Statistical Association (ASA) published a statement on p-values, saying that "the widespread use of 'statistical significance' (generally interpreted as 'p≤0.05') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process".

In 2017, a group of 72 authors proposed to enhance reproducibility by changing the p-value threshold for statistical significance from 0.05 to 0.005. 

Other researchers responded that imposing a more stringent significance threshold would aggravate problems such as data dredging; alternative propositions are thus to select and justify flexible p-value thresholds before collecting data, or to interpret p-values as continuous indices, thereby discarding thresholds and statistical significance. 

Additionally, the change to 0.005 would increase the likelihood of false negatives, whereby the effect being studied is real, but the test fails to show it.
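To see the trade-off, the sketch below compares the power of a two-sided z-test at α = 0.05 versus α = 0.005 for a fixed, hypothetical effect; lower power means a higher false negative rate. The effect size used here is an assumption for illustration only.

```python
from scipy.stats import norm

def two_sided_power(alpha, delta):
    """Power of a two-sided z-test when the true shift is `delta` standard errors."""
    z = norm.ppf(1 - alpha / 2)
    return (1 - norm.cdf(z - delta)) + norm.cdf(-z - delta)

delta = 2.8  # hypothetical true effect, in standard-error units
for alpha in (0.05, 0.005):
    power = two_sided_power(alpha, delta)
    print(f"alpha = {alpha}: power ≈ {power:.2f}, false negative rate ≈ {1 - power:.2f}")
```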