• Statistical significance: As its name suggests, statistical significance refers to the probability that an observed difference in the sample is not due to chance. It forms part of the hypothesis testing method mentioned earlier. Normally, the statistical significance threshold is set at 5% or 1% and is expressed as p < 0.05 or p < 0.01. This means that, in order to claim that an observed difference is not due to chance, the probability of obtaining such a difference by chance alone needs to be below 0.05 or 0.01.
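As a concrete illustration, comparing a p-value against a significance threshold can be sketched as follows (a minimal example using SciPy's independent-samples t-test; the two groups and their values are made up for illustration):

```python
from scipy import stats

# Two hypothetical samples, e.g. scores from two groups (illustrative data).
group_a = [23, 25, 28, 30, 22, 27, 26, 29, 24, 31]
group_b = [30, 33, 29, 35, 32, 36, 31, 34, 28, 37]

# Independent-samples t-test returns the test statistic and the p-value.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p = {p_value:.4f}")

# The difference is called statistically significant only if
# the p-value falls below the chosen threshold.
print("significant at 5%:", p_value < 0.05)
print("significant at 1%:", p_value < 0.01)
```

Here the same p-value is checked against both conventional thresholds, since which one applies depends on the significance level chosen beforehand.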
• Type I error: when a test claims a significant difference in the sample with p = 0.023, we cannot know whether the difference found in the sample reflects a difference that exists in the population the sample was taken from; there is a 2.3% probability that the difference in the sample is due to chance. Whenever it is claimed that a difference exists in the population when it does not, we call that claim a Type I error. Type I errors can be managed by setting an appropriate significance level, symbolised by the character “α” (e.g. α = 0.05 or α = 0.01). So by setting a significance level of 5%, we cap the risk of committing a Type I error at 5%. The lower the significance level, the lower the risk of a Type I error. On the other hand, when dealing with small sample sizes, it might be a good idea to raise the significance level. It is worth noting that setting the significance level is a subjective decision that depends on the goal of the researcher; Type I errors are one of the risks taken when raising it.
• Type II error: it is the opposite of a Type I error. Whenever it is claimed that a difference does not exist in the population when it does exist, we call that claim a Type II error. Type II errors are one of the risks taken when lowering the significance level. You might find a difference in your sample with p = 0.034 that happens to reflect a real difference in the population, but since you set α = 0.01, you conclude that it does not reflect a real difference in the population.

So by raising the significance level too much, you risk a Type I error and by lowering it too much, you risk a Type II error.
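The link between α and the Type I error rate can be checked with a quick simulation (a sketch using NumPy and SciPy; the sample size, number of simulations and seed are arbitrary choices): when both groups are drawn from the same population, any “significant” result is a false positive, and the proportion of tests reaching p < α should approximate α.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_sims = 2000

# Both groups come from the same population, so the null hypothesis
# is true: every p < alpha is a Type I error.
false_positives = 0
for _ in range(n_sims):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

type_i_rate = false_positives / n_sims
print(f"Observed Type I error rate: {type_i_rate:.3f} (expected ~ {alpha})")
```

Lowering alpha in this sketch (say to 0.01) lowers the false-positive rate accordingly, which is exactly the trade-off described above.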

• Statistical power: it is the ability of a test to detect a difference in a sample when that difference exists in the population. So a statistical power of 0.40 means that there is a 40% probability that the test will find a difference in the sample if it exists in the population. It also means that there is a 60% probability that the test won’t find the difference even though it exists; in other words, there is a 60% probability of a Type II error. The statistical power of a given test is 1 − β, where “β” or beta refers to the probability of committing a Type II error. As a rule of thumb, the statistical power of a test should be at least 0.8, which corresponds to at most a 20% probability of committing a Type II error. In order to calculate the statistical power of a test, we need the sample size the test will be applied to, the α value and the effect size (the size of the difference/relationship to detect).
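The dependence of power on sample size, α and effect size can also be estimated by simulation (a sketch with NumPy and SciPy; the effect size of 0.5 standard deviations and the group size of 64 are illustrative choices, not prescriptions): we repeatedly draw two groups that truly differ and count how often the test detects the difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
effect_size = 0.5   # true group difference, in standard-deviation units
n_per_group = 64    # sample size per group (illustrative choice)
n_sims = 2000

# A real difference exists, so each p < alpha is a correct detection
# and each p >= alpha is a Type II error (a miss).
detections = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(effect_size, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        detections += 1

power = detections / n_sims  # estimate of 1 - beta
print(f"Estimated power: {power:.2f}")
```

With these settings the estimated power comes out close to the 0.8 rule of thumb; shrinking the sample size or the effect size in the sketch lowers it, which mirrors the ingredients of a power calculation listed above.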