**Confidence intervals**

There are two types of confidence intervals: one relates to the mean of an interval variable, the other to the percentage of a categorical variable (a nominal or an ordinal variable). *Here only the former type will be covered.*

**Confidence intervals for means**

The goal is to capture the true value/effect within some interval. When repeatedly taking equally sized random samples of a population, the means of the samples will cluster around the population mean. *The standard deviation of these sample means is called the standard error.* From the mathematical properties of the normal distribution, we know that about 68% of the sample means will fall within 1 standard error of the population mean. Thus, given a random sample of the population, there is a 68% probability that the population mean will be found within 1 standard error of the sample mean.

Similarly, from the mathematical properties of the normal distribution, we know that about 95% of the sample means will fall within 2 standard errors (more precisely, 1.96) of the population mean. Thus, given a random sample of the population, there is a 95% probability that the population mean will be found within 2 standard errors of the sample mean.
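This sampling behaviour can be sketched with a small simulation. The population below is made up for illustration (a normal distribution with an arbitrary mean and spread), and the sample size and counts are likewise assumptions:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 values from a normal distribution (illustrative)
population = [random.gauss(100, 15) for _ in range(10_000)]
pop_mean = statistics.mean(population)

# Repeatedly draw equally sized random samples and record their means
sample_means = [
    statistics.mean(random.sample(population, 50)) for _ in range(2_000)
]

# The standard deviation of the sample means is the standard error
se = statistics.stdev(sample_means)

# Fraction of sample means within 1 and 2 standard errors of the population mean
within_1se = sum(abs(m - pop_mean) <= se for m in sample_means) / len(sample_means)
within_2se = sum(abs(m - pop_mean) <= 2 * se for m in sample_means) / len(sample_means)

print(f"standard error ≈ {se:.2f}")
print(f"within 1 SE: {within_1se:.0%}")   # roughly 68%
print(f"within 2 SE: {within_2se:.0%}")   # roughly 95%
```

With more samples, the two fractions settle ever closer to the 68% and 95% figures from the normal distribution.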

These intervals, within which the population mean will fall with a given probability, are called confidence intervals. They express the margin of error around the mean of a sample. Confidence intervals can be visualised with error bars. In summary, a confidence interval gives us: an estimated range of values for a population parameter; an estimate of precision (indicated by the width of the interval); and an indication of statistical significance (if the 95% confidence interval does not cover the null value, the result is significant at the 0.05 level – the null value being the value that indicates no effect in the population, such as a ratio of 1).
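As a minimal sketch, an approximate 95% confidence interval for a sample mean can be computed as mean ± 2 standard errors. The sample data here is made up purely for illustration:

```python
import statistics

# Hypothetical sample of measurements (assumed data)
sample = [4.2, 5.1, 4.8, 5.5, 4.9, 5.2, 4.7, 5.0, 5.3, 4.6]

n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean

# Approximate 95% confidence interval: mean ± 2 standard errors
lower, upper = mean - 2 * se, mean + 2 * se
print(f"mean = {mean:.2f}, 95% CI ≈ ({lower:.2f}, {upper:.2f})")
```

A narrower interval (e.g. from a larger sample) indicates a more precise estimate of the population mean.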

**Effect sizes**

Effect sizes tell us how strong/large the difference or relationship between the relevant variables is. Even if a difference/relationship between variables is highly statistically significant, if it is weak it is not practically relevant or meaningful. Typically, measures of effect size such as Pearson's correlation coefficient take values (ignoring the positive/negative sign) between 0 (no effect) and 1 (strong effect).
