When do we use it? When there are:

a) **R**elationships between variables

b) **C**orrelation between variables

We will code it like this: PRC.

Unlike experimental statistical tests, which predict that there will be differences among scores, Pearson makes predictions about the nature of the association between variables: positive or negative, and weak, moderate, strong, or none.

Another important point about correlational statistics like Pearson is that both variables are continuous rather than discrete. This means that interval data can be used with Pearson.

Correlations can be negative (inversely proportional) or positive (directly proportional); in either case, Pearson measures only linear association. In terms of strength, a correlation can be strong (r > 0.5), moderate (0.3 < r ≤ 0.5), or weak (r ≤ 0.3).
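A minimal sketch of these ideas in Python, using `scipy.stats.pearsonr` and the strength thresholds above. The data (hours studied vs. exam score) is hypothetical, made up for illustration only:

```python
import numpy as np
from scipy.stats import pearsonr

def strength(r):
    """Classify |r| with the thresholds above: >0.5 strong, >0.3 moderate, else weak."""
    a = abs(r)
    if a > 0.5:
        return "strong"
    if a > 0.3:
        return "moderate"
    return "weak"

# Hypothetical interval data: hours studied vs. exam score
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
score = np.array([52, 55, 61, 60, 68, 70, 75, 79])

r, p = pearsonr(hours, score)
direction = "positive" if r > 0 else "negative"
print(f"r = {r:.2f}, p = {p:.4f}: {strength(r)} {direction} correlation")
```

Note that the sign of r gives the direction and its absolute value gives the strength, so a strongly negative correlation (e.g. r = -0.8) is still "strong".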

Reminder: correlation does not imply causation. There is a nice read about this here.

**Final thoughts about Pearson**

A coefficient below even the "weak" threshold is not practically relevant, even if it is statistically significant.

Even when a significant correlation is found between two variables, there is often more than one explanation for it.

