In simple linear regression we have a single predictor X and a criterion (predicted variable) Y. In multiple regression, we have several predictors (X1, X2, X3, etc.) and a single criterion (predicted variable) Y.
The goal is to investigate the extent to which the predictor variables can predict the criterion variable Y. This goal divides into three sub-goals.
To assess the extent to which the predictor scores are associated with the criterion variable scores. We need to know the relationship of every predictor to the criterion, so we apply Pearson's correlation test to each of these relationships. The result is a matrix of Pearson correlation coefficients. The ideal result is to find significant correlations between the predictor scores and the criterion variable scores. The correlation between the criterion and the best linear combination of all the predictors (not the sum of the individual correlations) is called the multiple R.
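As a minimal sketch of this first step, the correlation matrix can be computed with NumPy's `corrcoef`. The data here are made up for illustration: two hypothetical predictors X1 and X2, and a criterion Y constructed to depend on both.

```python
import numpy as np

# Hypothetical data: two predictors (X1, X2) and a criterion Y.
rng = np.random.default_rng(0)
X1 = rng.normal(size=50)
X2 = rng.normal(size=50)
Y = 0.6 * X1 + 0.3 * X2 + rng.normal(scale=0.5, size=50)

# np.corrcoef returns the matrix of Pearson correlations between
# all pairs of variables (predictors and criterion).
corr_matrix = np.corrcoef([X1, X2, Y])
print(corr_matrix.round(2))
```

Each off-diagonal entry is the Pearson coefficient for one pair of variables; the last row (or column) holds the predictor-criterion correlations that this sub-goal is concerned with.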
To assess the statistical significance of the variance in the criterion variable accounted for by the predictors (i.e. to assess the significance of the predicted variance). In other words, we want to see whether there is enough predicted variance relative to unpredicted variance. The squared multiple R, called R-square, is used here because it measures the proportion of the criterion's variance that can be explained by the predictors.
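A minimal sketch of this second step, again with made-up data: fit the regression by ordinary least squares, then compute R-square as predicted variance over total variance, and the F statistic that compares predicted to unpredicted variance per degree of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = rng.normal(size=(n, 2))                      # two hypothetical predictors
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Ordinary least squares: add an intercept column, solve for coefficients.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# R-square: proportion of the criterion's variance explained by the predictors.
ss_res = np.sum((y - y_hat) ** 2)                # unpredicted (residual) variation
ss_tot = np.sum((y - y.mean()) ** 2)             # total variation
r_squared = 1 - ss_res / ss_tot

# F statistic: predicted vs. unpredicted variance per degree of freedom.
k = X.shape[1]
f_stat = (r_squared / k) / ((1 - r_squared) / (n - k - 1))
print(round(r_squared, 3), round(f_stat, 2))
```

The F statistic would then be compared against the F distribution with (k, n - k - 1) degrees of freedom to obtain a p-value.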
To assess the variance in the criterion variable accounted for by individual predictors (i.e. to check how much each individual predictor contributes to the predicted variance in the criterion variable).
In order to compare the contribution of each predictor, we need to put the predictors on the same scale so that their regression weights can be compared directly. Another way of describing this is to say that we need to standardise the predictor scores.
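A minimal sketch of this third step, with hypothetical data where the two predictors sit on very different scales: z-scoring the predictors and the criterion before fitting yields standardised weights that can be compared directly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
X = rng.normal(size=(n, 2))
X[:, 1] *= 10                                    # put X2 on a much larger scale
y = 0.5 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Standardise: z-score each predictor and the criterion (mean 0, SD 1).
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()

# Regression on standardised scores gives weights on a common scale;
# no intercept is needed because all means are zero.
betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
print(betas.round(2))
```

Here the raw weights (0.5 vs. 0.05) look very different, but per standard deviation the two predictors contribute about equally, and the standardised weights reflect that.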