When do we use it? When there are:
a) Differences between conditions
b) One variable
c) Three conditions (or more)
d) Unrelated design
We will code it like this: AD13UIN. The "IN" comes from interval data and means that the One-Way ANOVA test is parametric (i.e. it uses interval data).
Hypothesis: the three presentation rates of a list of words will have an effect on recall scores.
Our raw dataset would look like the following table.
Condition 1 (slow rate)  Condition 2 (medium rate)  Condition 3 (fast rate)
8  7  4 
7  8  5 
9  5  3 
5  4  6 
6  6  2 
8  7  4 
From the above table, we calculate:
a) The mean for each condition (Conditions 1, 2 and 3)
b) The sum of scores (SoS or Total) for each condition
c) The grand total score (the sum of all scores)
The totals and means in the table below are calculated directly from the raw data.
Condition 1 (slow rate)  Condition 2 (medium rate)  Condition 3 (fast rate)
8  7  4 
7  8  5 
9  5  3 
5  4  6 
6  6  2 
8  7  4 
Total = 43  Total = 37  Total = 24 
Mean = 7.17  Mean = 6.17  Mean = 4.00
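As a quick sketch, the totals and means above can be reproduced in a few lines of Python (standard library only; the variable names are my own):

```python
# Raw recall scores for each presentation-rate condition
slow   = [8, 7, 9, 5, 6, 8]   # Condition 1
medium = [7, 8, 5, 4, 6, 7]   # Condition 2
fast   = [4, 5, 3, 6, 2, 4]   # Condition 3

conditions = [slow, medium, fast]

totals = [sum(c) for c in conditions]            # sum of scores per condition
means  = [sum(c) / len(c) for c in conditions]   # mean per condition
grand_total = sum(totals)                        # sum of all scores

print(totals)                          # [43, 37, 24]
print([round(m, 2) for m in means])    # [7.17, 6.17, 4.0]
print(grand_total)                     # 104
```

These match the totals and means in the table above.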
We see that, on average, different conditions (i.e. different presentation rates) produce different scores. But we need to check whether or not these differences are statistically significant, so we will use the One-Way ANOVA test.
Rationale of the One-Way ANOVA test (unrelated)
The goal of the One-Way ANOVA test is to compare a ratio of variances. The presentation rate is the source of the predicted differences between the 3 conditions, hence this variance is called the between-conditions variance. Differences between the 3 conditions due to non-predicted variables are called the error variance.
It is predicted that the between-conditions variance will be relatively large compared to the error variance. If there are only random differences between the conditions, as stated by the null hypothesis, the between-conditions variance will be relatively small compared to the error variance. If this were the case, the null hypothesis could not be rejected.
The calculation of the One-Way ANOVA test is relatively long, so I will upload a scanned page with the step-by-step calculations soon.
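In the meantime, here is a sketch of those steps in plain Python (standard library only; the variable names are mine), using the raw scores from the table above. It follows the standard sums-of-squares approach for an unrelated One-Way ANOVA:

```python
# One-Way ANOVA (unrelated) computed step by step
slow   = [8, 7, 9, 5, 6, 8]   # Condition 1
medium = [7, 8, 5, 4, 6, 7]   # Condition 2
fast   = [4, 5, 3, 6, 2, 4]   # Condition 3
conditions = [slow, medium, fast]

scores = [x for c in conditions for x in c]
N = len(scores)                    # 18 scores in total
k = len(conditions)                # 3 conditions

grand_total = sum(scores)          # 104
correction = grand_total ** 2 / N  # the "correction factor", T^2 / N

# Total sum of squares: sum of the squared scores minus the correction factor
ss_total = sum(x ** 2 for x in scores) - correction

# Between-conditions sum of squares: based on the condition totals
ss_between = sum(sum(c) ** 2 / len(c) for c in conditions) - correction

# Error (within-conditions) sum of squares is what remains
ss_error = ss_total - ss_between

df_between = k - 1                 # 2
df_error = N - k                   # 15

ms_between = ss_between / df_between   # between-conditions variance
ms_error = ss_error / df_error         # error variance

F = ms_between / ms_error          # the ratio the test is built on
print(round(F, 2))                 # 7.45
```

Note how F is literally the between-conditions variance divided by the error variance, which is the rationale described above.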
The ANOVA Table
The ANOVA table allows you to check, given your hypothesis, your F value and your two degrees-of-freedom values, whether the differences found between conditions were likely to occur by chance.
Check the v1 and the v2 in the table. v1 refers to the df-between value and v2 refers to the df-error value. In our example, df-between = k - 1 = 3 - 1 = 2 and df-error = N - k = 18 - 3 = 15. The value at the intersection of both is the critical value of F. Our F has to be equal to or larger than the F in the ANOVA table. The critical value is 3.68 at p < 0.05, so the probability that the differences found between conditions occurred by chance is less than 5%. This enables us to claim that the differences are statistically significant, and thus we can reject the null hypothesis. We can also check the second table (p < 0.01): there the critical value is 6.36. Our F is larger than 6.36, so the null hypothesis can be rejected, as the probability that the differences found between conditions occurred by chance is less than 1%.
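The decision rule can be sketched as follows. The critical values 3.68 and 6.36 are the tabled values for v1 = 2 and v2 = 15, and the observed F of about 7.45 follows from the recall data above:

```python
# Compare the observed F against the tabled critical values for (2, 15) df
F_observed = 7.45    # computed from the recall data above (approximate)
F_crit_05  = 3.68    # critical value at p < 0.05, v1 = 2, v2 = 15
F_crit_01  = 6.36    # critical value at p < 0.01, v1 = 2, v2 = 15

if F_observed >= F_crit_01:
    print("Reject the null hypothesis at p < 0.01")
elif F_observed >= F_crit_05:
    print("Reject the null hypothesis at p < 0.05")
else:
    print("Cannot reject the null hypothesis")
```

With these data the first branch fires, matching the conclusion above that the null hypothesis can be rejected at p < 0.01.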
Reminder: always check the direction of the differences in the means between the 3 conditions. In our case, the data show that recall scores differ for different presentation rates. One-Way ANOVA only tells you whether there are significant differences among the experimental conditions; thus One-Way ANOVA only tackles two-tailed hypotheses.