It is also called the independent t test.

When do we use it? When there are:

a) Differences between conditions

b) One variable

c) Two conditions

d) Unrelated design

We will code it like this: TD12Uin. The “in” stands for interval data and indicates that the t test is parametric (it requires interval data).

Hypothesis: more words will be recalled from a simple text (Condition 1) than from a complex text (Condition 2).

So we not only predict a difference between the means of the two conditions; we also predict its direction: Condition 1 will have a higher mean score than Condition 2.

Our raw dataset would look like the following table.

Condition 1 (simple text)    Condition 2 (complex text)
10                           2
5                            1
6                            7
3                            4
9                            4
8                            5
7                            2
5                            5
6                            3
5                            4

From the above table, we calculate:

• Condition 1 mean

• Condition 2 mean

• Squared scores for Condition 1

• Squared scores for Condition 2

• Sum of Scores (SoS) for Condition 1

• Sum of Scores (SoS) for Condition 2

• Sum of Squared Scores (SoSS) for Condition 1

• Sum of Squared Scores (SoSS) for Condition 2

The derived quantities (squared scores, totals, SoSS and means) are calculated from the raw data.

Condition 1      Squared      Condition 2      Squared
(simple text)    scores       (complex text)   scores
10               100          2                4
5                25           1                1
6                36           7                49
3                9            4                16
9                81           4                16
8                64           5                25
7                49           2                4
5                25           5                25
6                36           3                9
5                25           4                16
Total = 64       SoSS = 450   Total = 37       SoSS = 165
Mean = 6.4                    Mean = 3.7
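The totals, sums of squared scores and means in the table can be checked with a short sketch (Python is used here purely for illustration; the original text does not use code):

```python
cond1 = [10, 5, 6, 3, 9, 8, 7, 5, 6, 5]  # simple text scores
cond2 = [2, 1, 7, 4, 4, 5, 2, 5, 3, 4]   # complex text scores

total1, total2 = sum(cond1), sum(cond2)        # Sum of Scores (SoS)
soss1 = sum(x ** 2 for x in cond1)             # Sum of Squared Scores (SoSS)
soss2 = sum(x ** 2 for x in cond2)
mean1 = total1 / len(cond1)                    # condition means
mean2 = total2 / len(cond2)

print(total1, soss1, mean1)  # 64 450 6.4
print(total2, soss2, mean2)  # 37 165 3.7
```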

We see that, on average, more words were recalled from the simple text than from the complex text. But we need to check whether this difference is statistically significant, so we use the t test.

Rationale of the t test (unrelated)

In the independent t test, the predicted variance is calculated from the difference between the means of the two conditions and is expressed as a proportion of the total variance. If the differences between the scores are due to random factors (as the null hypothesis states), the variance due to the predicted difference between the two conditions will be relatively small. Only if the predicted variance is large relative to the random variance can the null hypothesis be rejected.

M1 = Mean for Condition 1

M2 = Mean for Condition 2

SoSS1 = Sum of Squared Scores for Condition 1

SoSS2 = Sum of Squared Scores for Condition 2

Cond1 = Condition 1 Total Scores

Cond2 = Condition 2 Total Scores

N1 = Condition 1 Sample size

N2 = Condition 2 Sample size
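Combining these symbols, the unrelated t is the difference between the means divided by the standard error of that difference, with the variance pooled across the two conditions (the standard pooled-variance formula). A sketch in Python, using the values from the worked example above:

```python
import math

# Quantities from the worked example, named as defined above
M1, M2 = 6.4, 3.7          # condition means
SoSS1, SoSS2 = 450, 165    # sums of squared scores
Cond1, Cond2 = 64, 37      # condition totals
N1, N2 = 10, 10            # sample sizes

# Pooled variance estimate across both conditions (df = N1 + N2 - 2)
pooled = ((SoSS1 - Cond1 ** 2 / N1) + (SoSS2 - Cond2 ** 2 / N2)) / (N1 + N2 - 2)

# Unrelated (independent) t statistic
t = (M1 - M2) / math.sqrt(pooled * (1 / N1 + 1 / N2))
print(round(t, 2))  # 3.09
```

Rounding the intermediate steps slightly differently gives the 3.096 quoted below.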

The T Table

The T Table enables you to check, given your one- or two-tailed hypothesis, your t value and your sample size, whether the differences found between the conditions were likely to occur by chance.

We check the one-tailed table against our df (n1 – 1 + n2 – 1 = 18) and t (3.096) values, reading along the df = 18 row. To be significant, the calculated t has to be equal to or larger than the value in the T Table. The critical value of t for our df is 1.734. Our t is larger than 1.734, so the probability that the differences found between the conditions occurred by chance is less than 5%. This enables us to claim that the differences are statistically significant, and we can therefore reject the null hypothesis.
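The decision rule just described can be written out directly (the critical value 1.734 and t = 3.096 are taken from the passage above; Python is used only for illustration):

```python
# Decision rule for the one-tailed unrelated t test
t_value = 3.096            # t calculated for our data
critical_t = 1.734         # tabled critical value for df = 18, one-tailed, p = .05
df = (10 - 1) + (10 - 1)   # n1 - 1 + n2 - 1 = 18

significant = t_value >= critical_t
print(significant)  # True: reject the null hypothesis at p < .05
```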

Reminder: always check that the differences are in the direction predicted by the one-tailed hypothesis. In our case, the data show that more words were recalled in Condition 1 (simple text) than in Condition 2 (complex text), and the differences are statistically significant (p < 0.05).