It is also called the paired-samples t test because it compares pairs of scores from the same participants.

When do we use it? When there are:

a) Differences between conditions

b) One variable

c) Two conditions

d) A related design

We will code it like this: TD12Rin. The “in” stands for interval data and indicates that the t test is parametric (it requires interval data).

Hypothesis: participants will recall more words from a simple text (Condition 1) than from a complex text (Condition 2).

So we not only predict a difference between the means of the two conditions. We also predict the direction of that difference: Condition 1 will have a higher mean score than Condition 2.

Our raw dataset would look like the following table.

Participants   Condition 1 (simple text)   Condition 2 (complex text)
1              10                          2
2              5                           1
3              6                           7
4              3                           4
5              9                           4
6              8                           5
7              7                           2
8              5                           5
9              6                           3
10             5                           4

From the above table, we calculate:

• Condition 1 mean

• Condition 2 mean

• Difference column (Cond 1 – Cond 2), called d

• Sum of differences, called SoD

• Squared differences column, called d²

• Sum of squared differences, called SoSD

The d, d², means, SoD and SoSD entries are all calculated from the raw data.

Participants (N)   Condition 1 (simple text)   Condition 2 (complex text)    d    d²
1                  10                          2                             8    64
2                  5                           1                             4    16
3                  6                           7                            -1     1
4                  3                           4                            -1     1
5                  9                           4                             5    25
6                  8                           5                             3     9
7                  7                           2                             5    25
8                  5                           5                             0     0
9                  6                           3                             3     9
10                 5                           4                             1     1
Means:             6.4                         3.7                     SoD: 27   SoSD: 151

Note: the SoD takes into account negative signs.
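The column and summary calculations above can be sketched in plain Python (the variable names are my own, chosen to match the table):

```python
# Raw scores from the table above
cond1 = [10, 5, 6, 3, 9, 8, 7, 5, 6, 5]   # Condition 1 (simple text)
cond2 = [2, 1, 7, 4, 4, 5, 2, 5, 3, 4]    # Condition 2 (complex text)

d = [a - b for a, b in zip(cond1, cond2)]  # difference column (Cond 1 - Cond 2)
d_sq = [x ** 2 for x in d]                 # squared differences column

mean1 = sum(cond1) / len(cond1)  # 6.4
mean2 = sum(cond2) / len(cond2)  # 3.7
SoD = sum(d)                     # 27  (negative signs are kept)
SoSD = sum(d_sq)                 # 151

print(mean1, mean2, SoD, SoSD)
```

Running this reproduces every summary value in the table, including the sign-aware SoD.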

We see that on average, participants did recall more words from the simple text than from the complex text. But we need to check whether or not these differences are statistically significant. So we will use the t test.

Rationale of the t test (related)

The predicted variance is estimated from the score differences between conditions (summed as SoD) and is expressed as a proportion of the total variance. If the differences between the scores were due only to random factors (as the null hypothesis states), the variance due to the experimental manipulation would be relatively small. Only when this variance is large relative to the random variance can the null hypothesis be rejected.

df (degrees of freedom) = N – 1

∑ means “Sum of” (e.g. ∑d means “SoD”, the Sum of Differences).

The t value is calculated from these sums:

t = ∑d / √[(N∑d² – (∑d)²) / (N – 1)]

With our data: t = 27 / √[(10 × 151 – 27²) / 9] = 27 / √86.78 = 2.89, with df = 10 – 1 = 9.
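A quick check of the t calculation from the two sums in the table (the names SoD and SoSD follow the notation used here):

```python
import math

N = 10      # number of participants
SoD = 27    # sum of differences
SoSD = 151  # sum of squared differences

df = N - 1
# Computational formula for the related t test, using the sums directly
t = SoD / math.sqrt((N * SoSD - SoD ** 2) / (N - 1))

print(df, t)  # df = 9, t ≈ 2.89–2.90
```

The small discrepancy between 2.89 and 2.90 is only rounding; the exact value is about 2.898.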

The T Table

The T Table enables you to check, given your one- or two-tailed hypothesis, your t value and your sample size (via df), the probability that the differences found between conditions occurred by chance. Reminder: the significance levels range from 5% to 1%.

We check the one-tailed table against our df (9) and t (2.89) values, in the row for 9 degrees of freedom. The observed t has to be equal to or larger than the value in the table. The critical value for t at df = 9 and the 1% level is 2.821. Our t is larger than 2.821, so the probability that the differences found between conditions occurred by chance is less than 1%. This enables us to claim that the differences are statistically significant, and thus we can reject the null hypothesis.
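As a cross-check, the same result can be obtained directly from the raw scores with SciPy (assuming SciPy is installed; `scipy.stats.ttest_rel` is its paired-samples t test, and `t.ppf` returns the critical value the printed table gives):

```python
from scipy import stats

cond1 = [10, 5, 6, 3, 9, 8, 7, 5, 6, 5]   # Condition 1 (simple text)
cond2 = [2, 1, 7, 4, 4, 5, 2, 5, 3, 4]    # Condition 2 (complex text)

# One-tailed paired t test: Condition 1 > Condition 2
result = stats.ttest_rel(cond1, cond2, alternative='greater')
print(result.statistic, result.pvalue)  # t ≈ 2.90, one-tailed p < 0.01

# Critical value for df = 9 at the 1% level, one-tailed (≈ 2.821)
crit = stats.t.ppf(0.99, df=9)
```

Note that statistical software reports an exact p value, whereas the table only brackets it between significance levels.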

Reminder: always check that the differences are in the direction predicted by the one-tailed hypothesis. In our case, the data shows that participants scored higher in Condition 1 (simple text) than they did in Condition 2 (complex text), and the differences are statistically significant (p < 0.01).