Linking Out Loud #5: Examples first!

By David Laing

Hello!

This week I’m sharing what I believe is the most powerful and least appreciated strategy for crafting a clear explanation. I first encountered this strategy in Timothy Gowers’ 2007 blog post, "My favourite pedagogical principle: examples first!"

As Gowers says, “The idea is more or less there in the title: present examples before you discuss general concepts.” He shows two explanations of the same concept, but varies the order of the exposition so that examples appear earlier or later.

Gowers writes for an advanced mathematical audience, so the examples he provides (which explain the concept of a field—a mathematical structure) would likely be inaccessible to most people. But the principle he recommends is applicable to all topics. Consider the following explanations, written by me, of the statistical concept of a p-value:

Explanation #1.

A p-value is a metric obtained from a statistical test. It represents the probability that random chance would cause results at least as extreme as those observed in a sample.

Scientists are interested in detecting non-random effects, so they interpret low p-values (typically less than 0.05, or 1/20) as being indicative of a notable result. A high p-value, by contrast, suggests that the results obtained may well be caused by chance.

If you flip a coin 100 times, and 53 of those times it lands heads, then the p-value would be quite high, because the result is close to what you would expect if the coin is fair.

Explanation #2. 

If you flip a coin 100 times, and 53 of those times it lands heads, then you don’t have much reason to suspect that the coin is unfair; the result is close to what you would expect due to random chance. 

Another way to say this is that the probability of seeing a result at least this extreme, if the coin is fair, is quite high. This type of probability—the probability that random chance would cause results at least as extreme as those observed in a sample—is referred to as a p-value.

Scientists are interested in detecting non-random effects, so they interpret low p-values (typically less than 0.05, or 1/20) as being indicative of a notable result.

Both explanations may seem acceptable, especially if you are already familiar with p-values. But if the concept is new to you, I would guess that you prefer the second explanation, because it starts by grounding you in details and experiences that are familiar to you—flipping a coin, wondering whether the coin is fair.

(In fairness, you may prefer the second explanation simply because you read it immediately after reading the first, so you have even more context with which to triangulate on the concept. But you get the point.)
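If you'd like to check the arithmetic behind the coin example, here is a minimal sketch in Python of one way to compute that p-value. It adds up, assuming a fair coin, the probability of every head count at least as far from the expected 50 as the observed 53. The function name and structure are my own illustration, not a standard routine; the only library call is math.comb from Python's standard library.

    from math import comb

    def two_sided_p_value(heads, flips, p=0.5):
        # Probability, assuming a fair coin (p = 0.5), of a head count
        # at least as far from the expected count as the one observed.
        expected = flips * p
        deviation = abs(heads - expected)
        return sum(
            comb(flips, k) * p**k * (1 - p)**(flips - k)
            for k in range(flips + 1)
            if abs(k - expected) >= deviation
        )

    print(two_sided_p_value(53, 100))  # roughly 0.62, far above 0.05

In other words, a fair coin would produce a result at least this lopsided most of the time, which is exactly why 53 heads gives you no real reason to suspect the coin.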

The reason we often default to providing abstract definitions first is that we are afflicted by a cognitive bias known as the curse of knowledge: the better you know something, the harder it is to imagine what it would be like not to know it.

Of course, sometimes your audience does share much of your background knowledge, which allows you to take shortcuts in your communication. These shortcuts often take the form of abstract definitions or conceptual discussion.

So it’s not that examples should always come first, no matter what. It’s that the more complicated the concept, or the more unfamiliar the audience, the earlier you should provide examples.

Yours exemplarily,

David