I've been reading papers on how people learn statistics (and thoughts on teaching the subject) and came across the frequently cited work of mathematical psychologists Amos Tversky and Daniel Kahneman, who studied statistical misconceptions starting in 1972. It doesn't seem much has changed. Joan Garfield (1995) summarizes in How Students Learn Statistics [pdf].

**Representativeness:**

People estimate the likelihood of a sample based on how closely it resembles the population.

You can't always judge how likely or improbable a sample is based on how it compares to a known population. For example, let's say you flip a coin four times and get four tails in a row (TTTT). Then you flip four more times and get HTHT. In the long run, heads and tails are going to be split 50/50, but that doesn't make the second sequence any more likely: any specific sequence of four flips has the same probability, 1/16.

Similarly, ten heads in a row is improbable, but nowhere near as improbable as a million heads in a row.
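The arithmetic is easy to check. A minimal sketch, assuming a fair coin: any specific sequence of n flips has probability (1/2)^n, so TTTT and HTHT are exactly as likely as each other, and longer runs just push the exponent up.

```python
from fractions import Fraction

def sequence_probability(seq):
    """Probability of one exact sequence of fair coin flips: (1/2)^len(seq)."""
    return Fraction(1, 2) ** len(seq)

print(sequence_probability("TTTT"))    # 1/16
print(sequence_probability("HTHT"))    # 1/16 -- exactly the same as TTTT
print(sequence_probability("H" * 10))  # 1/1024
```

The representativeness error is in comparing a sequence to what a "typical" 50/50 outcome looks like, rather than computing its probability directly.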

**Gambler's fallacy:**

Use of the representativeness heuristic leads to the view that chance is a self-correcting process.

The history boards at roulette tables mean nothing. They're just for show. Just because a red hasn't come up in a while doesn't mean the roulette wheel is due for a red soon. Each spin is independent of the spins that came before it.
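Independence is easy to check by simulation. A sketch, assuming a European wheel (18 red, 18 black, 1 green pocket): conditioning on the previous five spins all being black leaves the frequency of red at the same 18/37 it always is.

```python
import random

random.seed(1)

# European wheel: 18 red, 18 black, 1 green (the zero).
POCKETS = ["red"] * 18 + ["black"] * 18 + ["green"]

spins = [random.choice(POCKETS) for _ in range(500_000)]

# Collect the outcome of every spin that followed five blacks in a row.
after_streak = [spins[i] for i in range(5, len(spins))
                if all(s == "black" for s in spins[i - 5:i])]

red_after_streak = sum(s == "red" for s in after_streak) / len(after_streak)
print(round(red_after_streak, 3))  # hovers around 18/37 ~= 0.486, streak or no streak
```

No amount of history on the board moves that number; each spin is a fresh draw.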

**Base-rate fallacy:**

People ignore the relative sizes of population subgroups when judging the likelihood of contingent events involving the subgroups.

You have to consider the base population for comparison. Maybe a company is 80 percent men and 20 percent women. If your baseline is the US population, that might look like inequality, but what if the applicant pool was 90 percent men and 10 percent women? In that case, a higher percentage of the women who applied were actually hired.
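A quick sketch with made-up numbers matching that example (say 1,000 applicants and 100 hires): the per-group hire rates tell the real story, not the workforce percentages.

```python
# Hypothetical numbers: applicant pool is 90% men / 10% women,
# and the 100 people hired end up 80% men / 20% women.
applicants = {"men": 900, "women": 100}
hires      = {"men": 80,  "women": 20}

for group in applicants:
    rate = hires[group] / applicants[group]
    print(f"{group}: {rate:.1%} of applicants hired")
```

With these numbers, 20% of female applicants were hired versus about 8.9% of male applicants, even though the resulting workforce is 80% men.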

**Availability:**

Strength of association is used as a basis for judging how likely an event will occur.

Just because some percentage of your friends are designers doesn't mean that the same percentage of people are designers elsewhere (obviously). Or the example that Garfield uses: a ten percent divorce rate among people you know isn't necessarily the same nationwide or globally.
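The mechanism is just a biased sample. A sketch with made-up rates: suppose 2% of the general population are designers, but your friends are drawn from a design-heavy social circle where the rate is 40%; the proportion among your friends then says almost nothing about the population.

```python
import random

random.seed(0)

population_rate = 0.02  # assumed: 2% of the population are designers
circle_rate = 0.40      # assumed: your social circle is 40% designers

# 50 friends drawn from the design-heavy circle, not the population.
friends = [random.random() < circle_rate for _ in range(50)]
print(sum(friends) / len(friends))  # well above the 2% population rate
```

Whatever events come to mind most easily, what's available to you is your circle, not the population.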

**Conjunction fallacy:**

The conjunction of two correlated events is judged to be more likely than either of the events themselves.

The common example from Tversky and Kahneman:

"Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations." A group of people were asked if it was more probable that Linda was a bank teller or a bank teller active in the feminist movement (a sign of the times this poll was taken).

Eighty-five percent of respondents chose the latter, but the probability of two things happening together is always less than or equal to the events occurring individually.
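The inequality is mechanical. A sketch with made-up numbers: whatever probability you assign to "Linda is a bank teller," adding a second condition can only shave it down, never raise it, because Pr(A and B) = Pr(A)·Pr(B given A) and Pr(B given A) is at most 1.

```python
# Illustrative numbers only: assume a 5% chance Linda is a bank teller,
# and that 80% of bank tellers fitting her description are also feminists.
p_teller = 0.05
p_feminist_given_teller = 0.80

p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller)               # 0.05
print(p_teller_and_feminist)  # 0.04 -- necessarily <= p_teller
```

Even with a conditional probability as high as 80%, the conjunction cannot overtake the single event.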

Notice that there's still not much math involved in these examples. Thinking like a statistician is largely a matter of logic built on statistical foundations. You can get a lot done just by thinking critically about your data.

Regarding the “base-rate fallacy”: if the number of people applying for a position is that skewed, there may in fact be something important going on. For example, an employer that only recruited candidates at all-men’s schools should not be surprised by an overwhelmingly male applicant pool.

Similarly in the old South there was known discrimination against local blacks. A hotel might have a sign “Whites Only” which would discourage local blacks from even asking for a room. On the other hand, wealthy black foreign dignitaries were welcomed even in the Jim Crow era. As a result the hotel might boast of having given a room to every black who showed up at the front desk. On the other hand, if a local black had actually tried to come in, they might not have actually made it to the front desk.

I think you missed the point of the article, it isn’t suggesting that everything is a fallacy.

The example for the conjunction fallacy appears to fall foul of the base-rate fallacy. If a person were selected at random from the general population, then it is more likely they would belong to the subset of bank tellers than of feminist bank tellers. However, what we have here is: given this description, which of these two labels is most appropriate? The description contains data suggesting Linda might be a feminist; it contains no data suggesting she might be a bank teller. So really the choice is between feminist and non-feminist.

It is the reasoning of the 15% who plumped for non-feminist that should be boggled at :)

Agreed. Most people would ignore the bit about a bank-teller because it is totally extraneous to the preceding text. … The statistical problem may be real, but this example is not very illustrative. Seems more like a bait-and-switch problem than a statistical issue.

@Aaron, I think that’s the point they’re trying to make, that people ignore the common part of both choices (ie the bank teller) when, in fact, it’s a very important part of the question. It’s pointing out a very valid issue about statistical (or even logic) literacy.

I think the point Nigel and Aaron might be making is in the wording of the question. The way it’s presented, the implication of the options is that she is either a bank teller and NOT in the feminist movement or a bank teller and is in the feminist movement.

@Nigel, I believe it’s because the choices are: 1) Is Linda a bank teller, or 2) Is Linda a bank teller AND a feminist. The “AND” is the killer here. If you remove the labels, it’s easier to understand….

Is it more probable that:

1) Linda is A

2) Linda is A and B

So it’s obviously more probable that she is A, than both A and B. Does that make sense? Since we don’t know if Linda is a bank teller OR a feminist, it’s more likely that she is a bank teller, or a feminist, but not both a bank teller AND a feminist.

Kim, I understand that that is how the question is supposed to be understood to get the ‘correct’ answer, but I don’t accept that that is how a reasonable person ought to be expected to understand it.

If the options were reversed:

Linda is a feminist, or

Linda is a feminist and a bank teller,

I suspect this ‘fallacy’ would vanish.

The AND is definitely all important – the layman’s guess given the information about Linda might be that there’s an 85% chance that she’s a feminist (in set B) but only a 15% chance that she’s a bank teller (in set A). Hasty or untrained thinking could easily lead to the wrong answer – that 85 is a much bigger number than 15. The joint probability is multiplicative, though: Pr(A and B) = Pr(A)*Pr(B given A). Since Pr(B given A) lies between 0 and 1, Pr(A and B) will always be less than or equal to Pr(A).

@Nigel, I don’t think the fallacy would vanish. You’re using persuasion in this example to get people to expect something about Linda. In this case, “Linda is progressive. Linda is liberal. What are the chances Linda is a feminist?” There’s still no way of knowing if Linda is a feminist, but you’ve *led* people to believe this is likelier than not. So you’ve persuaded them to pick the right choice, but for the wrong reason.

Kim,

I am not trying to convince people that Linda is anything; I am applying plausible reasoning as developed by Polya and Jaynes. The evidence that Linda belongs to either of the two sets is very weak, but it is slightly stronger for one than the other. But the argument for the fallacy is that people will typically – i.e., better than chance – choose the composite set, and that claim cannot be justified on the evidence; there are other interpretations of the result.

Totally with Nigel on this one. I don’t think either of us are saying that the principle described is untrue, but rather the example given does not provide excellent support. Respondents will interpret the question in ways not intended by the writer of the question. In this example it seems entirely plausible that the respondent may have misinterpreted the first option to mean that she is a bank teller AND not a feminist. In their mind they were trying to decide which was more probable Teller AND not feminist or Teller AND feminist.

This really is an odd question for most people. When faced with something that seems out of whack people will try to fix it or make assumptions about what was really meant, which in turn can lead to massive measurement error.

Your examples for Availability are properly associated with Representativeness. For example, if the population’s divorce rate is different from your friends’ divorce rate, your friends are not representative of the population.

Anytime I get to thinking I actually know something, I read this list of cognitive biases:

http://en.wikipedia.org/wiki/List_of_cognitive_biases

My favorite is the “bias blind spot” – the tendency to see oneself as less biased than other people…

Thanks for sharing,

Ben

There’s another one I’ve started seeing among people with a bit of statistics background (perhaps just enough to be dangerous). Not sure if it has a name, but it’s the tendency to consider only the expected value of your several options and bet on the one with the highest expected value, without also thinking about variability.

Yes, the highest expected value will be the best bet in the long run — but that might be a VERY long run, if it’s a very rare event (with a high payoff). In the short run, you might be safer betting on something that’s likely to win you a smaller amount.
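As a quick sketch with two hypothetical bets: bet A has the higher expected value per play, but over a short run of 100 plays you will almost certainly collect nothing from it, while the "worse" bet B pays off steadily.

```python
import random

random.seed(42)

# Two hypothetical free-to-enter bets:
#   A: win 1,000,000 with probability 1/500,000  -> expected value 2.00 per play
#   B: win 1.5 with probability 0.9              -> expected value 1.35 per play

def play(p_win, payoff, n):
    """Total winnings from n independent plays of a single bet."""
    return sum(payoff for _ in range(n) if random.random() < p_win)

n = 100
result_a = play(1 / 500_000, 1_000_000, n)  # almost always 0 in a short run
result_b = play(0.9, 1.5, n)                # reliably near 100 * 0.9 * 1.5 = 135

print(result_a)
print(result_b)
```

Expected value alone ranks A above B; adding variability to the picture explains why a short-horizon bettor might rationally prefer B.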

Sounds like your friends are very rational! As a former student and admirer of Richard Thaler (a frequent collaborator of Kahneman & Tversky) I remember talking specifically about the case where a person would be willing to accept a bet with a higher expected value for multiple bets but not single bets (due to “myopic loss aversion”). Of course it all depends on your personal level of risk aversion, but maybe these people consider the long run to be their entire lifespan, and that if they accept bets with the highest expected value throughout their lives, they’ll end up ahead by the end?

Anyway, if you’re interested, check out the Introduction in this paper by Thaler & Benartzi, as they discuss very similar situations:

http://independent401kadvisors.com/library_articles/RiskAversionorMyopia.pdf

Proofiness: The Dark Arts of Mathematical Deception – a great book surrounding this topic

thanks for the tip. love the Colbert-esque title, too.

Regarding the “Linda” question: if I encountered this on a test, I would suspect the author of the question of failing to write what they meant. Particularly on a standardized exam, a question phrased that way suggests a dichotomy, not an overlap, and answering it correctly can be expected to lower your score.

This is not limited to math and logic, however: http://thehappyscientist.com/blog/problems-floridas-science-fcat-test