*Photo by Leo Reynolds*

Undoubtedly you've been seeing a lot of headlines about the stuff going on in Iran. If you haven't, you must be living under a rock.

One of the huge issues right now is whether or not fraud was involved in the election of Mahmoud Ahmadinejad.

Wait a minute. Voting? Results? Numbers?

Oh, we have to look at the data for this one. Bernd Beber and Alexandra Scacco, Ph.D. candidates in political science at Columbia University, discuss this in their op-ed for The Washington Post:

The numbers look suspicious. We find too many 7s and not enough 5s in the last digit. We expect each digit (0, 1, 2, and so on) to appear at the end of 10 percent of the vote counts. But in Iran's provincial results, the digit 7 appears 17 percent of the time, and only 4 percent of the results end in the number 5. Two such departures from the average -- a spike of 17 percent or more in one digit and a drop to 4 percent or less in another -- are extremely unlikely. Fewer than four in a hundred non-fraudulent elections would produce such numbers.

Why does this matter? Well, humans are bad at making up sequences of numbers. Made-up number sequences look different from real random sequences (e.g. numbers from McCain/Obama). Beber and Scacco go on to describe the details of why the data look fishy. Those of us who've read Freakonomics will recognize the discussion.
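The test itself is easy to run. Here's a minimal sketch of the last-digit check in Python — the vote counts below are made up for illustration (the real Iranian provincial figures aren't reproduced here):

```python
from collections import Counter

def last_digit_freqs(counts):
    """Fraction of vote counts ending in each digit 0-9."""
    tally = Counter(c % 10 for c in counts)
    return {d: tally.get(d, 0) / len(counts) for d in range(10)}

# Hypothetical vote counts, for illustration only. Under a fair count,
# each last digit should show up roughly 10 percent of the time.
sample = [482177, 135907, 77232, 623945, 218097, 93321, 441005, 150237, 320958, 88414]
for digit, freq in sorted(last_digit_freqs(sample).items()):
    print(digit, f"{freq:.0%}")
```

With a sample this small the frequencies bounce around a lot, which is exactly why the question is how *extreme* the deviations are, not whether there are any.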

The result?

The probability that a fair election would produce both too few non-adjacent digits and the suspicious deviations in last-digit frequencies described earlier is less than .005. In other words, a bet that the numbers are clean is a one in two-hundred long shot.
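You can sanity-check the headline number with a quick Monte Carlo simulation. This sketch assumes the dataset is 116 vote counts (the figure a commenter below cites) and mirrors the op-ed's thresholds: one digit at 17 percent or more, another at 4 percent or less:

```python
import random
from collections import Counter

def extreme_deviation_rate(n=116, trials=20000, seed=1):
    """Fraction of simulated fair draws of n uniform last digits where
    some digit appears >= 17% of the time AND some digit <= 4%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        tally = Counter(rng.randrange(10) for _ in range(n))
        counts = [tally.get(d, 0) for d in range(10)]
        if max(counts) >= 0.17 * n and min(counts) <= 0.04 * n:
            hits += 1
    return hits / trials

print(extreme_deviation_rate())
```

This only simulates the last-digit part of the argument; the authors' .005 figure combines it with the non-adjacent-digits test.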

Now what?

[via Statistical Modeling]

I actually prefer Professor Mebane's analysis, which can be found here. He has been updating results daily over the last week. He looks at the second digit for Benford anomalies. Previous work of his has examined why first-digit distributions can naturally differ from Benford results, a result recently referenced by fivethirtyeight.com, which also had another analysis of the 7s argument. The paper is fairly accessible, but I've also put up some visualizations of his results here. Also available there are the full data used for the analysis and the R files used – I invite people here to find other ways to show the results, or to find other aspects of the data.

Hi Nathan,

Somewhat related to this, I just finished an election map of Iran. I found it odd that I didn’t see anything like it in the news, so I decided to make one myself. Geographic visualization isn’t my usual thing, but the data is really compelling.

Maybe the Iranian ballot-stuffers used FlowingData text to generate those false election results.

I took the text from this post, then looked at the last digit of the ASCII code for each character (i.e. the ones column). Here is the histogram for that data:

00. |||||||||||||||||||||||||||||||||

01. ||||||||||||||||||||||||||||||||||||||||||||||||

02. ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||

03. |||||||

04. |||||||||||||||||||||||||

05. ||||||||||||||||||||||||||||||||

06. |||||||||||||||||||||||

07. ||||||||||||||||||||||||||||

08. ||||||||||||||||

09. |||||||||||||||||

Clearly, there are too many characters that end with a code of “2” in this post, which makes me suspect that it wasn’t generated from a valid sample.
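For anyone who wants to reproduce the joke, here's a small sketch of the same analysis — the sample string is just a stand-in for the post text:

```python
from collections import Counter

def ascii_last_digits(text):
    """Tally of the ones digit of each character's ASCII/Unicode code point."""
    return Counter(ord(ch) % 10 for ch in text)

def print_histogram(tally):
    for d in range(10):
        print(f"{d:02d}. {'|' * tally.get(d, 0)}")

print_histogram(ascii_last_digits("Maybe the ballot-stuffers used FlowingData text"))
```

Lowercase letters cluster in a narrow band of code points (97-122), so their last digits are nowhere near uniform — which is the point of the joke.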

Hmmmm….

no comment.

Meant to be humorous, I know, but alphabetic characters are not normal data (in the mathematical meaning of the word 'normal'). The last digit or two of election returns should, in theory, be normal data.


Visio’s critique isn’t on the right track; the authors are right that the phenomenon they’ve identified is rare (although they get the probability wrong – it’ll happen in 0.15%, not 0.5%, of elections).

However, go look at the authors’ previous work on elections in Nigeria and, assuming you have ever honestly used statistical tests, you’ll see the flaw in their analysis. They apply a different test to the Nigerian data; in fact, the psychological studies they cite in that work directly belie the Iranian data. They posit that a fraudulent set of random numbers will have too many 1s, 2s, and 3s and not enough high digits. Yet here they see an excess of 7s and a paucity of 5s – neither digit is discussed in their earlier work. The nonadjacent-numbers test was discussed for, but not applied to, the Nigerian data – why is it used here?

The most likely answer is that the authors looked at every diagnostic they’ve ever identified as indicative of a fraudulent election, picked the ones that happened to occur here, and calculated the probability of that occurrence. As a statistical test of fraud, that’s completely worthless.
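The commenter's point is the classic multiple-comparisons problem, and it's easy to demonstrate. This sketch assumes ten independent diagnostics, each with a 5% false-alarm rate on clean data — the specific numbers are illustrative, not taken from the authors' papers:

```python
import random

def false_alarm_rate(num_tests=10, alpha=0.05, trials=20000, seed=0):
    """On clean data each individual test fires with probability alpha.
    Estimate how often AT LEAST ONE of num_tests tests fires."""
    rng = random.Random(seed)
    alarms = sum(
        any(rng.random() < alpha for _ in range(num_tests))
        for _ in range(trials)
    )
    return alarms / trials

print(false_alarm_rate())  # near 1 - 0.95**10, i.e. about 0.40, not 0.05
```

Scan enough diagnostics and report whichever ones fire, and a "rare" pattern stops being evidence of anything.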

Ask yourself: had this phenomenon occurred in the penultimate digit instead of the last digit, would the authors have written the same article?

In fact, in the 2008 Obama/McCain election (a set of 102 state totals from Wikipedia), 20% of the numbers have a 7 and only 5% have an 8 in the penultimate digit. This is rarer than the 7s-and-5s observation here.

If you look at any set of 116 similar random numbers, you’ll find some apparently paradoxical pattern; in fact, it would be much, much more unlikely not to see such a pattern. Would fraud be less likely if each digit were used exactly 11 or 12 times out of the 116 numbers? That’s a much less likely outcome than having one digit occur 17% of the time and another occur 4% of the time.
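That last claim can be computed exactly. With 116 draws over 10 digits, the most even possible split is six digits appearing 12 times and four appearing 11 times; its probability under a uniform multinomial is:

```python
from math import comb, factorial

def prob_most_even_split():
    """Probability that 116 uniform random digits split as evenly as
    possible: six digits x 12 occurrences plus four digits x 11."""
    ways = comb(10, 6)  # which six of the ten digits appear 12 times
    arrangements = factorial(116) // (factorial(12) ** 6 * factorial(11) ** 4)
    return ways * arrangements / 10 ** 116

print(prob_most_even_split())  # a tiny probability, far below the op-ed's 4%
```

So a perfectly even digit table would itself be a freak event — uneven-looking counts are the norm, not the exception.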

Just because the numbers don’t fit one’s tidy little theory doesn’t make them false. And just because the chances of something happening aren’t great doesn’t mean it didn’t happen.

You say, “Made-up number sequences look different from real random sequences (e.g. numbers from McCain/Obama).” It’s hard to decide whether the “e.g.” refers to made-up sequences or real sequences, but if you think McCain/Obama was real, it shows how far into anal darkness your head is.