A while back Alex Reinhart, a statistics instructor and PhD student at Carnegie Mellon University, was working on a guide for doing statistics right. The goal was to teach through anecdotes of statistics done wrong, from statistical significance and p-values to regression and confounding factors.
Statistics Done Wrong is a book now. If you analyze data with any regularity but aren’t sure if you’re doing it correctly, get this book. It’s a concise guide with interesting examples and a light, easy-to-read tone.
Here’s the pitch, which sounds about right:
Statistics Done Wrong is a pithy, essential guide to statistical blunders in modern science that will show you how to keep your research blunder-free. You’ll examine embarrassing errors and omissions in recent research, learn about the misconceptions and scientific politics that allow these mistakes to happen, and begin your quest to reform the way you and your peers do statistics.
This reminds me of one of my first assignments as a grad student. The professor gave us a paper published in the scientific journal Nature and told us to find flaws in the analysis. There were so many holes: the data was spotty, the assumptions about data collection were illogical, and the resulting analysis raised eyebrows. Yet if you didn’t know any better, you’d think the results were impressive.
I don’t think there was anything malicious going on. It was just that the researchers (and the reviewers) didn’t know any better.