• Proving the Non-experts Wrong

    August 7, 2007  |  Statistics

    UPDATE: I found the essay! Programmers Need To Learn Statistics Or I Will Kill Them All by Mr. Zed Shaw

    There was this online essay that I read by a guy in the computer science/electrical engineering field who totally loves statistics. He read textbooks and truly spoke like someone who respects data. I thought I had bookmarked it, but now I have no clue where the heck it is. Argh :(. If anyone knows who I'm talking about, please tell me!

    He worked at a company where everyone thought they "knew" statistics. Automated reports would give them numbers, and they'd fully trust them. That was statistics to the computer engineers: crunch some numbers and see what the software gives you. As a result, these engineer-types really pissed off the author of the article. Continue Reading

  • TED Talk: What do we really know about the spread of AIDS?

    July 13, 2007  |  Statistics

    In her TED talk, Emily Oster challenges our conception of AIDS and suggests other covariates that we need to look at (e.g. export volumes of coffee). Until we get out of the mindset that poverty and health care are the only causes/predictors of AIDS, we won't be able to find the best way to fight the disease. Another great use of data.

    I do have one small itch to scratch, though. Emily had a line plot showing export volumes and another line, on the same grid, of HIV infections, both over time. It reminds me of the plots Al Gore uses with carbon dioxide levels and temperature. Anyways, using the plot, Emily suggests a very tight relationship between export volumes and HIV infections. Isn't export volume pretty tightly tied to poverty? I don't know. She's the economist, so she would know (A LOT) better than me. I guess I just wish she had talked a little bit about the new and different data she has that compels us to change our conceptions.

  • Making Public Data Public

    July 11, 2007  |  Data Sources

    As Jon Udell has mentioned, there's a ton of data online, but we often can't find it; it's hidden in the deep, dark basement of some website. He has proposed that people bookmark public datasets on del.icio.us under the tag "publicdata". I think this is a great idea. In turn, you can subscribe to the feed at the url http://del.icio.us/tag/publicdata.
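
    Just to make that concrete, here's a minimal sketch of polling the tag feed in Python. It assumes the del.icio.us RSS feed pattern (http://del.icio.us/rss/tag/publicdata) and the third-party feedparser library; neither comes from Jon's post, so treat both as assumptions.

        # Minimal sketch: list bookmarks tagged "publicdata".
        # Assumes del.icio.us serves RSS at /rss/tag/<tag> and that
        # feedparser is installed (pip install feedparser).
        import feedparser

        FEED_URL = "http://del.icio.us/rss/tag/publicdata"  # assumed feed URL

        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            # Each entry is one bookmarked dataset: its title and link.
            print(entry.title, "->", entry.link)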

    I've been doing this for a while already, but I had just been tagging with "data". So I'm going to join the party and start tagging with publicdata, and I hope others will too. Until sites like Many Eyes and Swivel get more wind beneath their wings, I think it's necessary.

  • Finding Weirdness in Temperature Data

    July 9, 2007  |  Mistaken Data

    [Plot: temperature data from Weather Underground]

    After parsing Weather Underground pages to grab temperature data, it's time to look at the data. Can't download all that data and not do anything with it!

    First off, on the initial pass of my parsing script, I accidentally cut the month range short, so I didn't get any data for December from 1980 to 2005 (a sketch of that kind of off-by-one is below). It should be noted that these plots don't show this missing data. Um, there are no axes or labels either. Sorry, I got a little lazy, but that's not the point now anyways.
    Continue Reading
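
    The actual script isn't shown here, but a December gap smells like a classic off-by-one with an exclusive upper bound. A hypothetical Python sketch (the URL format is made up purely for illustration):

        # Hypothetical sketch of the off-by-one that would drop December.
        # BASE is a placeholder, not Weather Underground's real endpoint.
        BASE = "http://www.wunderground.com/history"

        for year in range(1980, 2006):      # 1980 through 2005
            # Bug: range(1, 12) yields months 1..11, so December (12)
            # is never requested. The fix is range(1, 13).
            for month in range(1, 13):
                url = f"{BASE}?year={year}&month={month}"
                # ...fetch the page and parse out the daily temperatures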

  • Juice Testing in Competitive Sports

    July 8, 2007  |  Mistaken Data

    [Image: juice testing]

    It's easy to see how statistics got this bad rap, because it's so easy to lie with data, charts, and graphs. Sometimes it's on purpose -- someone might try to present "good" results that actually suck. Sometimes it's accidental -- someone might have misread, or never read, the documentation that came with the data. In the case of Swivel's most recently featured graph, it was the latter. A case of mistaken identity, so to speak.

    The data about doping tests in sports came from here. Now the graph on Swivel would have you believe that the data represent the number of doping cases found in each sport; however, according to the USADA report, the data are actually the number of tests the association conducted inside and outside competition during the first quarter of this year. The report contains no data on the USADA's findings.

    What We Learn

    What can we learn from this? It's great to visualize data, but you have to be careful. Read the documentation. Find out what the data is about, because without context, the visualization and any findings are practically useless. Statistics isn't about lying. In fact, it's the exact opposite. Statistics came about, and exists today, to reveal the truth.

  • Immigration Data Available from Homeland Security

    July 5, 2007  |  Data Sharing

    The Times today reported a Sharp Rise Seen in Applications for Citizenship, and of course there was a graphic to complement the article, showing the rise in applications over the years as well as a by-country breakdown for 2006.

    [Graphic: Surge Seen in Applications for Citizenship]

    Graphics in The Times always cite the source, which in this case was the Department of Homeland Security. I thought, "Do they have some kind of source who they actually call to get this data?" I feel pretty dumb now for thinking such a thing. In fact, I always see that source on all of the graphics and had just assumed that there was some connection between The Times and the source.

    Wrong.

    So lazy me finally decided to look into things, and you know what, the Department of Homeland Security has a whole section of its website for Immigration Statistics. There are freely available spreadsheets, reports, publications, and even a little something on data standards and definitions, prepared by none other than the Office of Immigration Statistics. Very pleased.

    It's kind of sad that this is just now news to me, but better late than never, eh?

  • Social Data Analysis by the Swarm

    July 4, 2007  |  Social Data Analysis

    [Photo: a flock of starlings]

    Swarm Theory, by Peter Miller, talks about how some animals aren't smart as individuals, but as a group or a swarm they can do amazing things. The above is a flock of starlings that can change shape even though no single bird is the leader.

    Can we apply swarm theory to social data analysis? As individuals, we might not be able to hold onto or understand a dataset, but as a group, we can come at a dataset from different perspectives, look at very small parts, and then as an end result -- extract real, worthwhile meaning.

    That's how swarm intelligence works: simple creatures following simple rules, each one acting on local information. No ant sees the big picture. No ant tells any other ant what to do. Some ant species may go about this with more sophistication than others. (Temnothorax albipennis, for example, can rate the quality of a potential nest site using multiple criteria.) But the bottom line, says Iain Couzin, a biologist at Oxford and Princeton Universities, is that no leadership is required. "Even complex behavior may be coordinated by relatively simple interactions," he says.
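
    Just to make "simple rules, local information" concrete, here's a toy sketch in Python (entirely illustrative, not from Miller's article): each agent repeatedly nudges its value toward the average of a few neighbors, and with no leader and no global view, the whole group converges to agreement.

        # Toy demo of leaderless coordination (a DeGroot-style consensus).
        # Each agent sees only a few neighbors, yet the group converges.
        import random

        NUM_AGENTS = 50
        STEPS = 200
        NEIGHBORS = 3  # each agent looks at the next few agents on a ring

        # Every agent starts with a random value (an "opinion").
        values = [random.uniform(0, 100) for _ in range(NUM_AGENTS)]

        for _ in range(STEPS):
            updated = []
            for i, v in enumerate(values):
                # Local information only: a handful of ring neighbors.
                nearby = [values[(i + k) % NUM_AGENTS]
                          for k in range(1, NEIGHBORS + 1)]
                updated.append((v + sum(nearby)) / (len(nearby) + 1))
            values = updated

        # The spread shrinks toward zero: global agreement from local rules.
        print("spread:", max(values) - min(values))

    No agent in that loop ever sees more than a few neighbors, which is the point of the analogy: each of us only has to digest a small slice of a dataset.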

    It reminds me of that common saying, or maybe it's a quote, about how if you put a bunch of monkeys in a room with typewriters, you'll eventually get the works of Shakespeare via the magic of probability. While the whole monkey thing is a bit far-fetched, swarm theory is certainly worth my attention.

  • CitiStat: Injured on Duty “Data”

    July 2, 2007  |  Data Sharing

    CitiStat Buffalo

    I was flipping through the channels the other night and came across a televised CitiStat meeting from June 1. A bit of a coincidence, since I happened to be looking at the CitiStat website earlier that day. What's CitiStat, you ask? Well, it's a spin-off of CompStat, a program in NYC and LA that holds police officials accountable for their actions by looking at data -- number of homicides, where they happened, what's being done, etc. CitiStat, in Buffalo, is the same thing, but for the Police, the Fire Department, and whatever else they can think of, and it seems not quite as reputable.

    Anyways, they were talking to some city official about fire department employees who were IOD -- um, that's injured on duty (but I must've heard "IOD" like a billion times). There was some discrepancy over the definition of IOD, and as a result, the data was worthless. The police commissioner spoke as well, with his own IOD numbers. After that, there was a lot of arguing, and in the end a meeting was agreed upon. Well, not really. They agreed that they would schedule some meeting, but it's been a year of "What is an IOD?" Pretty sure that won't be settled for a while.

    They were also able to agree that the number of IODs was somewhere between 50 and 200. Yay.

    So despite the fact that the CitiStat program is two years old, there's still lots to be done. Officials aren't used to recording and looking at data, and it's clear few even had any notion that data could be useful. However, I am glad that they're making the effort -- even if all of the data is stored in a bunch of inconsistent Excel spreadsheets :P.
