From the Winnipeg Sun. Something isn't right here. [via]
I knew things were bad, but I didn't know they were this bad. Obama has his work cut out for him. [Thanks, @adamsinger]
Fox News tried to show the change in the top tax rate if the Bush tax cuts expire, so they showed the rate now and what it'd be in 2013. Wow, it'll be around five times higher. Wait. No.
The value axis starts at 34 percent instead of zero, which you don't do with bar charts, because length is the visual cue. That is to say, when you look at this chart, you compare how high each bar is. Fox News might as well have started the vertical axis at 34.9 percent. That would've been more dramatic.
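The exaggeration is easy to quantify. Here's a minimal sketch (the 35 and 39.6 percent figures are the rates shown in the chart) of how a truncated baseline inflates the apparent ratio between two bars:

```python
def apparent_ratio(a, b, baseline=0.0):
    """Ratio of the drawn bar lengths when the value axis starts at `baseline`."""
    return (b - baseline) / (a - baseline)

now, later = 35.0, 39.6  # top tax rate in percent, now vs. 2013

honest = apparent_ratio(now, later)                # ~1.13x: the real difference
truncated = apparent_ratio(now, later, baseline=34.0)  # ~5.6x: what Fox drew
```

With a zero baseline the second bar is about 13 percent longer; start the axis at 34 and it's drawn 5.6 times longer, which is exactly the "five times higher" impression the chart gives.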
Here's what the bar chart is supposed to look like:
With a difference of 4.6 percentage points, the change doesn't look so crazy.
[via Effective Graphs]
The Mitt Romney campaign put this Venn diagram up a few days ago, aiming to show the "promise gap." On the left is an Obama promise, and on the right is the result. In the middle, where the promise and the result overlap, is supposedly the gap. Wait, that's not right.
A team from Imperial College found that in 2009-10, nearly 20,000 adults were coded as having attended paediatric outpatient services, and 3,000 patients under 19 were apparently treated in geriatric clinics. Even more striking, between 15,000 and 20,000 men have been admitted to obstetric wards each year since 2003, and almost 10,000 to gynaecology wards.
It's hard to put your faith in analysis, visualization, policy, and anything else that comes out of data with reports like these. With human error being a known issue, we have to find better ways of inputting and double-checking data. Unfortunate mistakes at the outset only lead to bigger problems down the line.
From Gizmodo, this shows battery size in the new iPad versus that of the iPad 2. The battery in the former is 70 percent bigger than that of the latter. Something's not right here.
Charts and graphs are great, because they can let you see a pattern that you might not see in a spreadsheet, but they only work when you use the actual data. Fox News isn't doing themselves any favors by putting up this chart. It shows the recently announced drop in the unemployment rate to 8.6 percent as a non-change.
The argument behind this graph in The Wall Street Journal is that the middle class has most of the money, which ties into a larger argument about who should be taxed what. There is, after all, a spike in the middle. Is that really the case though? Sound off in the comments.
(Cheat sheet: Jonathan Chait explains what's going on and Kevin Drum improves the graph to show more truth, although his graph can be improved, too. Grab the data here [Excel spreadsheet] from the IRS, and give it a go.)
President Barack Obama delivered his State of the Union address yesterday, and this year it was "enhanced" by charts and graphs. Basically, as Obama spoke, graphics that you could equate to PowerPoint slides showed up on the side. What'd you think of the enhancement? Did it add to or detract from the message? Were the graphics used honestly and effectively?
One thing's for sure: there's something wrong with that bubble chart. Uh oh.
I was going to post this graphic from Good when it came out, but decided not to. It was another case of wrongly sized bubbles, a mistake I made myself when I first started out. But they fixed the problem, so now we can see what a big difference it makes.
On Friday, Michael D. Smith, dean of the Harvard faculty of arts and sciences, issued a letter to the faculty confirming the inquiry and saying the eight instances of scientific misconduct involved problems of “data acquisition, data analysis, data retention, and the reporting of research methodologies and results.” No further details were given.
This is why we don't just accept any old data and why we care about the methodology behind the numbers. Stuff like this always reminds me of an exam question that asked us to investigate the data from an article in a prominent scientific journal. The analysis was all wrong.
Sometimes data is wrong out of ignorance. Other times it's wrong because people make stuff up. I can understand the former, but why you would ever do the latter is beyond me.
Update: More details on what happened from research assistants' point of view on the Chronicle. [thx, Winawer]
No clue where this is from, but something seems sort of off, no? I guess we should take the title literally. By the numbers... only.
I'm going to give the benefit of the doubt though, and assume this was just an honest mistake. Here's my guess about what happened. A deadline was coming up quick, and a graphics editor put this together to get a feel for what the final design would look like. He then saved it as a different file, and then went to work. Except when it came time to send the file to the printers, the editor sent the wrong file. Actually, now that I think about it, I'm surprised this doesn't happen more often.
What? I don't see anything wrong with it.
This graphic on the history and future of information has been making the rounds. Several people sent it to me a while back, but it didn't seem quite right, so I didn't post it; however, this post from PZ Myers compelled me to take another look. Myers says:
Some days, I think other people must be aliens. Or I must be. For instance, there's a lot of noise right now about this article analyzing the future of information and media that, if you read the comments, you will discover that people are praising to an astonishing degree. I looked at it and saw this graph [above graphic]. And my bullshit detector went insane. It's supposed to be saying something about where people are and will be getting their information, but there's no information about where this information came from, and it's meaningless!
Yikes. Take out the boxing gloves. Looks like we've got another clash between the technical crowd and the design-ish, mainstream crowd. The comments from both sides are pretty interesting: one group says how visually appealing and informative the graphic is, while the other criticizes it for failing in every way.
Clearly the graphic is not based on any real data or metric. It goes off history and probably a lot of Wikipedia entries, and then shapes and sizes go off feeling. So as an analytical graph, it doesn't work. But what about as an opinion in graph form? Does it work then? What do you think? Is this graphic a crime against all that is good in visualization or does it work for what it was trying to do?
If there's anything good that has come out of America's financial crisis, it's the interesting and high-quality infographics. This isn't one of them. Below is an ill-conceived bubble chart from BillShrink that "shows" average U.S. consumer spending. Notice anything wrong with it?
Bar versus bubble debate aside, there is a ton of room for improvement, as well as a huge need for some fact-checking and common sense. For a blog on a site for personal finance, the graphic is, well, not something to be proud of. FlowingData readers know that I like to stay away from heavy-handed critique on what works and what doesn't (I leave that to you guys), but this BillShrink graphic is just so clearly confusing that it's worth pointing out what doesn't work so we can learn from others' mistakes. Can you find the flaws?
I know next to nothing about the economy, stocks, and investments, but I do know a little bit about charts and graphs. The circles above were prepared by someone at JP Morgan. I don't know, you might have heard of 'em. The circles are based on data from Bloomberg and are meant to show the change in market value from 2007 to 2009. The problem here is that the creator sized the circles by diameter instead of by area, so the difference looks ginormous. I mean, the value change is significant, but not that big.
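Here's a quick sketch of why diameter sizing goes so wrong. The market values are made up for illustration (one company worth four times the other), but the math is the general rule: since we perceive a circle's size by its area, the radius should grow with the square root of the value.

```python
import math

def radius_by_area(value):
    # Correct: make the circle's AREA proportional to the value,
    # so the radius grows with the square root of the value.
    return math.sqrt(value / math.pi)

def radius_by_diameter(value):
    # Wrong: make the radius (i.e., diameter) proportional to the value.
    return float(value)

big, small = 4.0, 1.0  # hypothetical market values, one 4x the other

# Perceived size is area, which scales with radius squared:
correct_area_ratio = (radius_by_area(big) / radius_by_area(small)) ** 2      # 4x
wrong_area_ratio = (radius_by_diameter(big) / radius_by_diameter(small)) ** 2  # 16x
```

A 4x difference in value ends up looking like a 16x difference on the page. The bigger the true gap, the worse the distortion gets.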
This video shows statistics centered on atheism, claiming that atheism is correlated with a healthy society. I don't want to turn this into a religious debate, but I really don't like these types of videos, slide shows, etc. It's not the ideas that bother me; it's that some people think it's a great idea to rattle off a bunch of numbers to "prove" a point. Never mind the biases, invalid studies, poor analysis, cruddy data, and "results" taken out of context.
What do you think? Do you buy this stuff?
Peter Donnelly talks about the misuse of statistics in his TED talk from a couple of years back. The first two-thirds of the talk is an introduction to probability and its role in genetics, which, admittedly, didn't hold much of my interest. The last third, however, gets a lot more interesting.
Donnelly talks about a British woman who was wrongly convicted in large part because of a misuse of statistics. A so-called expert cited how improbable it would be for two children to die of sudden infant death syndrome, but it turns out that "expert" was making incorrect assumptions about the data. This doesn't surprise me since it happens all the time.
People misuse statistics every day (intentionally and unintentionally), and oftentimes it doesn't hurt much (which doesn't make it any better), but in this case improper use directly affected someone's life in a very big way. One of the most common bad assumptions I see is that every observation is independent, which often is not the case. As a simple example, if it's raining today, does that change the probability that it will rain tomorrow? What if it didn't rain today?
In other words, the next time you're thinking of making up or tweaking data, don't; and the next time you need to analyze some data but aren't sure how, ask for some help. Statisticians are nice and oh so awesome.
Here's Donnelly's talk:
On Last.fm, someone took snapshots of some Linkin Park songs, compared them, and concluded that all Linkin Park songs look the same. I guess at a glance, the songs might appear the same because of the dark chunk towards the middle left, but it kind of stops there. Sure, there's some loud-to-soft and soft-to-loud alternation, but who likes songs that are loud (or soft) throughout?
The beginning of the post:
Each image above shows the audio level in (roughly) the first 90 seconds of a Linkin Park song. The tempo has been adjusted for a few tracks for better visual alignment.
Wait a minute. The tempo was adjusted for better visual alignment? If you're adjusting the tempo, then really, any set of songs can be made to look the same. On top of that, we don't know the x-axis or y-axis units. Finally, there's a lot more to a song than dynamics -- such as key, tempo, rhythm, and lyrics.
I saw this map of the average snow levels in Buffalo. I think I just glanced at it and that was about it. When you first look at the map, what do you make of the colors? When I see green for snow levels, I think no snow. Am I crazy? What do you think?
So the image was kind of in my head all this summer while I was in NYC. When I told people that I was going back to Buffalo after my internship, they always gave this look that said, "Ha, have fun during the winter," and then they would actually say it and then go into how they measure the snow level by comparing it against a giant pole.