The Pyro Board is a matrix of 2,500 flames that have controllable intensity, which can be used as an audio visualizer. Yeah, really. Just watch the video below.
[via Colossal]
For the past couple of sports seasons, Facebook mapped the most liked team by county. They did it for football (NFL), the NCAA basketball tournament, and baseball (MLB). Although generalized, the maps provide a view of sports fandom and people clusters across the country, and, you know, they're fun.
The Upshot used the same like data, provided by Facebook, and mapped it at the ZIP code level. Then they took it a step further and looked closer at regional rivalries, such as Cubs and White Sox, Yankees and Red Sox, and Dodgers and Angels. Be sure to scroll down to Mets versus Phillies. They incorporated a tidbit of Josh Katz’s dialect map.
The Upshot is off to an impressive start. It’s almost as if The New York Times people have been doing this for a while. [via @KevinQ]
The New York Times released PourOver, a library that lets you do database-like things client-side, so that (1) you, the developer, can worry less about database optimization and server loads and (2) users get a more responsive, faster experience.
PourOver is built around the ideal of simple queries that can be arbitrarily composed with each other, without having to recalculate their results. You can union, intersect, and difference queries. PourOver will remember how your queries were constructed and can smartly update them when items are added or modified. You also get useful features like collections that buffer their information periodically, views that page and cache, fast sorting, and much, much more.
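The composability PourOver describes can be sketched in a few lines. PourOver itself is a JavaScript library, so the class and method names below are hypothetical stand-ins, but the core idea is the same: a query resolves to a set of matching item ids, so union, intersection, and difference compose without re-scanning the collection.

```python
# Hypothetical sketch of composable queries in the PourOver style:
# each query holds the ids of matching items, and set operations
# combine queries without recomputing their results.

class Query:
    def __init__(self, ids):
        self.ids = frozenset(ids)

    def union(self, other):
        return Query(self.ids | other.ids)

    def intersect(self, other):
        return Query(self.ids & other.ids)

    def difference(self, other):
        return Query(self.ids - other.ids)

items = [
    {"id": 1, "color": "red", "size": "S"},
    {"id": 2, "color": "red", "size": "M"},
    {"id": 3, "color": "blue", "size": "M"},
]

red = Query(i["id"] for i in items if i["color"] == "red")
medium = Query(i["id"] for i in items if i["size"] == "M")

print(sorted(red.intersect(medium).ids))  # red AND medium
print(sorted(red.union(medium).ids))      # red OR medium
```

The payoff in the real library is that when items are added or modified, only the affected queries' id sets need updating.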
Also: How great is it that The New York Times is now getting into the habit of releasing code?
Remember those running maps I made with limited data from RunKeeper? Strava, which also provides an app to track your runs and bike rides, has a much more expansive version of popular paths around the world. Their dataset includes over 77 million rides and 19 million runs, summing to about 220 billion data points. Just pan and zoom to your area of interest, and there you go.
I combed through personal data that I’ve actively and passively collected since early graduate school to see how life is different now with a 6-month old.
We heard a little bit about The Upshot last month. Now we get to see it. From editor David Leonhardt on what the site is about:
One of our highest priorities will be unearthing data sets — and analyzing existing ones — in ways that illuminate and explain the news. Our first day of material, both political and economic, should give you a sense of what we hope to do with data. As with our written articles, we aspire to present our data in the clearest, most engaging way possible. A graphic can often accomplish that goal better than prose. Luckily, we work alongside The Times’s graphics department, some of the most talented data-visualization specialists in the country. It’s no accident that the same people who created the interactive dialect quiz, the deficit puzzle and the rent-vs-buy calculator will be working on The Upshot.
Hey I’m on board with any site where Amanda Cox introduces statistical models.
FiveThirtyEight is still evolving, and I expect The Upshot to do the same, so it should be fun to see which way each goes (and what other sites come out of it). For now though, I'm just happy that we get to see this statistics-ish thing happen.
Movoto mapped music preference for various genres, across the United States.
We calculated musical taste scores using data from the National Endowment for the Arts, the Bureau of Labor Statistics, and the U.S. Bureau of Economic Analysis (via the Martin Prosperity Institute) and state level music preferences from Wikipedia. The scores include music genre preference survey data and genre performer concentrations by metro, weighted by that metro's influence on the music scene. We took the scores for each metro and used a spatial statistics method called nearest neighbors to create the heatmap.
Matthew Klein for Bloomberg View explored mortality in America through a slidedeck of charts. The animations between slides grow tedious, but the topics covered, going beyond just national mortality rate, are worth browsing. (Although, can someone tell me why the female mortality rate rose between the 1970s and 2000? I know there's a perfectly valid reason behind the trend, but I can't remember.)
The data itself is also worth your time, in case you’re looking for a side project. It comes from the Centers for Disease Control and Prevention and spans 1968 through 2010.
I can tell you from experience that the query process isn't the smoothest, though that's about as much as you can expect from a government site, I guess. That said, the amount of data, with a variety of demographic breakdowns and categorizations, can make for plenty of worthwhile projects. Highly recommended.
We’ve seen the map of where everyone lives. Now here’s the reverse of that by Nik Freeman: where nobody lives in the United States.
A Block is the smallest area unit used by the U.S. Census Bureau for tabulating statistics. As of the 2010 census, the United States consists of 11,078,300 Census Blocks. Of them, 4,871,270 blocks totaling 4.61 million square kilometers were reported to have no population living inside them. Despite having a population of more than 310 million people, 47 percent of the USA remains unoccupied.
See also Stephen Von Worley’s map from a couple years ago, which shows blocks in the US with only one person per square mile.
Sebastian Raschka offers a step-by-step tutorial for a principal component analysis in Python.
The main purposes of a principal component analysis are the analysis of data to identify patterns and finding patterns to reduce the dimensions of the dataset with minimal loss of information.
Here, our desired outcome of the principal component analysis is to project a feature space (our dataset consisting of n x d-dimensional samples) onto a smaller subspace that represents our data “well”. A possible application would be a pattern classification task, where we want to reduce the computational costs and the error of parameter estimation by reducing the number of dimensions of our feature space by extracting a subspace that describes our data “best”.
That is, imagine you have a dataset with a lot of variables, some of them important and some of them not so much. A PCA helps you identify which is which, so you can pare the dataset down to something less unwieldy and reduce overhead.
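The steps in the tutorial reduce to a few lines of NumPy: center the data, take the eigenvectors of the covariance matrix, and project onto the components with the largest eigenvalues. A minimal sketch, on made-up data where one of three variables is nearly redundant:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples, 3 dimensions; the third is nearly a copy of the first
X = rng.normal(size=(200, 2))
X = np.column_stack([X, X[:, 0] + 0.01 * rng.normal(size=200)])

Xc = X - X.mean(axis=0)            # center each variable
cov = np.cov(Xc, rowvar=False)     # 3x3 covariance matrix
vals, vecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending
order = np.argsort(vals)[::-1]     # largest variance first
W = vecs[:, order[:2]]             # keep top 2 components
Z = Xc @ W                         # projected data, shape (200, 2)

explained = vals[order] / vals.sum()
print(Z.shape, explained.round(3))
```

Because the third variable is almost a copy of the first, the top two components capture nearly all of the variance, which is exactly the "minimal loss of information" the tutorial is after.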
As a lesson on conditional probability for himself, Walt Hickey watched 403 episodes of “The Joy of Painting” with Bob Ross, tagged them with keywords on what Ross painted, and examined Ross’s tendencies.
I analyzed the data to find out exactly what Ross, who died in 1995, painted for more than a decade on TV. The top-line results are to be expected — wouldn’t you know, he did paint a bunch of mountains, trees and lakes! — but then I put some numbers to Ross’s classic figures of speech. He didn’t paint oaks or spruces, he painted “happy trees.” He favored “almighty mountains” to peaks. Once he’d painted one tree, he didn’t paint another — he painted a “friend.”
Other findings include cumulus and cirrus cloud breakdowns, hill frequency, and Steve Ross (son of Bob Ross) patterns.
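Hickey's analysis boils down to conditional probabilities over episode tags: given that Ross painted a mountain, how often did a cloud show up too? A toy sketch with made-up tag data (the real dataset covers 403 episodes):

```python
# Each episode is a set of tags for what Ross painted in it.
episodes = [
    {"tree", "mountain", "lake"},
    {"tree", "cloud"},
    {"mountain", "cloud"},
    {"tree", "mountain", "cloud"},
    {"cabin", "lake"},
]

def p(tag, episodes):
    """Unconditional probability that an episode includes the tag."""
    return sum(tag in ep for ep in episodes) / len(episodes)

def p_given(tag, given, episodes):
    """Probability of the tag, conditional on another tag appearing."""
    subset = [ep for ep in episodes if given in ep]
    return sum(tag in ep for ep in subset) / len(subset)

print(p("cloud", episodes))                    # across all episodes
print(p_given("cloud", "mountain", episodes))  # given a mountain
```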
This chart-map-looking thing from Nightly News is making the rounds, and it's not good. I'm opening the comments below for critique so that you can release your angst. A signed copy of Data Points goes to a randomly selected commenter at the end of this week. Have at it.
Earthquakes are in the news a lot lately. A quick search shows a 7.6 off the coast of the Solomon Islands, a 6.6 in Nicaragua, and a 7.1 off the southwest coast of Papua New Guinea, and this was just last week. Not good news at all, but just how common are these earthquakes? Can we look back farther? Yes. In addition to a real-time feed of earthquakes, the United States Geological Survey maintains an ever growing archive of earthquakes detected around the world, and they make it easy to query and download.
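The archive is queryable over HTTP through the USGS event web service. A minimal sketch that builds a query URL for magnitude 6+ quakes in one week (the parameter names follow the USGS service's documented options; no request is actually made here):

```python
from urllib.parse import urlencode

BASE = "https://earthquake.usgs.gov/fdsnws/event/1/query"

params = {
    "format": "geojson",        # csv and xml are also available
    "starttime": "2014-04-07",
    "endtime": "2014-04-14",
    "minmagnitude": 6,
}
url = BASE + "?" + urlencode(params)
print(url)
```

Fetching the URL (with `urllib.request.urlopen`, say) returns the matching events, ready for download and analysis.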
This year’s polar vortex churned up some global warming skeptics, but as we know, it’s more useful to look at trends over significant spans of time than isolated events. And, when you do look at a trend, it’s useful to have a proper baseline to compare against.
To this end, Enigma.io compared warm weather anomalies against cold weather anomalies, from 1964 to 2013. That is, they counted the number of days per year that were warmer than expected and the days it was colder than expected.
An animated map leads the post, but the meat is in the time series. There’s a clear trend towards more warm.
Since 1964, the proportion of warm and strong warm anomalies has risen from about 42% of the total to almost 67% of the total – an average increase of 0.5% per year. This trend, fitted with a generalized linear model, accounts for 40% of the year-to-year variation in warm versus cold anomalies, and is highly significant with a p-value approaching 0.0. Though we remain cautious about making predictions based on this model, it suggests that this yearly proportion of warm anomalies will regularly fall above 70% in the 2030’s.
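The mechanics of that trend fit can be sketched simply. Enigma used a generalized linear model; here, as an illustration on synthetic yearly proportions (not their data), an ordinary least-squares line shows the same idea of regressing the warm-anomaly share against year and checking how much variation it explains:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1964, 2014)
# synthetic proportion of anomaly-days that were warm, rising ~0.5%/year
prop_warm = 0.42 + 0.005 * (years - 1964) + rng.normal(0, 0.04, years.size)

slope, intercept = np.polyfit(years, prop_warm, 1)
fitted = slope * years + intercept

# R^2: share of year-to-year variation explained by the trend line
ss_res = ((prop_warm - fitted) ** 2).sum()
ss_tot = ((prop_warm - prop_warm.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot

print(f"slope: {slope:.4f} per year, R^2: {r2:.2f}")
```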
Explore in full or download the data and analyze yourself. Nice work. [Thanks, Dan]
Pornhub continues their analysis of porn viewing demographics in their latest comparison of pageviews per capita between red and blue states (SFW for most, I think). The main question: Who watches more?
Assuming the porn consumption per capita is normally distributed for each state and that different states have independent distribution of porn consumption per capita, we can say with 99% confidence the hypothesis that the per capita porn consumption of democratic states is higher than the republican states.
Okay, the result statement sounds a little weird, but when you look at the rates, the conclusion seems clear. The states with the highest viewing per capita are shown above, and for some reason Kansas is significantly higher than everyone else. Way to go.
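Stripped of the odd phrasing, what Pornhub describes is a two-sample comparison of mean pageviews per capita between two groups of states. A sketch with Welch's t-statistic on made-up state-level numbers (not their data):

```python
import math

blue = [180, 195, 210, 205, 188, 199, 215, 192]  # pageviews per capita
red = [165, 158, 172, 180, 161, 169, 175, 163]

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(blue, red)
print(round(t, 2))  # a large positive t favors the blue-state mean
```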
For a clearer view, Christopher Ingraham charted the same data but incorporated the percent of Obama voters for each state. Interpret as you wish:
Again, note Kansas high on the vertical axis.
Update: Be sure to read this critique for a better picture of what you see here.
Looking for a job in data science, visualization, or statistics? There are openings on the board.
Digital Designer, Editorial Content for Bauer Media in Central London.
Research and Data Visualization Associate for National Journal in Washington, DC.
Visual Journalist for Money.com in New York.
The American Community Survey, an ongoing survey that the Census administers to millions per year, provides detailed information about how Americans live now and decades ago. There are tons of data tables on topics such as housing situations, education, and commute. The natural thing to do is to download the data, take it at face value, and carry on with your analysis or visualization.
However, as is usually the case with data, there’s more to it than that. Paul Overberg, a database editor at USA Today, explains in a practical guide on how to get the most out of the survey data (which can be generalized to other survey results).
Journalists who use ACS a lot have a helpful slogan: “Don’t make a big deal out of small differences.” Journalists have all kinds of old-fashioned tools to deal with this kind of challenge, starting with adverbs: “about,” “nearly,” “almost,” etc. It’s also a good idea to round ACS numbers as a signal to users and to improve readability.
In tables and visualizations, the job is tougher. These introduce ranking and cutpoints, which create potential pitfalls. For tables, it’s often better to avoid rankings and instead create groups—high, middle, low. In visualizations, one workaround is to adapt high-low-close stock charts to show a number and its error margins. Interactive data can provide important details on hover or click.
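The "small differences" slogan has a concrete check behind it. ACS estimates ship with 90 percent margins of error, and the standard Census guidance for comparing two estimates is to combine their margins: the difference matters only if it exceeds the combined margin of error. A sketch (income figures are made up):

```python
import math

def moe_of_difference(moe1, moe2):
    # square-root-of-sum-of-squares approximation from ACS guidance
    return math.sqrt(moe1 ** 2 + moe2 ** 2)

def significantly_different(est1, moe1, est2, moe2):
    return abs(est1 - est2) > moe_of_difference(moe1, moe2)

# e.g. median incomes of 52,300 ±1,800 versus 53,100 ±2,100:
# an $800 gap inside a ~$2,766 combined margin is not a story
print(significantly_different(52300, 1800, 53100, 2100))
```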
If you do any kind of data reporting, whatever field it’s in, you should be familiar with most of what Overberg describes. If not, better get your learn on.
Stephen Pettigrew and Reuben Fischer-Baum, for Regressing, compared 11 million brackets on ESPN.com against those of pundits.
To evaluate how much better (or worse) the experts were at predicting this year’s tournament, I considered three criteria: the number of games correctly predicted, the number of points earned for correct picks, and the number of Final Four teams correctly identified. Generally the experts’ brackets were slightly better than the non-expert ones, although the evidence isn’t especially overwhelming. The analysis suggests that next year you’ll have just as good a chance of winning your office pool if you make your own picks as if you follow the experts.
Due to availability, the expert sample size is a small 53, but the expert brackets do appear to land in roughly the same range as everyone else's. Still too noisy to know for sure though.
If anything, this speaks more to the randomness of the tournament than it does about people knowing what teams to pick. It’s the same reason why my mom, who knows nothing about basketball or any sports for that matter, often comes out ahead in the work pool. The expert picks are just a point of reference.
Open data consultancy Conveyal released Disser, a command-line tool that disaggregates geographic data to show more detail. For example, we've seen data represented with uniformly distributed dots to represent populations, which is fine for a zoomed-out view. However, when you get in close, it can be useful to see distributions more accurately represented.
If the goal of disaggregation is to make a reasonable guess at the data in its pre-aggregated form, we’ve done an okay job. There’s an obvious flaw with this map, though. People aren’t evenly distributed over a block — they’re concentrated into residential buildings.
So Disser combines datasets of different granularity, so that you can see spreads and concentrations that are closer to real life.
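The idea behind that disaggregation, in miniature: instead of spreading a block's population uniformly, allocate it across known residential buildings in proportion to their size. Toy numbers below; Disser itself is a command-line tool operating on real geodata.

```python
block_population = 120
buildings = {"a": 10, "b": 30, "c": 20}  # residential floor area units

total_area = sum(buildings.values())
allocated = {
    b: block_population * area / total_area
    for b, area in buildings.items()
}
print(allocated)  # people per building, summing to the block total
```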
As part of the You Are Here project from the MIT Media Lab, an exploration of independent coffee shops in San Francisco:
Independent coffee shops are positive markers of a living community. They function as social spaces, urban offices, and places to see the world go by. Communities are often formed by having spaces in which people can have casual interactions, and local and walkable coffee shops create those conditions, not only in the coffee shops themselves, but on the sidewalks around them. We use maps to know where these coffee shop communities exist and where, by placing new coffee shops, we can help form them.
Each dot is a coffee shop, and the shaded spots around the dot represent the areas nearest each shop. It’s an interesting, more granular contrast to coffee chain geography and provides a better sense of a city’s layout.
See also the same idea applied to Cambridge. I imagine there are more cities to come, as the data is gleaned from the Google Places and Google Distance Matrix APIs.
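The "area nearest each shop" shading is a Voronoi-style partition, which you can approximate by assigning grid cells to their nearest shop by straight-line distance. A sketch with made-up coordinates (not the MIT data):

```python
shops = {"beanery": (1.0, 1.0), "roast": (4.0, 1.0), "drip": (2.5, 4.0)}

def nearest_shop(x, y):
    # squared distance is enough for comparing which shop is closest
    return min(shops, key=lambda s: (shops[s][0] - x) ** 2 + (shops[s][1] - y) ** 2)

# classify a coarse grid; shading these cells draws the regions
grid = {(x, y): nearest_shop(x, y) for x in range(6) for y in range(6)}
print(grid[(0, 0)], grid[(5, 0)], grid[(2, 5)])
```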