The unassuming little Descriptive Camera made me rethink data. This project by Matt Richardson was on display at the ITP Spring Show. The basic premise is that you take a photo and the camera spits out a textual description of what it sees. The results are remarkably accurate, detailed, and humorous.
Here’s what my photo said:
A woman wearing a seriously awesome jacket that is printed with yellow, blue, and grey circles looks at her ipad rather than making eye contact with Matt Richardson.
I mean, my jacket *IS* seriously awesome! So it not only described what it saw but also showed great fashion sense. What a clever programmer, you may be thinking.
Ah, but it’s all a ruse. Albeit a very novel and sly one. Matt described being underwhelmed by the EXIF data digital cameras provide, which gives you things like date, time, camera model, and sometimes geospatial info. He wanted to see a world where cameras actually told you about the contents of the photo. Undeterred by the fact that this type of technology isn’t feasible or practical right now, Matt decided to take a more human approach. He uses Amazon’s Mechanical Turk (or, alternatively, instant messages to his friends) to subvert the computational task of providing a textual description of the photo.
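If you want to see just how thin that EXIF layer is, here’s a minimal sketch using the Pillow library. The file name, date, and camera model below are made up for illustration; a real camera file would carry a longer list of tags, but still nothing about what’s actually in the picture:

```python
from PIL import Image

# Build a tiny JPEG with a couple of EXIF tags so the example is
# self-contained. 306 = DateTime, 272 = Model in the standard EXIF tag table.
exif = Image.Exif()
exif[306] = "2012:05:15 14:30:00"   # hypothetical timestamp
exif[272] = "Descriptive Camera"    # hypothetical model name
Image.new("RGB", (8, 8)).save("photo.jpg", exif=exif.tobytes())

# Reading it back: this is roughly all a camera tells you about a shot.
meta = Image.open("photo.jpg").getexif()
date_time = meta.get(306)
model = meta.get(272)
print(date_time, model)
```

Date, time, model, maybe GPS coordinates: useful, but it tells you where and when, never *what*. That gap is exactly what the Descriptive Camera fills with human eyes instead of an algorithm.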
So back to how this made me rethink data. It struck me that sometimes it’s not what’s immediately in front of you that matters. Sometimes it’s the shadow of the thing that’s important; sometimes it’s what envelops it, or connects it to its surroundings, or maybe even a subjective description of what it is. Sometimes it’s not a jacket… it’s a seriously awesome jacket.