Jorge Luis Borges wrote a really good fictional short story in 1944 called Funes, the Memorious. It’s about a boy, Funes, who isn’t incredibly bright until one day he falls off his horse and hits his head. After the accident, Funes finds that he suddenly has an amazing memory, with which he remembers every single detail of every moment of his life.
His memory is so vivid that at one point he sees a dog, and a moment later the dog seems different. Funes remembers the way each hair stood on the dog’s back, the direction of the breeze, which way the dog’s tail was pointing, the perspiration on his own body, where everyone else was, etc. That dog could not possibly be the same dog he saw a moment ago.
Funes not only remembered every leaf on every tree of every wood, but even every one of the times he had perceived or imagined it. He determined to reduce all of his past experience to some seventy thousand recollections, which he would later define numerically. Two considerations dissuaded him: the thought that the task was interminable and the thought that it was useless.
Trying to Remember Too Much
In this day and age, when so much of everything is stored in databases and everything is logged, is it possible to remember too much? Technology has enabled us to surveil others, videotape every moment of our lives, store every email, take a seemingly endless river of pictures, record conversations, and log data out the wazoo.
Sure, it’s great to have it, but what use can you make of a year’s worth of data? What about ten years? Or dare I say, a century’s worth of data?
This is when visualization becomes important. It’s our duty to make the ocean of data available without letting the ocean’s never-ending vastness overwhelm the data explorers. Otherwise, our technological memory becomes like that of Funes, and all is lost. OK, cue the dramatic music… now.
Hi you (I don’t know your name, so I can’t use it yet),
Thanks for the reference to the story. You put forth a good subject, to which I would like to contribute four small notes.
One is that I took a course with a great lecturer (let’s just call him Isaco). In his lecture, Isaco gave a brief explanation of how to use a neural network to “remember” a bunch of images, so that when given one of these images – but distorted – the network-machine could spit out the original image.
Then I asked him what would happen if we fed a lot more pictures to the network-machine. His reply was very interesting: he said that at some point, the machine would start spitting out combinations of the images you gave it (in reply to some distorted image input). In his words, the machine would start to “hallucinate”.
That simple example showed me very clearly that too much information (on a very simple net) can just produce bad output.
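I don’t know exactly which network Isaco demonstrated, but the behavior he describes matches the classic Hopfield network: it stores patterns with a Hebbian rule and recalls them from distorted inputs, and once you store patterns well past its capacity (roughly 0.14 times the number of neurons), recall degrades into mixed, “hallucinated” states. Here is a minimal sketch under that assumption; the sizes and pattern counts are purely illustrative:

```python
# A minimal Hopfield-network sketch illustrating memory overload.
# This is my own toy reconstruction, not the lecturer's actual example.
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of neurons (one per "pixel")

def train(patterns):
    # Hebbian outer-product learning rule, with zeroed self-connections.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / N

def recall(W, state, steps=20):
    # Repeated synchronous updates; ideally settles on a stored pattern.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

def distort(p, flips):
    # Flip a few bits to simulate a corrupted input image.
    q = p.copy()
    idx = rng.choice(len(p), size=flips, replace=False)
    q[idx] *= -1
    return q

results = {}
for n_patterns in (5, 40):  # well under vs. well over capacity (~0.14 * N)
    patterns = [rng.choice([-1, 1], size=N) for _ in range(n_patterns)]
    W = train(patterns)
    recovered = recall(W, distort(patterns[0], 10))
    # Overlap of 1.0 means perfect recall of the original pattern.
    results[n_patterns] = float(np.dot(recovered, patterns[0]) / N)
    print(n_patterns, "patterns -> overlap with original:", results[n_patterns])
```

With 5 stored patterns the distorted input is cleaned up almost perfectly; with 40 patterns the same network returns a state far from the original – a blend of memories, which is exactly the “hallucination” in the anecdote.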
The second note is to mention the great importance some therapists place on forgetting as a means of healing (like Richard Bandler, from NLP).
The third point concerns time management methods (like GTD or Zen to Done), which place a lot of importance on freeing “mental RAM”, as they call it, in order to be able to cope well in the world.
The fourth thing is that I wish to agree with you fully that statistics has to concern itself with “insights from data” issues – and probably before the need arises to scale it up from a select few statisticians to whole populations (of web users, for example).