I thought this was a joke when I first read it, but scientists from Japan's ATR Computational Neuroscience Laboratories have developed software that can map brain activity to an image. Subjects were shown letters from the word "neuron" and the images were reconstructed and displayed on a computer screen.
A spokesman at ATR Computational Neuroscience Laboratories said: “It was the first time in the world that it was possible to visualise what people see directly from the brain activity.
“By applying this technology, it may become possible to record and replay subjective images that people perceive like dreams.” The scientists, led by chief researcher Yukiyasu Kamitani, focused on the image recognition procedures in the retina of the human eye.
While a person looks at an object, the eye's retina recognises the image and converts it into electrical signals, which are sent to the brain's visual cortex.
The research investigated how these electrical signals can be captured and reconstructed into images; the study will be published in the US journal Neuron.
I’m not sure how much brain activity from the retina has to do with activity during dreams, but it’s interesting nevertheless (although I am sure – like all interesting science – it is slightly hyped by the media).
[via Telegraph & Pink Tentacle & Chunici]
Straight out of science fiction: http://www.imdb.com/title/tt0101458/. But isn't the real question, the one that also comes out of the movie, whether we should rewatch our dreams at all? Would this technology be abused more than it's properly used?
From the sounds of it, I think they're just saying that the retina converts the vision of the object into electrical signals, which they then studied. I'm assuming the brain generates similar electrical signals during dreams, which could be examined and converted to an image in the same way.
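To make the idea concrete, here's a toy sketch of that decoding step. This is purely illustrative and not the actual ATR method (the real study used fMRI with far more sophisticated multi-scale decoders): we simulate "voxel" responses as noisy linear mixtures of a 10×10 binary image's pixels, fit a linear decoder on training stimuli, and then reconstruct a held-out image from its measured responses alone.

```python
import numpy as np

# Illustrative assumption: voxel responses are noisy linear mixtures of
# the stimulus pixels. The encoding matrix stands in for the unknown
# "brain" mapping; the decoder is learned from example stimuli.
rng = np.random.default_rng(0)
n_pixels = 100          # 10x10 stimulus images, flattened
n_voxels = 300          # simulated measurement channels
n_train = 500           # number of training stimuli

encoding = rng.normal(size=(n_pixels, n_voxels))

def measure(images):
    """Simulated voxel responses: linear encoding plus measurement noise."""
    return images @ encoding + 0.1 * rng.normal(size=(images.shape[0], n_voxels))

# Train a linear decoder (ridge regression) from voxel responses to pixels.
train_imgs = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
train_resp = measure(train_imgs)
lam = 1.0
decoder = np.linalg.solve(
    train_resp.T @ train_resp + lam * np.eye(n_voxels),
    train_resp.T @ train_imgs,
)

# Reconstruct a held-out image from its (noisy) voxel responses alone.
test_img = rng.integers(0, 2, size=(1, n_pixels)).astype(float)
reconstruction = measure(test_img) @ decoder
accuracy = np.mean((reconstruction > 0.5) == test_img)
print(f"pixel accuracy: {accuracy:.2f}")
```

Under these clean simulated conditions the reconstruction is nearly perfect; real brain signals are far noisier and less linear, which is why the published reconstructions look rough.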
The last I had heard of something like this, scientists had figured out how to place *very* basic images from a camera into the brain, for people who had lost their sight. The illustrations of what the patient actually “saw” were little more than light or dark points throughout their field of vision. Basically just enough to avoid obstacles, but nowhere near enough to identify them, much less more complex activities such as reading or facial recognition.
In light of that, I’d be very skeptical of the image quality they’re able to get out of this system. At best, I’m guessing it’s good enough to identify types of objects (distinguishing a car from a dumpster, for example), but I doubt it gets much better than that.
Still, it’s certainly an interesting development, and I’m particularly interested to see how this could apply to work being done on memory as well. This type of device could prove incredibly valuable to law enforcement, where sketch artists have to go through a lot of effort to get even rough estimates of suspects’ faces.
here’s what they were showing subjects and the reconstructions: http://www.chunichi.co.jp/article/national/news/images/PK2008121102100002_size0.jpg
re: media hype; is it too much to expect pictures from the results?
i.e., from the source(s).
@Kevin – That part’s ok. I think the reading-dreams part of it is still a ways off though.