Translating images to words

With Google’s image search, the results kind of exist in isolation. There isn’t a ton of context until you click through to see how an image sits among words. So researchers at Google are borrowing an approach from the way they translate languages to automatically create captions for images.

Now Oriol Vinyals and pals at Google are using this approach to translate images into words. Their technique uses a neural network to study a dataset of 100,000 images and their captions, and so learn to classify the content of images.
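To get a feel for that first step, here is a rough sketch in PyTorch of how a pretrained network can boil an image down to a single feature vector. It is only illustrative: the article doesn’t say which network Google used, so the choice of Inception v3, the image file name, and the preprocessing are assumptions.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Illustrative sketch: a pretrained CNN turns an image into one
# fixed-length feature vector (not Google's actual system).
cnn = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()          # drop the classifier, keep the features
cnn.eval()

preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)   # hypothetical file
with torch.no_grad():
    features = cnn(image)             # a (1, 2048) vector describing the image
```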

But instead of producing a set of words that describe the image, their algorithm produces a vector: the same kind of intermediate representation the translation system builds when it encodes a sentence. This vector can then be plugged into the sentence-generating half of Google’s existing translation algorithm to produce a caption in English, or indeed in any other language. In effect, Google’s machine learning approach has learnt to “translate” images into words.
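The second step can be sketched in the same spirit. In the sketch below, the image vector stands in for the encoded sentence a translation decoder would normally start from, and an LSTM language model then emits a caption one word at a time. The sizes, token ids, greedy decoding loop, and random stand-in vector are all assumptions for illustration; the network here is untrained, so it only shows the wiring.

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Illustrative caption generator primed with an image feature vector."""

    def __init__(self, feature_dim=2048, embed_dim=512, hidden_dim=512, vocab_size=10000):
        super().__init__()
        self.project = nn.Linear(feature_dim, embed_dim)   # image vector -> LSTM input
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word ids -> LSTM inputs
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)  # LSTM output -> word scores

    @torch.no_grad()
    def generate(self, image_features, start_token=1, end_token=2, max_len=20):
        # Prime the LSTM with the image vector, then decode greedily.
        _, state = self.lstm(self.project(image_features).unsqueeze(1))
        token = torch.tensor([start_token])
        caption = []
        for _ in range(max_len):
            out, state = self.lstm(self.embed(token).unsqueeze(1), state)
            token = self.to_vocab(out[:, -1]).argmax(dim=-1)   # most likely next word
            if token.item() == end_token:
                break
            caption.append(token.item())
        return caption   # word indices; a vocabulary would map them back to text

image_vector = torch.randn(1, 2048)        # stand-in for the CNN features above
decoder = CaptionDecoder()
caption_ids = decoder.generate(image_vector)
```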