Minority Report Physical Interface in Real Life – Oblong g-speak

Remember the awesome interface in Minority Report? You know, the one where Tom Cruise is sifting through files and information as if he were directing a symphony? Oblong, whose co-founder served as science adviser on the Steven Spielberg movie, created something a lot like it. It’s called g-speak.

Oblong Industries is the developer of the g-speak spatial operating environment.

The SOE’s combination of gestural I/O, recombinant networking, and real-world pixels represents the first major step in computer interfaces since 1984. Starting today, g-speak will fundamentally change the way people use machines at work, in the living room, in conference rooms, and in vehicles. The g-speak platform is a complete application development and execution environment that redresses the dire constriction of human intent imposed by traditional GUIs. Its idiom of spatial immediacy and information responsive to real-world geometry enables a necessary new kind of work: data-intensive, embodied, real-time, and predicated on universal human expertise.

Here’s the impressive demo reel:

Now here’s the Minority Report clip for comparison’s sake:

Of course, g-speak is still in development and has a lot of work ahead before it’s useful for exploring “massive datasets,” but it’s a good first step nevertheless. Plus, it just looks fun to play with. I wonder what it’d do if I gave it an obscene gesture.

[via Data Mining and Engadget]

8 Comments

  • Kevin Carlson November 17, 2008 at 3:59 pm

    Might respond with something like “I’m afraid I can’t do that, Dave”… ;)

    Seriously, though, gestural and voice recognition seem like the logical progression of UI, especially as mobile device screens get more compact. My pocket camera has “face recognition” built in. If “hand recognition” can eliminate the need for special gloves, then the next step might be sign language processing, converting ASL gestures to synthesized speech for deaf persons…

    (I felt that Cruise’s grand arm gestures were somewhat impractical, leading to sore arms in short order for many people. As devices become more portable, subtle gestures, even facial expression recognition may be feasible…)

  • hehe, well yeah, the big movements were clearly for show. There was also this exhibit, iPoint, at Wired NextFest:

    http://www.wirednextfest.com/inform/2008/exhibits/ipoint.php

    It wasn’t as sexy, but it uses the same principle of hand recognition.

  • I can’t help but compare my recent foray into smartphone territory to this. I tried the G1 phone and found it technologically fascinating, with many nice touches, but practically pointless. I then opted for a dumber smartphone that perfectly fits what I need and has the features I actually use (rather than those I presumed I needed due to the hype surrounding the iPhone, G1, etc.).

    This sort of UI, like Microsoft’s virtual desktop demos of recent years, is similarly fascinating; but the energy and context requirements are so taxing and specific that I don’t anticipate it finding wide purchase in userland for some time, if ever. This might be a “Who would ever need more than 640K of memory?” moment, but is there really so much improvement to be made on the simple push of a button?

  • Thanks for the nice YouTube video.