How did you interact with your gadgets 5 years ago? Chances are, with a mouse and keyboard. Maybe some buttons, or a trackpad here and there. But how about today? Probably with a mouse and keyboard, still, but we'll bet that's not all. Does your phone have a touchscreen? Likely. Can it do voice recognition, too? Bet it can. Do you have a Wii or a PS3? Then you’d better add motion control to the list.
Our point is that the way we interface with our gadgets has changed tremendously over the last couple of years, and it’s only going to get crazier from here. To help you prepare, we’ve put together a list of 10 of the future interface technologies we’re most excited about. Read on to find out which ones made the list!
Neural interfaces have appeared in just about every sci-fi movie of the past five decades, but a number of companies have actually made real headway in developing a cohesive bond between our human and electronic brains. On the forefront is the Intendix Brain-Computer Interface. Developed primarily to help patients suffering from crippling disabilities, this neural computer is a tremendous step in strengthening the technological bond between man and machine. By wearing an EEG (electroencephalography) cap, your mind can interface with the computer and create text (at world-record speeds, by the way) merely by spelling words out in your head. After the words are thought out, you can command the computer to speak the written text back, print it, or e-mail the message. All of these commands are issued merely by thinking them, as the EEG cap allows the computer to analyze shifting brain waves and EEG patterns.
Tech demos have demonstrated the tremendous spelling capabilities of the device, and have also taken things a step further, including a real-time demo where a user navigates a 3D environment using only thoughts. Intendix claims that it’ll take the average person 10 to 15 minutes to become accustomed to the interface; a relatively short period of time given the complex nature of the tech. $12,250 will get you this mental mash-up, after Intendix confirms a release date later in the year.
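For the curious, the selection step of a matrix-style EEG speller can be sketched in a few lines. This is a toy illustration, not Intendix’s actual software: the letter grid, the per-flash "response scores," and the idea that the row and column the user is staring at evoke the strongest brain response are all simplified stand-ins for real EEG classification.

```python
# Toy sketch of a matrix speller's selection logic. A real system
# classifies EEG epochs; here, averaged per-flash "scores" are simulated.

GRID = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "56789_",
]

def select_letter(row_scores, col_scores):
    """Pick the row and column whose flashes evoked the strongest response."""
    r = max(range(len(row_scores)), key=lambda i: row_scores[i])
    c = max(range(len(col_scores)), key=lambda j: col_scores[j])
    return GRID[r][c]

# Simulated averaged responses: the user is attending to 'P' (row 2, col 3).
row_scores = [0.10, 0.20, 0.90, 0.15, 0.10, 0.20]
col_scores = [0.10, 0.10, 0.20, 0.80, 0.30, 0.10]
print(select_letter(row_scores, col_scores))  # → P
```

Repeating the flashes and averaging the scores is what lets a real speller pull a reliable signal out of noisy EEG data.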
If you’ve somehow managed to avoid the AR hype thus far, augmented reality is, in a nutshell, visual data overlaid on the real world. In concepts and in sci-fi, this frequently takes the form of a pair of glasses or even contacts that allow you to see hologram-like data in the real world. In practice, augmented reality has been available on smartphones equipped with video cameras, compasses and gyroscopes. This means that unlike a lot of the entries in this article, augmented reality is something you can try out right now. Provided, that is, that you’ve got an iPhone or an Android smartphone.
The most ambitious augmented reality application currently available to consumers is Layar, an AR browser for Android and the iPhone. Performance is still a bit on the sketchy side, but it does a passable job of adding various “layers” (fast food, houses for sale, tweets, etc.) to the real world as viewed through your phone’s camera.
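To give a flavor of what an AR browser like Layar has to compute, here’s a minimal sketch (not Layar’s actual code) of mapping a point of interest’s compass bearing onto the phone screen. The field of view and screen width are hypothetical parameters, and we assume a simple linear mapping from angle to pixels.

```python
import math

def poi_screen_x(device_heading_deg, poi_bearing_deg,
                 fov_deg=60.0, screen_width_px=480):
    """Map a point of interest's compass bearing to a horizontal pixel
    position, given the phone camera's heading and field of view.
    Returns None if the POI falls outside the camera's view."""
    # Signed angular offset between POI and camera heading, in (-180, 180].
    offset = (poi_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2.0:
        return None
    # Linear mapping: -fov/2 → left edge, 0 → center, +fov/2 → right edge.
    return (offset / fov_deg + 0.5) * screen_width_px

print(poi_screen_x(90.0, 90.0))   # POI dead ahead → screen center, 240.0
print(poi_screen_x(90.0, 105.0))  # 15° right of center → 360.0
print(poi_screen_x(0.0, 180.0))   # behind you → None
```

A real AR browser also uses the gyroscope and accelerometer to place markers vertically and to smooth out compass jitter, but the horizontal math boils down to this.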
Alright, we know what you’re thinking: Isn’t this “Interfaces of the future,” and not “Interfaces of 1992?” Believe it or not, some people are still waving the VR flag. Chief among them is Virtusphere, which Maximum PC recently saw demoed at GDC 2010. The Virtusphere is essentially a giant hamster ball; a 10-foot tall hollow sphere that you run around inside. It’s propped up on rollers which—like a trackball—feed rotational information back to the computer. Combined with a traditional VR headset and a motion-sensing gun peripheral, the Virtusphere lets you actually run around in a virtual environment.
We’ve still got our doubts about whether or not the Virtusphere is the future of virtual reality. First and foremost, it turns out that a 10-foot-tall hollow sphere has a not-inconsiderable amount of inertia, meaning that if you’re running in one direction and decide to quickly change and run in another direction, the sphere’s going to keep spinning in the first direction.
The Nintendo Wii changed gaming forever in 2006, rocketing past Microsoft and Sony as console king with the introduction of the Wiimote, a motion-sensing wand that translates the player’s hand movements into game input. Taking a more user-friendly approach to the console wars proved to be a profitable choice for Nintendo, as many casual gamers opted for quick and easy bowling or boxing matches over steep learning curves on more complex consoles. Microsoft is now looking to push the envelope a step further with Project Natal, a new type of device that uses cameras to track your movements, essentially making ‘you’ the controller. The cameras sense movement using infrared technology: invisible beams that keep the player illuminated, keeping the cameras focused regardless of ambient lighting conditions.
This concept may lead to a plethora of new and exciting possibilities, but it could also turn out to be a recipe for disaster. Early hands-on tests have produced a mixed bag of opinions, though many seem to agree that certain games take advantage of the tech more than others, which is to be expected. What will make or break Natal as a serious contender is the responsiveness of the interface. Developers have tried, with varying degrees of success, to build similar motion-based interfaces. Many, however, like Activision’s recent and ill-fated Tony Hawk: Ride, simply didn’t work because of poor responsiveness between the hardware and the player. All in all, it’s way too soon to speculate whether Natal will be a hit, a miss, or a floater somewhere in gaming-hardware purgatory. It’ll be up to software developers to create innovative launch titles that fit Natal’s new platform and draw casual gamers toward a new way of gaming that is both innovative and user friendly.
As any iPhone user will tell you, touch screens have come a long way. Breaking away from mobile devices, hardware manufacturers like Gateway and HP have integrated touch screens into their full-size desktops and laptops, with varying degrees of success. Though the screens are responsive and have some handy touch-based abilities, nothing really sets them apart from your standard point-and-click affairs. Enter Jeff Han, a mathematician and scientist at NYU’s Courant Institute of (surprise) Mathematical Sciences. Han debuted his multi-touch, multi-user screen interface at TED 2006, blowing people away with an interface that was fresh, intuitive and extremely easy on the eyes. With newfound fame and hundreds of thousands of fans on YouTube wondering what was next, Han launched Perceptive Pixel, a New York-based company dedicated to commercializing and spreading his multi-touch workstations across a massive range of industries. Since then, Han’s interface has been used for medical research, geographical studies, digital content creation and industrial design, and was even utilized by CNN to plot geographic votes during the 2008 presidential election. Sure, we’ve got touch-screen capabilities on nearly every phone and computer nowadays, but we doubt you’ve ever seen them used in such a dynamic and impressive way. Don’t believe us? Check this out.
Want to get started on your own touch screen interface? It's possible, and we'll walk you through the steps to make your very own touch screen computer here.
Canesta-powered webcams will allow users to ‘sight-enable’ their electronic devices, syncing gestures and movements to create a streamlined human-machine interface that could theoretically eliminate the need for remotes or game controllers. Each Canesta webcam has a pre-defined set of pixels, each independently capable of gauging the distance between the user and the camera, working together to create a 3D image of the user that moves in real time. Put simply? A 3D image (instead of a 2D image) narrows the camera’s focus, helping it concentrate on specific user gestures rather than background objects that could otherwise interfere. As a result, Canesta webcams can see and understand hand gestures and body movements, leaving control, quite literally, in the palms of your hands.
So, rather than digging under couch cushions for the remote, your TV could be turned on simply by waving at it. Browsing channels could be done with a flick of the wrist. And, thanks to a recent collaboration with Cooliris, thumbing through your channels has never looked so sleek. Turn the volume up by turning an imaginary knob, or stop what you’re watching by pressing your palm to the air. Access an entire library of movies through hand gestures, thumbing through files and enlarging them, all in real time. Though most of Canesta’s gesture-based systems are still in the testing phase, the concepts and potential integration into other wireless devices make this a promising technology to keep an eye on.
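The core trick of a per-pixel ranging camera can be sketched very simply. This is a toy illustration of the idea described above, not Canesta’s algorithm: once every pixel reports a distance, anything nearer than a depth threshold is treated as the user, and the background simply drops away. The grid size, threshold, and centroid step are all invented for the example.

```python
# Toy sketch of depth-based foreground segmentation: with a distance
# value at every pixel, the user is separated from the background by a
# simple depth threshold, and a "hand" position falls out as a centroid.

def segment_user(depth_map, max_distance_m=1.5):
    """Return (row, col) coordinates of pixels closer than the threshold."""
    return [(r, c)
            for r, row in enumerate(depth_map)
            for c, d in enumerate(row)
            if d < max_distance_m]

def hand_centroid(pixels):
    """Average position of the foreground pixels, or None if there are none."""
    if not pixels:
        return None
    return (sum(r for r, _ in pixels) / len(pixels),
            sum(c for _, c in pixels) / len(pixels))

# 4x4 depth map in meters: a "hand" at ~0.8 m against a 3 m background.
depth = [
    [3.0, 3.0, 3.0, 3.0],
    [3.0, 0.8, 0.8, 3.0],
    [3.0, 0.8, 0.8, 3.0],
    [3.0, 3.0, 3.0, 3.0],
]
print(hand_centroid(segment_user(depth)))  # → (1.5, 1.5)
```

Tracking that centroid frame to frame is enough to recognize a wave or a wrist flick; a 2D webcam, by contrast, has no cheap way to tell the hand from the wallpaper behind it.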
Researchers from Microsoft recently filed a patent for muscle-based control schemes, allowing users to control a computer or game console without a controller, mouse or keyboard. This may seem very similar to Natal, and in a way, it is. Both projects aim to rid the user of any hand- or finger-based input devices, but they do so in different ways. Where Natal intends to use advanced cameras that track movement, the muscle-based interface (which doesn’t seem to have an official title, by the way) tracks movement using an EMG device that decodes muscle signals from the surface of the skin. To do so, the user must first sync with a ‘gesture recognizer’, which collects muscle data on the upper forearm. After collecting the user’s data, the MBI can then read and translate hand movements in various scenarios. Early tech demos show users rockin’ away at Guitar Hero without a guitar in hand; a dream come true for air guitar aficionados. Researchers hope that by further studying hand gestures they’ll be able to incorporate the MBI into practical, everyday uses, including changing an MP3 while running with a quick hand motion, or popping your car’s trunk without having to put down handfuls of groceries.
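The "gesture recognizer" step described above amounts to a classification problem, which can be sketched in miniature. This is not Microsoft’s implementation: the RMS-amplitude feature, the two-channel signal, and the gesture centroids below are all invented stand-ins for the richer features and trained classifiers a real EMG system would use.

```python
import math

# Toy sketch of an EMG gesture recognizer's classify step: reduce each
# electrode channel to an RMS amplitude, then pick the nearest gesture
# learned during the sync/calibration phase.

def rms_features(channels):
    """One root-mean-square amplitude per electrode channel."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in channels]

def classify(features, centroids):
    """Label of the nearest trained centroid, by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Invented calibration data: a clenched fist fires both channels hard.
centroids = {"fist": [0.9, 0.8], "open_hand": [0.2, 0.3]}

signal = [[0.9, -0.8, 0.85], [0.7, -0.75, 0.8]]  # two channels of samples
print(classify(rms_features(signal), centroids))  # → fist
```

The calibration phase in the patent exists precisely because these centroids differ from person to person; your forearm’s "fist" signature isn’t ours.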
Tablet computers are the next big thing, as every major hardware developer scrambles to create the most versatile and sleek competitor to Apple’s iPad. The most impressive tablet-based interface so far, however, goes to Microsoft’s Courier Digital Journal, a 5x7-inch e-book that’s less than an inch thick. The tablet, which flips open like a book, features a screen on each side and is governed by a pen-based, drag-and-drop system. The inclusion of the pen as a tool helps set this tablet apart from HP’s and Samsung’s entries; the footage we’ve seen thus far hasn’t shown a keyboard display of any kind. Messages, emails, notes and web addresses are written by hand and organized in a customizable manner, much like a real journal. The pen-based interface also allows for some nifty drawing and sketching capabilities; tech demos showed users pulling images of shoes from a website, then radically altering their colors and shapes with a couple of quick sketches. The Courier is expected to have a built-in camera and headphone jack for media playback, but these details have not been officially confirmed. In fact, Microsoft has remained tight-lipped about a lot of factors, including the price and release date, offering only a vague Q3/Q4 estimate. As it stands now, however, the Courier Digital Journal seems to have the ingenuity to stand as a worthy opponent to Apple’s tyrannical rule of the tablet world.
Microsoft and Carnegie Mellon University have recently unveiled ‘Skinput’, a user interface that transfers a digital readout onto different parts of the body. This is done using bio-acoustic sensors, a system that senses the impact of fingertips and classifies the acoustic waves that result. You see, every time you touch a certain part of, say, your forearm, that part of your forearm will project different acoustic waves, based solely on bone and muscle layout. Skinput’s sensors, in conjunction with a special-purpose bio-acoustic armband, can track where you touch and classify the type of impact. Hypothetically this could be useful for a number of tasks; you could map different parts of your arm to pause, rewind, skip, or play an MP3, without having to physically press any buttons. The potential for a truly ground-breaking interface, however, comes with the addition of a miniature projector.
A miniature projector would allow Skinput to simulate touch screen menu projections, allowing the user to tap different parts of the arm to visually navigate a series of menus. As miniature projectors rapidly become smaller and more portable, navigating through an entire interface on the surface of your arm could be closer than we may think.
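The MP3 example above boils down to a simple mapping layer sitting on top of the acoustic classifier. Here’s a toy sketch of that layer, assuming the armband has already done the hard part of classifying where the tap landed; the location names and the actions they trigger are invented for illustration.

```python
# Toy sketch of Skinput's mapping layer: once the bio-acoustic armband
# has classified *where* the arm was tapped, each location drives an
# action. Locations and actions here are hypothetical.

TAP_ACTIONS = {
    "wrist": "play",
    "lower_forearm": "pause",
    "mid_forearm": "skip",
    "upper_forearm": "rewind",
}

def handle_tap(location):
    """Translate a classified tap location into a player command."""
    return TAP_ACTIONS.get(location, "ignore")

print(handle_tap("mid_forearm"))  # → skip
print(handle_tap("elbow"))        # → ignore (unmapped location)
```

With a pico projector in the mix, the same table could light up on your skin as an actual menu, with each entry drawn over the patch of arm that triggers it.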
Quite possibly the most impressive and technologically innovative device we’ve seen this year, Pranav Mistry’s ‘Sixth Sense’ device is hard to consolidate into one word, much less one sentence. The device is split into three parts: a pocket projector, a mirror, and a camera, all of which are coupled into a small, wearable pendant. Mistry’s ultimate goal is to take the features and internet connectivity of so many mobile devices and shrink them into a projection-based interface that could be used anywhere, on any surface.
The software is programmed to trace hand gestures, which communicate with the interface via colored markers on the tips of the user’s fingers. These fingertip markers, known as fiducials, act as an interactive instrument between the user and the projection, no matter the surface. The built-in software is programmed to recognize various shapes and gestures communicated by the fiducials: drawing the ‘@’ symbol connects the user to web mail, while drawing a magnifying glass takes the user to a map of his or her current location. Touching the fiducials together in a square motion instructs them to act as a camera, so photos can be taken on the fly, wherever you are, then browsed through and enlarged on any nearby wall. You can even draw a circle on your wrist to project an analog watch.
Early tech demos showed some far more sophisticated uses for the device. Using his fiducials, Mistry was able to grab a picture from a piece of paper (yes, literally grab a flat image, watch the video) and transfer it onto the projection with a quick and easy pinch of the fingers. In a much more extreme case, Mistry projected a racing game onto an ordinary piece of paper, and steered the car by tilting the page.
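The pinch that drives that image-grabbing trick is exactly what the colored fiducials make cheap to detect. Here’s a toy sketch of the idea, not Mistry’s code: frames are tiny grids of color labels standing in for video, and a "pinch" fires when the centroids of two marker colors come close together. The colors, grid, and distance threshold are all invented.

```python
import math

# Toy sketch of fiducial tracking: find the centroid of each colored
# fingertip marker in a frame, and report a "pinch" when two markers
# nearly touch. '.' is empty space; 'R' and 'G' are two marker colors.

def marker_centroid(frame, color):
    """Average (row, col) of all pixels matching the marker color."""
    pts = [(r, c) for r, row in enumerate(frame)
           for c, px in enumerate(row) if px == color]
    if not pts:
        return None
    return (sum(r for r, _ in pts) / len(pts),
            sum(c for _, c in pts) / len(pts))

def is_pinch(frame, threshold=2.0):
    """True when the red and green markers are within the threshold."""
    a, b = marker_centroid(frame, "R"), marker_centroid(frame, "G")
    if a is None or b is None:
        return False
    return math.dist(a, b) <= threshold

apart = ["R....G",
         "......"]
together = [".RG...",
            "......"]
print(is_pinch(apart))     # → False
print(is_pinch(together))  # → True
```

Color-coded caps sidestep the much harder problem of recognizing bare fingertips, which is a big part of why the demo runs in real time on modest hardware.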