The Future of User Interfaces

Feb 11 2010 by Cameron Chapman | 35 Comments

User interfaces—the way we interact with our technologies—have evolved a lot over the years.

From the original punch cards and printouts to monitors, mice, and keyboards, all the way to the trackpad, voice recognition, and interfaces designed to make computers easier for disabled users, interfaces have progressed rapidly within the last few decades.

But there’s still a long way to go, and there are many possible directions that future interface designs could take. We’re already seeing some start to crop up, and it’s exciting to think about how they’ll change our lives.

In this article are more than a dozen potential future user interfaces that we’ll be seeing over the next few years (and some further into the future).

Brain-Computer Interface

What it is: In a brain-computer interface, a computer is controlled purely by thought (or, more accurately, brain waves). There are a few different approaches being pursued, including direct brain implants, full helmets, and headbands that capture and interpret brain waves.

Army Mind-Control Projects

According to an article in Time from September 2008, the U.S. Army is actively pursuing "thought helmets" that could someday lead to secure mind-to-mind communication between soldiers. The goal, according to the article, is a system in which entire military operations could be controlled by thought alone.

While this kind of technology is still far off, the fact that the military has awarded a $4 million contract to a team of scientists from the University of California at Irvine, Carnegie Mellon University, and the University of Maryland means that we might be seeing prototypes of these systems within the next decade.

The Matrixesque Brain Interface: MEMS-Based Robotic Probe

Researchers at Caltech are working on a MEMS-based robotic probe that can implant electrodes into the brain to interface with particular neurons. While it sounds very Matrix-like, the idea is that it could allow for advanced control of prosthetic limbs or similar body control.

The software part of the device is complete, though the micro-mechanical part (the part that actually goes into your brain) is still under development.

OCZ’s Neural Impulse Actuator

The NIA is a headband and controller that combines an electromyogram, an electroencephalogram, and an electro-oculogram, enabling it to translate eye movements, facial muscle movements, and brain waves into computer input. The most interesting part of the NIA is that it can be set up to work with virtually any game; the controller simply translates this input into keystrokes.
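To make that last trick concrete, here’s a toy sketch of the idea: thresholded biosignal channels mapped to ordinary keystrokes. The channel names and thresholds are invented for illustration, not taken from the NIA itself.

```python
# Hypothetical sketch: map biosignal channel levels to keystrokes, the way
# a device like the NIA turns muscle/brain activity into ordinary game input.
# Channel names and thresholds below are invented.

def signals_to_keys(levels, bindings):
    """Return the keystrokes whose channel exceeds its threshold.

    levels   -- dict of channel name -> current signal amplitude (0.0-1.0)
    bindings -- dict of channel name -> (threshold, key)
    """
    keys = []
    for channel, (threshold, key) in bindings.items():
        if levels.get(channel, 0.0) >= threshold:
            keys.append(key)
    return keys

bindings = {
    "jaw_clench":  (0.6, "SPACE"),   # fire
    "glance_left": (0.5, "A"),       # strafe left
    "alpha_wave":  (0.7, "W"),       # move forward
}

pressed = signals_to_keys({"jaw_clench": 0.8, "alpha_wave": 0.3}, bindings)
```

Because the output is plain keystrokes, the game on the receiving end needs no modification at all, which is exactly what makes this approach work with virtually any title.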

Biometric and Cybernetic Interfaces

What it is: In computing, cybernetics most often refers to robotic systems and the command and control of those systems. Biometrics, on the other hand, refers to biological markers that every human being (and every life form) has and that are generally unique to each person. These are most often used for security purposes, such as fingerprint or retina scanners.

Here are a few current biometric and cybernetic interface projects.

Warfighter Physiological Status Monitoring

The Military Operational Medicine Research Program is developing sensors that can be embedded into clothing to monitor soldiers’ physiological well-being. These can be used not only to monitor real-time health, but also to input additional variables into predictive models the military uses to evaluate the likely success of its missions.

Fingerprint Scanners

Fingerprint and hand scanners have been seen in movies for ages as high-tech security devices. And they’ve finally become readily available within the past few years.

In most cases, fingerprint scanners are used to allow or deny users access to a computer system, vehicle, or controlled-access area. Because fingerprints are unique, this is a nearly foolproof way of determining who is gaining access to something, as well as a way to track who accessed what, and when.

Digital Paper and Digital Glass

What it is: Digital paper is a flexible, reflective type of display that uses no backlighting and simulates real paper quite well. In most cases, digital paper doesn’t require any power except when changing what it’s displaying, resulting in very long battery life in devices that use it. Digital glass, on the other hand, is a transparent display that otherwise resembles a standard LCD monitor.

Transparent OLED Display

Samsung showcased a prototype of a new, transparent OLED display on a notebook at CES 2010. The display is unlikely to appear in notebooks in its finished form, but it might be used in MP3 players or advertising displays in the future according to the company.

LG 19" Flexible Display

Flexible e-paper displays might replace paper one day. Unlike their rigid counterparts, e-paper can be nearly as flexible as real paper (or card stock, at least), and almost as thin. LG has created a 19" e-paper display that’s flexible and made of metal foil so it will always return to its original shape. Watch for this type of display to become popular for reading newspapers or other large-format content in the future.

E-Ink

E-Ink is a proprietary electronic paper technology with interesting implications for the packaging and media industries, and it has already seen some real-world use (such as on the Esquire cover from October 2008). While it’s currently only available in grayscale, it’s likely to be available in full color before too long.

E-Ink is famous for its integration into many popular eBook readers, including the Kindle, Barnes & Noble’s Nook, and the Sony Reader. And while in these instances it’s housed within a rigid display, there’s no reason it has to be.

Telepresence

What it is: Telepresence consists of remote control of a drone or robot. Such systems are most commonly seen in the scientific and defense sectors, and they vary considerably based on what they’re being used for. In some cases, those controlling the device get only visual input, but in others (such as medical telepresence devices) a more complete simulation is created. Below are some of the best telepresence projects currently underway.

Telepresence Surgery

Minimally invasive surgery can now be conducted via telepresence, using a robot to perform the surgery on the actual patient while a surgeon controls it remotely. In fact, this method can actually work better than performing the surgery in person with long, fulcrum-based instruments.

The technology combines telerobotics, sensory devices, stereo imaging, and video and telecommunications to give surgeons the full sensory experience of traditional surgery. Surgeons are provided with feedback in real-time, including the pressure they would feel when making an incision in a hands-on surgery.

Universal Control System

The Universal Control System is a system developed by Raytheon (a defense contractor) for directing aerial military drones. The interface is not unlike a video game, with multiple monitors to give operators a 120º view of what the drone sees.

Raytheon looked at the existing technology drone pilots were using (standard computer systems with a nose-mounted camera and a keyboard) and realized that gamers were using better systems. So they set about developing a drone operation system based on civilian video games (and even hired game developers). The finished system also incorporates augmented reality and other futuristic interface elements.

Space Exploration and Development

Telepresence could be used to allow humans to experience space environments from the safety of Earth. This technology could allow people to remotely explore distant planets without ever leaving our own, and for a lot less than an actual manned mission.

The biggest hurdles to this technology at the moment are delays in communications over long distances, though there are already advances happening in those areas that may make it a non-issue within the next few years.

Augmented Reality

What it is: Augmented reality consists of overlaying data about the real world over real-time images of that world. In current applications, a camera (generally attached to either a computer or cell phone) captures real-time images that are then superimposed with information gathered based on your location.
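The core overlay calculation is simpler than it might sound. Here’s an illustrative sketch (all numbers invented, and real apps also account for camera tilt and distance) of how an AR app might place a label for a nearby point of interest on screen, given the phone’s compass heading and the camera’s field of view:

```python
# Illustrative sketch of the basic AR placement step: given the device's
# compass heading and the bearing to a point of interest, compute where
# (horizontally) its label belongs on screen. Values are invented.

def screen_x(device_heading, poi_bearing, fov_deg, screen_width):
    """Horizontal pixel position for a POI label, or None if off-screen."""
    # Signed angle from the view centre to the POI, wrapped to [-180, 180).
    delta = (poi_bearing - device_heading + 180.0) % 360.0 - 180.0
    if abs(delta) > fov_deg / 2.0:
        return None  # POI is outside the camera's field of view
    # Linearly map the angle across the screen width.
    return round(screen_width / 2.0 + (delta / (fov_deg / 2.0)) * (screen_width / 2.0))
```

So with a 60º field of view on a 640-pixel-wide screen, a landmark dead ahead lands in the centre of the frame, one 15º to the right lands three-quarters of the way across, and one behind you is simply not drawn.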

There are a number of current augmented reality projects in the works. Here are some of the most interesting ones.

Augmented Reality in a Contact Lens

One of the more interesting current augmented reality projects is a display contained within a contact lens. The conduit between the eye and the brain is much faster than a high-speed internet connection, and the eye can perceive more than we realize—including millions of colors and tiny shifts in lighting.

Because of this, it makes sense that an interface that works directly with your eye would catch on.

The current proofs of concept include contact lenses being developed at the University of Washington. Researchers there are crafting lenses with a built-in LED, powered wirelessly by radio frequency, along with other simple electronic circuits.

Eventually, these contact lenses will contain hundreds of tiny LEDs that can display images, words, and other information in front of the eye. It’s likely that these contact lenses will be the display for a separate control unit (such as a smartphone).

Wearable Retinal Display

The Universal Translator made communication between species possible in the Star Trek universe. And while a Universal Translator like that is likely a long way off, NEC is already working on a retinal display called the "Tele Scouter" that will translate foreign languages into subtitles for the wearer.

The device is mounted on an eyeglass frame and includes both a display and a microphone. The sound is transmitted to a separate device that sends it to a central server for translation; the subtitles are then sent back to the device and shown on the retinal display.

The best part is that the text is displayed within the user’s peripheral vision, which means they can keep eye contact with the person they’re speaking with.

Heads-Up Display

There’s one type of augmented reality that’s been around for years, first seen in military applications and then eventually in the commercial airline and automotive industries.

Heads-up displays (HUDs) are used to display data on the windshield of a car or plane without requiring the operator to look away from their surroundings.

In cars, HUDs are helpful at night, displaying driving conditions on the windshield of the vehicle. This allows drivers to keep their attention on the road ahead.

In the future, HUDs will be used for synthetic vision systems. In other words, everything a user sees in their viewport would be constructed from information in a database rather than an actual real-world view. This type of system is still a long way off, but it could change the way vehicles are designed and could make for safer aircraft and automobiles, because the driver or operator wouldn’t need a direct line of sight to their surroundings.

Privacy Concerns with Augmented Reality

Of course, privacy specialists will have a field day with augmented reality applications. After all, what happens when you can simply look at a person and gain access to their personal information via facial recognition? The technology to do that isn’t far off. You’ll look at a person across a crowded restaurant, and their name, Facebook and Twitter accounts, phone number, and any other available information will be at your fingertips.

While this could certainly come in handy (such as those times when you find yourself confronted with someone who seems to know you, but you have no clue who they are), it could also make it nearly effortless for just about anyone to access your information. In fact, technology like this is already starting to pop up.

Voice Control

What it is: We’ve seen voice control in various sci-fi movies and novels for years. Just as its name implies, this technology relies on voice commands to control a computer. Voice control has been around in some form for a few years now, but its applications are only recently being explored. Here are a few current projects.

BMW Voice Control System

Leave it to luxury automaker BMW to develop a new voice control system that allows drivers to control their navigation and entertainment systems. A single voice command lets drivers get directions to their destination or play a specific song.

While other automakers have tried similar voice recognition systems, this one appears to be the most advanced.

Google Voice Search

If you have a smartphone running Android, you’re probably already familiar with Google’s Voice Search feature. While it’s not foolproof, it’s definitely a great way to look something up without having to spend a minute typing in a complex search term. The best part about Google’s Voice Search is that it’s not just restricted to the Android platform. It will also work with your BlackBerry, iPhone, Windows Mobile phone or Nokia S60.

Voice search is handy if you’re trying to look something up in a hurry or while driving. The Android platform will also interface with navigation, which is handy if you’re behind the wheel.

Gesture Recognition

What it is: With gesture recognition, movements with the hands, feet, or other body parts are interpreted by a computer (often through the use of either a hand-held controller, a camera that captures movement, or some other input device like gloves) as commands.

Gesture recognition’s popularity is due to the video gaming industry, though there are a number of other potential uses.

Acceleglove: Gloves that Recognize Sign Language

Researchers at the George Washington University have created a glove called the "Acceleglove" that recognizes American Sign Language gestures and translates them into text. It works by using a series of accelerometers on each finger of the glove, along with other sensors on the shoulders and elbows, to send electrical signals to a microcontroller that finds the correct word associated with the movement. The unit determines signs based on the starting hand position, the intervening movements, and the ending gesture, eliminating candidate phrases at each step along the way. It takes only milliseconds for the computer to output the correct word after the sign is completed.
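That elimination strategy can be sketched in a few lines. The tiny sign "dictionary" below is invented for illustration (the real Acceleglove vocabulary is far larger, and its sensor features are much richer than three labels):

```python
# Toy sketch of stage-by-stage candidate elimination: each phase of a sign
# (start pose, movement, end pose) narrows the set of possible words until
# one remains. The mini dictionary here is invented.

SIGNS = {
    "hello":     ("open_hand", "wave",        "open_hand"),
    "thank_you": ("flat_hand", "arc_forward", "flat_hand"),
    "yes":       ("fist",      "nod",         "fist"),
    "no":        ("open_hand", "pinch",       "closed_hand"),
}

def recognize(start, movement, end):
    """Eliminate candidates stage by stage; return the matching word or None."""
    candidates = {w for w, (s, _, _) in SIGNS.items() if s == start}
    candidates = {w for w in candidates if SIGNS[w][1] == movement}
    candidates = {w for w in candidates if SIGNS[w][2] == end}
    return candidates.pop() if len(candidates) == 1 else None
```

Because most candidates are discarded at the first stage, the final lookup is cheap, which is consistent with the millisecond response time described above.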

Gesture-Based Control for TVs

Television, because of its simplified user interface, is a perfect candidate for control by gesture recognition, and examples are already available. At the 2009 International Consumer Electronics Show, there were several examples of gesture control for TVs. Panasonic has developed a remote control with touch screens on which finger gestures control various functions, while Hitachi has come out with a TV that uses a 3-D depth camera to recognize gestures on a much larger scale: it lets you use hand gestures to change the channel, control the volume, and even turn the TV on and off.

Nintendo Wii

The Nintendo Wii’s controller system is probably the first widely-adopted gaming system that uses gesture recognition for at least part of its control method. Of course, the Wii’s gesture recognition system requires that you hold the special Wii Remote and Nunchuk in order to have your gestures recognized, but it’s still a pioneering system within the gaming industry. And in the future, it’s likely that other systems, not just for gaming but in the computer industry in general, will adopt similar control systems.

Xbox Project Natal

Project Natal takes the Wii’s gesture recognition a step further. No remote or controller is required; users simply interact with what’s on screen as they would in the real world. In other words, to kick a ball, just perform a kick motion. It eliminates the need for controllers and makes gaming more immersive.

Head and Eye Tracking

What it is: Head and eye tracking technology interprets natural eye and head movements to control your technology.

Gran Turismo 5

Gran Turismo has long been heralded as one of the most realistic racing games out there, but with Gran Turismo 5, the developers have gone a step further. The newest version of the game will include head tracking: the PlayStation Eye camera will track a player’s head and control the view within the cockpit of the car. This will bring the overall experience much closer to actual driving, where you can glance quickly to one side or the other without entirely losing sight of what’s in front of you.

Pseudo-3D with a Generic Webcam

Chris Harrison has come out with a head-tracking system that works with a standard webcam. It’s available for Mac OS X (though not 10.6) and can be used with any number of 3D interfaces. The most interesting thing is that this kind of technology can easily be made to work with existing hardware.
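The pseudo-3D effect boils down to one mapping. Assuming the face has already been detected in the webcam frame, its position drives a virtual-camera offset, producing parallax as you move your head (the sensitivity value below is an invented assumption):

```python
# Minimal sketch of webcam head tracking: map the detected face position in
# the camera frame to a virtual-camera offset for a parallax (pseudo-3D)
# effect. The sensitivity constant is invented.

def camera_offset(face_x, face_y, frame_w, frame_h, sensitivity=2.0):
    """Map a face centre (pixels) to a view offset in [-sensitivity, sensitivity]."""
    # Normalize to [-1, 1] around the frame centre; invert x because the
    # webcam sees a mirror image of the viewer.
    nx = (face_x - frame_w / 2.0) / (frame_w / 2.0)
    ny = (face_y - frame_h / 2.0) / (frame_h / 2.0)
    return (-nx * sensitivity, ny * sensitivity)
```

Everything else (the face detection itself, rendering the scene from the offset viewpoint) already exists in commodity software, which is why this approach needs no special hardware.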

Artificial Intelligence

What it is: Artificial Intelligence (AI) consists of creating inorganic systems capable of learning from human input. While we’ve already created systems that are decent at mimicking learning behavior, they’re still limited by their code. Eventually, computers will be able to learn and grow beyond their programming. It’ll only be a matter of time before we see the Skynet Funding Bill passing (it’s already 13 years behind schedule).

Below are some of the more interesting AI projects currently being considered and undertaken.

Cyber Security Knowledge Transfer Network

In the UK, police are looking into how AI can be used for counter-terrorism surveillance, data mining, masking online identities, and preventing internet fraud. They’re also looking at how intelligent programs could capture useful information and preserve images of hard drives over the web.

Digital forensics could become much more efficient with the assistance of artificial intelligence, so expect to see a lot more projects in the coming years that incorporate AI with law enforcement.

AI for Adaptive Gaming

Artificial intelligence will create more realistic and engaging gameplay. Rather than relying solely on pre-programmed interactions, AI can allow games to adapt to their players mid-game. While some technologies currently employed in video games simulate artificial intelligence, true AI hasn’t yet been achieved. Newer techniques, like dynamic scripting, could bring game AI to a new level, leading to more realistic gameplay.
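Dynamic scripting is simpler than it sounds: the AI keeps a weight for each rule in its script, picks rules in proportion to those weights, and shifts weight toward rules that contributed to a win. A hedged sketch, with invented rule names and learning rate:

```python
import random

# Sketch of dynamic scripting for game AI: rules are chosen with probability
# proportional to their weights, and weights are nudged up or down after
# each encounter. Rule names, weights, and the learning rate are invented.

def pick_rule(weights, rng=random.random):
    """Choose a rule with probability proportional to its weight."""
    total = sum(weights.values())
    r = rng() * total
    for rule, w in weights.items():
        r -= w
        if r <= 0:
            return rule
    return rule  # fallback for floating-point edge cases

def update_weights(weights, used_rules, won, rate=5.0):
    """Reward (or punish) the rules used in the last encounter."""
    delta = rate if won else -rate
    for rule in used_rules:
        weights[rule] = max(1.0, weights[rule] + delta)  # never drop to zero
    return weights

weights = {"fireball": 10.0, "heal": 10.0, "melee": 10.0}
update_weights(weights, ["fireball"], won=True)  # fireball helped win a fight
```

Over many encounters the AI gravitates toward tactics that work against this particular player, which is exactly the mid-game adaptation described above.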

AI for Mission Control

NASA and other world space agencies are actively looking into artificial intelligence for controlling probes that might explore star systems outside our own. Because of delays in radio transmissions, the further away a probe gets, the longer it takes to communicate with it and control it. But AI may eventually make the need for direct control nearly disappear.

These probes would be able to react intelligently to new stimuli, and could carry out more abstract orders rather than having to have every minute movement preprogrammed or transmitted on the go.

Virtual Assistants

The need for an assistant to handle the mundane tasks of everyday life is growing for many people. However, current options are limited, especially for the majority of us who can’t afford a personal assistant.

But soon, we’ll have virtual assistants that can make a reservation for us, find a gift for our grandmother’s 75th birthday, or do all the research for our next project.

While the degree of true AI versus merely very intelligent programming will vary, there are definite potential applications for a true AI system in all of this.

Multi-Touch

What it is: Multi-touch is similar to gesture recognition, but requires the use of a touch screen. A traditional touch screen could accept input from only one point on the screen at a time. Multi-touch, on the other hand, can accept input from multiple points simultaneously.

There are already a number of products that include multi-touch, though the technology still has a lot of untapped potential.
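The classic multi-touch gesture, pinch-to-zoom, shows why multiple simultaneous points matter: the zoom factor is just the ratio of the current finger spread to the spread when the gesture began. A minimal sketch:

```python
import math

# Minimal sketch of pinch-to-zoom, the canonical two-point multi-touch
# gesture: zoom is the ratio of the current finger spread to the spread
# at the start of the gesture.

def spread(p1, p2):
    """Distance between two touch points given as (x, y) tuples."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def pinch_zoom(start_points, current_points):
    """Zoom factor implied by a two-finger pinch; >1 zooms in, <1 zooms out."""
    return spread(*current_points) / spread(*start_points)

# Fingers start 100px apart and move to 200px apart: zoom in 2x.
factor = pinch_zoom([(100, 100), (200, 100)], [(50, 100), (250, 100)])
```

A single-point touch screen cannot compute this at all, because it only ever sees one of the two fingers; that single ratio is the difference the extra touch points make.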

Microsoft Surface

Microsoft’s Surface technology is a large-scale multi-touch system that’s particularly suited to being built into things like tables or retail displays. Surface is in use in a variety of places, including Disney’s Tomorrowland resort and during MSNBC’s election coverage.

Because of Surface’s large scale and likely uses, it accepts input not just from multiple fingers at once, but from multiple users at once. This makes it particularly suited to public spaces. In addition to multi-touch, Surface also has object recognition capabilities, which allow users to place physical objects on the screen to trigger different digital responses.

Apple Products

Apple ProductsImage source.

Apple has been a leader in implementing multi-touch technology for a few years now. The iPhone was the first mainstream consumer product to use multi-touch, and the technology has since appeared on the iPod Touch, the MacBook trackpad, the Magic Mouse, and soon, the iPad. Multi-touch has become a key part of the user experience in both Mac OS X and the iPhone OS; everything from scrolling to zooming to custom gestures can be carried out through the multi-touch interface.

Mobile Phones

In addition to the iPhone, a number of other mobile devices have multi-touch capabilities. The Palm Pre and Pixi, the Motorola Droid (though multi-touch is disabled in the U.S.), and the HTC Hero and HD2 all have multi-touch capabilities.

For the most part, these phones use multi-touch for simple tasks like zooming in and out when browsing the web. Usability is greatly improved in most cases because of the inclusion of multi-touch, especially when it comes to manipulating on-screen graphics and images.

What user interface are you most excited about?

What user interface technologies and projects are you eager to see the most? Share your thoughts in the comments.

About the Author

Cameron Chapman is a professional web and graphic designer with over 6 years of experience in the industry. She’s also written for numerous blogs such as Smashing Magazine and Mashable. You can find her personal web presence at Cameron Chapman On Writing. If you’d like to connect with her, check her out on Twitter.

35 Comments

Matthew Heidenreich

February 11th, 2010

i’m ready to be the old guy who doesn’t know what all the “kids” are using these days. Everything seems to being moving at such a fast pace, we don’t really know what will be capable in 20 years.

Imon

February 12th, 2010

Great article, the thought control UI seem’s interesting

Simon Carr

February 12th, 2010

Whoa. Some of this technology is probably a few years away, but still interesting… Touch Screen interfaces (like the apple products) and Motion Sensing devices (like the Wii) will probably be the most dominant over the next few years.

There are a lot of arguments against Virtual Reality technology and advancing computer AI too quickly… It is a slippery slope.

Rafael G. Lepper

February 12th, 2010

hello,

I’m very surprised you didn’t include Pranav Mistry’s Sixth Sense technology. It’s really amazing, a great concept developed by a real genious, and the best part is that the hardware for it is quite simple and already here, and the software is going to be open source.

http://www.ted.com/talks/lang/eng/pranav_mistry_the_thrilling_potential_of_sixthsense_technology.html

5 Best Linux compatible media players

February 12th, 2010

It is very important to know the media players of Linux. Because there are not many of them pre installed. So here are some I can recommend from my experience.

10GUI

February 12th, 2010

You should mention 10GUI. Although it’s a concept, its probably the most practical. http://10gui.com/.

Callum Chapman

February 12th, 2010

Great article, Cameron! Really looking forward to these flexible screens – they’re using a similar technology in a new range of (very expensive) HD LCD TVs that are so thin they can be curved. They’re working hard on it at the moment to make them much more affordable at a larger scale, and even how to make them thin enough to be able to purchase affordable ‘digital’ magazines and newspapers with moving pictures…like in Harry Potter! Not any time too soon, though!

If you’re wondering where I heard about this, check out The Gadget Show (http://fwd.five.tv/gadget-show) :)

Saptarshi

February 12th, 2010

Lovely article. The video posted by Google (from TED) of Google Earth running on 8 screens each with it’s separate machine is sure one of the most stunning User Experience I have seen. The video URL:

http://www.youtube.com/watch?v=atV2foTBbyE&feature=player_embedded

Brian

February 12th, 2010

Shouldn’t you forget Belgian based company Softkinetic that actually created the controller-less gesture recognition technology used by Natal…

Mea Poulpa

February 12th, 2010

I think it’s not sci-fi. It’s time to have a real interface between body and metal as in futurist role playing games.

It has yet begun : cybernetics : new interface with tongue to replace eyes http://www.youtube.com/watch?v=xNkw28fz9u0 . We can imagine greater implants like in shadowrun http://en.wikipedia.org/wiki/Shadowrun : night/thermo vision, interfaces with military weapons or with Augmented reality TV emissions/games…
Implants to inject medicines products regularly (I’ve heard you can inject sterility pills for men, available nowadays or in France implants to recognize pets, under their skin),

You can do the same with human beings (advanced biometrics) : http://en.wikipedia.org/wiki/Microchip_implant_%28animal%29

Memories and program for abilities connected on your brain (hope this won’t bug or burn), jack connectors on neck or brain !

Clones & Drones armies (ah no that is in Star Wars !)

Today’s interfaces really are prehistorics !

Michael

February 12th, 2010

Gaming for me – with 3D screens, augmented reality and tracking technologies I can’t wait to see what they can come up with!

Adit Gupta

February 12th, 2010

Although the article is good,it seems much similar to an article published on UX booth – http://www.uxbooth.com/blog/the-future-of-interface-design/

App Sheriff

February 12th, 2010

Can’t wait to see them coming in the future.

Codesquid

February 12th, 2010

I wonder how much longer the traditional mouse and keyboard will have the majority market share of web browsing? Touch screens will likely take a large percentage in the coming years, and who knows what else may happen beyond that.

Jordan Walker

February 12th, 2010

For me, I will drag my feet on those products. I do not want to be so connect that I get stuck.

Matthew Craig

February 12th, 2010

EXCELLENT post. Great read.

draez

February 12th, 2010

you blew it with the last example. iPad…. please go away

FiL

February 12th, 2010

The Mythbusters showed that fingerprint scanners are far from, “nearly-foolproof.”

MEMS Industry Group

February 12th, 2010

Another MEMS-based user input technology currently under development is breath-controlled UI. A company called Zyxio demonstrated its sensawaft technology at CES 2010: http://memsblog.wordpress.com/2010/01/08/mems-at-ces-2010-pico-projectors-low-power-displays-hands-free-user-interface-more/

Mimi Flynn

February 12th, 2010

Don’t forget about the Nintendo Power Glove: “Its so bad.”

Vivek

February 12th, 2010

well i would like to add neural networks and nano technology in this they are one of the biggest future of user interface.
Super computers and robots are as usual

neil young

February 12th, 2010

omg the future is apple then…

plrang

February 12th, 2010

Sorry but NO WIRES allowed, thank God

Eko Setiawan

February 12th, 2010

All this technology is very cool, but we certainly hope the technology will make us become more productive, not to be lazy and rely on all the technology.
Hi, thanks for share :)

Scott Barnes

February 13th, 2010

I’ll just end this post for me as being “Noice!”. I love the future of UX :)

Paulo

February 14th, 2010

Regarding Multi-touch interfaces
The Microsoft Surface device is closer to Gesture Recognition that actual Multi-touch technology. It’s functionality is provided from the five cameras in the table recording user action. http://arstechnica.com/gadgets/news/2007/05/what-lurks-below-microsofts-surface-a-qa-with-microsoft.ars

In that respect, Jeff Han’s FTIR product for multitouch computing is a valid alternative.
http://www.perceptivepixel.com/
http://www.ted.com/talks/jeff_han_demos_his_breakthrough_touchscreen.html

gary maloney

February 14th, 2010

Telepathy or Telekinesis? and When will they finally answer the question can the motor cortex be read at scalp level?

Charlie Bunker

February 17th, 2010

Great article. I’d like to see a voice engine that can recognize any language and translate it to a language you understand.

Chris

February 18th, 2010

“The biggest hurdles to this technology at the moment are delays in communications over long distances, though there are already advances happening in those areas that may make it a non-issue within the next few years.” — Somebody discovered subspace then?

Michiels

March 24th, 2010

Cameron Chapman,

A feature that sums up the question well. Twitter: Michielsmarc

Von

July 31st, 2010

Great article

Michael

October 31st, 2010

Awesome, thanks for sharing this article!

Phil

November 11th, 2010

Great article. I would be thrilled to see most of these things in mass market.
My sons will live beautiful things!

Mangirdas

January 19th, 2011

I think the future of WEB content management is a drag and drop interface. Now we are developing a new generation CMS where you can browse your website as a regular user and change all element inline or drag in new stuff.

Bob

June 13th, 2011

You are forgetting about LCARS. Here are some interesting point we can adopt for the future UI:

http://andrewlowndes.wordpress.com/2011/06/02/lcars-ui-of-the-future/
