The Future of UI
Science fiction has long influenced technology by inspiring future generations with ideas that once seemed inconceivable. Star Trek is often credited with inspiring tablets and flip phones, and films like Minority Report and Iron Man may well foretell a future of touch- and gesture-sensitive augmented reality interfaces.
Body-technology relationships are already reflected in wearable technology such as the Apple Watch, Fitbit, and music players embedded in clothing. Some of these devices accurately measure sleep quality and basal body temperature, while others claim to monitor blood sugar non-invasively for Type 1 and Type 2 diabetics.
But most of these technologies still rely on screen-based UIs, or otherwise sync to them. What other forms of human-computer interaction are developing, and what form do they take?
“As we move away from screens, a lot of our interfaces will have to become more automatic, anticipatory, and predictive,” says designer Andy Goodman.
The term Zero UI was coined by Andy Goodman to describe the notion of an invisible or screenless UI. Devices like the Amazon Echo, Microsoft Kinect, and Nest are all examples of Goodman’s Zero UI. What they have in common is a move away from screens and touchscreens, toward interfacing with technology in more natural ways.
Internet of Things
The concept of the Internet of Things (IoT) dates back to Peter T. Lewis, who coined the term as early as 1985. It refers to the internetworking of physical devices embedded with electronics, software, and sensors, enabling these ‘smart objects’ to exchange data and be controlled remotely. Within IoT lies the opportunity to integrate the physical world into computer-based systems, with improved efficiency and accuracy as a result. Hopes are high: the IoT is expected to usher in an age of automation in nearly every field, as well as smart energy management through smart grids and smart cities.
Current market examples include home automation: lighting, heating via smart thermostats, ventilation and air conditioning systems, and washers, dryers, vacuums, and refrigerators that use Wi-Fi for remote monitoring.
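The remote monitoring and control described above boils down to a device exchanging small structured messages with a hub or cloud service. A minimal sketch, assuming a hypothetical smart thermostat that reports state and accepts commands as JSON (the device name and message fields are invented for illustration):

```python
import json

class SmartThermostat:
    """Toy model of an IoT 'smart object': it reports its state as a
    JSON message and accepts remote commands in the same format."""

    def __init__(self, device_id, target_c=20.0):
        self.device_id = device_id
        self.target_c = target_c
        self.current_c = 18.5  # would come from a real temperature sensor

    def report_state(self):
        # Telemetry a hub or cloud service could poll over Wi-Fi
        return json.dumps({
            "device": self.device_id,
            "current_c": self.current_c,
            "target_c": self.target_c,
        })

    def handle_command(self, message):
        # Remote control message, e.g. {"set_target_c": 22.5}
        command = json.loads(message)
        if "set_target_c" in command:
            self.target_c = float(command["set_target_c"])
        return self.report_state()

thermostat = SmartThermostat("hallway-1")
state = thermostat.handle_command('{"set_target_c": 22.5}')
```

In practice such messages would travel over a protocol like MQTT or HTTP, but the shape of the exchange is the same: telemetry out, commands in.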
How these screenless smart devices communicate with people is still largely trial and error, and constantly evolving. Designers experimenting with ambient interfaces face the daunting challenge of creating a universal language of light and sound. There is currently no style guide or set of best practices for how objects should communicate with humans using LED lights, colours, patterns, or sound. There are some implied standards: red or amber lights suggest errors and warnings, while rapidly pulsing lights generally mean that something requires attention. But overall, the field has no established symbols or conventions and is largely creating a language where before there was none.
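Because no standard exists, each product team effectively invents its own state-to-light vocabulary. A sketch of what such a (purely hypothetical) convention might look like in code, using the implied standards mentioned above as a starting point:

```python
# A hypothetical mapping from device states to LED signals. There is no
# established standard, so these state names, colours, and patterns are
# illustrative only.
LED_SIGNALS = {
    "ok":        {"colour": "green", "pattern": "solid"},
    "warning":   {"colour": "amber", "pattern": "solid"},
    "error":     {"colour": "red",   "pattern": "solid"},
    "attention": {"colour": "white", "pattern": "fast_pulse"},
    "busy":      {"colour": "blue",  "pattern": "slow_pulse"},
}

def led_signal(state):
    """Look up the light a device should show for a given state,
    falling back to 'attention' for anything unrecognised."""
    return LED_SIGNALS.get(state, LED_SIGNALS["attention"])
```

The hard design problem is not the lookup table but agreeing on its contents across devices, so that a pulsing white light means the same thing on a thermostat as on a speaker.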
Haptic feedback, or simply ‘haptics’, is the use of the sense of touch in interface design to communicate or provide feedback. It is already well established in mobile phones and watches, and in gaming, where controllers, joysticks, and steering wheels provide feedback through resistance or sensation. Meanwhile, the Teslasuit promises body-wide sensations for gaming purposes.
As with ambient interface design, the standardization of haptics for notifications and communication is still evolving, and currently leaves little room for nuance. The technology as it exists now is quite binary: either you have an alert or you don’t, with all vibrations feeling more or less the same. The sensitivity of our skin and our receptivity to sensation, however, leave a huge opportunity for haptics as a nuanced communication tool. The variety and meaning of vibrations are due to be expanded.
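One way to move beyond binary buzzing is to give each notification type its own rhythm. A minimal sketch, in the spirit of the timing arrays mobile platforms use to drive vibration motors (the pattern names and durations here are invented for illustration):

```python
# Hypothetical vibration vocabulary: each pattern is a list of
# (on_ms, off_ms) pairs. Distinct rhythms let the skin tell
# notification types apart by feel alone.
HAPTIC_PATTERNS = {
    "message":  [(50, 100), (50, 0)],    # two short taps
    "reminder": [(200, 100), (200, 0)],  # two medium buzzes
    "alarm":    [(400, 150)] * 3,        # three long, urgent pulses
}

def total_duration_ms(pattern_name):
    """How long a pattern occupies the motor. Useful for scheduling,
    so overlapping notifications don't blur together."""
    pairs = HAPTIC_PATTERNS[pattern_name]
    return sum(on + off for on, off in pairs)
```

Even this tiny vocabulary illustrates the open design question: which rhythms are distinguishable on the wrist, and which meanings users will actually learn.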
As it currently stands, computers and machines still need us to come to them on their terms and speak their language. The next step is for machines to understand us through our own natural words, behaviours, and gestures. Voice controls, or VUIs, have made significant advances in the past few years. Previously considered the domain of artificial intelligence, voice UIs are the interdisciplinary product of computer science, linguistics, and, to some extent, psychology. Although people generally have little patience for a machine that can’t understand them, and are easily frustrated and quick to ridicule the interface, voice UIs are on the rise and becoming more accurate and advanced. Common examples include Siri and Google, as well as the virtual phone assistants companies use to prompt callers with a series of questions (e.g. your account number, or which department you need) before placing them in a queue to speak with a person. Overall, the technology has improved by leaps and bounds; recognition accuracy has reached an impressive 96 percent, and the future of voice UIs is bright.
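After speech is transcribed, a voice UI still has to decide what the user meant. Real assistants use statistical language-understanding models, but the core step can be sketched with simple keyword rules; the intent names and keywords below are made up for illustration:

```python
# A minimal sketch of the 'understanding' step in a voice UI: the audio
# is assumed to be already transcribed, and the text is mapped onto an
# intent by keyword overlap. The intents here are purely illustrative.
INTENT_KEYWORDS = {
    "check_balance":  {"balance", "account"},
    "get_directions": {"directions", "navigate", "route"},
    "set_reminder":   {"remind", "reminder"},
}

def classify(utterance):
    """Return the intent whose keywords best overlap the utterance,
    or 'unknown' if nothing matches."""
    words = set(utterance.lower().split())
    best_intent, best_overlap = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    return best_intent
```

The gap between this sketch and a production assistant (handling paraphrase, accents, context, and ambiguity) is exactly where the computer science, linguistics, and psychology mentioned above meet.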
AI as the new UI
Some believe that computers should do more than merely understand us: they should know us and our needs before we do. This predictive technology could redefine how we interact with machines and with the world more generally. If UI becomes simplified and invisible, AI can replace UI, and interfaces as we know them would largely disappear. If devices don’t need traditional, manual inputs, but instead listen constantly and anticipate our needs before we do, the opportunities are limitless. By telling the UI what you need done while leaving it to decide how, you arrive at something called Minimum Viable Interaction (MVI).
The key to AI’s success is contextual awareness. The machine only knows what we tell it, so providing more input is the only way to continuously improve that awareness. Google’s Now on Tap uses machine learning and deep linking to predict what information you might need based on your text messages, location, and email, surfacing anything from Yelp reviews and directions to traffic, weather, and showtimes. As AI advances, predictive technology could radically transform the way we interact with machines and positively impact our day-to-day lives with greater efficiency and convenience.
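The principle that more context yields better predictions can be sketched as a toy ranking step: each candidate suggestion declares which context signals it needs, and candidates are ranked by how fully those signals are available. The signal and card names are invented for illustration; real systems like Now on Tap use learned models rather than rules:

```python
# Toy contextual prediction: rank candidate suggestion 'cards' by the
# fraction of their required context signals that are present.
def suggest(context, candidates):
    """More available context -> higher-ranked, more relevant cards."""
    def score(card):
        matched = sum(1 for signal in card["needs"] if signal in context)
        return matched / len(card["needs"])
    return sorted(candidates, key=score, reverse=True)

cards = [
    {"name": "traffic",   "needs": ["location", "calendar_event"]},
    {"name": "showtimes", "needs": ["movie_mention"]},
    {"name": "weather",   "needs": ["location"]},
]

# With a location and a movie mentioned in a message, but no calendar
# event, the fully-satisfied cards outrank the partially-satisfied one.
ranked = suggest({"location": "home", "movie_mention": "Arrival"}, cards)
```

The point of the sketch is the dependency, not the arithmetic: a suggestion engine can only be as anticipatory as the context it is given.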
As designers let their imaginations wander, UI ideas become stranger and wilder, branching as far as nanotechnology at the cellular level. Embedding technology inside ourselves is already a reality, and using machines for bodily improvement is commonplace (e.g. lasers in surgery). Though we may not become cyborgs tomorrow, a future in which we carry intelligence inside ourselves that can monitor and heal is a very real possibility.
In wearable tech, devices are integrated into our clothes or take the shape of accessories: Bluetooth headsets, smartwatches, Google Glass, and VR headsets all augment our bodies to different extents. Smart contact lenses are an example of technology that is minimally invasive, doesn’t obstruct our usual interactions, and integrates seamlessly with our bodies.
Google’s smart contact lens is a technology that promises to solve the non-invasive glucose-monitoring problem for diabetics by measuring blood glucose levels via tear fluid. It exemplifies the growing trend of adapting technology to our bodies, as opposed to expecting people to adapt to interfaces. When we start thinking of bodies as interfaces, we start designing more intuitive, immersive, and meaningful experiences that fundamentally shift the way we interact with tech, and the role it plays in our lives and health.
Feature image: the Google smart contact lens, courtesy of Tech Times.