No UI, No Problem

UI – short for user interface – is a major priority for designers of most pieces of technology. However, with apps like Magic and Facebook M, companies are starting to take steps toward UI-less devices, or what some call zero UI. Zero UI is a paradigm in which our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment.

On the surface, Magic and apps like it look like simple messaging apps, but underneath they rely heavily on artificial intelligence (AI) to help us complete tasks, like finding a place to eat. And while that may not seem like a huge feat, it is actually quite important: these apps are essentially training AI. Messaging helps computers understand humans because there is less to interpret than with other inputs. At this stage of AI development, computers are only just beginning to see images and hear speech. Much like a child, they use observation and experience to learn about the world around them.
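To see why plain text is easier to interpret, here is a minimal sketch of how a messaging assistant might turn a chat message into a task. The keyword table, `Task` class, and intent names are all illustrative assumptions, not the actual approach used by Magic or Facebook M, which rely on far more sophisticated language models.

```python
from dataclasses import dataclass

@dataclass
class Task:
    intent: str   # what the user appears to want done
    details: str  # the raw message, kept for a human or model to refine

# Very rough keyword-to-intent table; a production assistant would use a
# trained language model rather than a lookup like this.
INTENT_KEYWORDS = {
    "eat": "find_restaurant",
    "restaurant": "find_restaurant",
    "flight": "book_flight",
    "ride": "book_ride",
}

def interpret(message: str) -> Task:
    """Guess the user's intent from a single chat message."""
    lowered = message.lower()
    for keyword, intent in INTENT_KEYWORDS.items():
        if keyword in lowered:
            return Task(intent=intent, details=message)
    return Task(intent="unknown", details=message)

print(interpret("Can you find me a place to eat near the office?"))
# Task(intent='find_restaurant', details='Can you find me a place to eat near the office?')
```

Even this toy version shows the appeal of text: the request arrives as words, so there is no image or audio signal to decode before the system can act on it.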

Zero UI is not a brand-new concept. Amazon's Echo is a good example, functioning entirely through voice recognition. Nest is another, and one that shows AI is capable of learning and anticipating: it tracks your behavior as you use it and tries to anticipate your actions in the future. Companies like Google are pushing these concepts even further. Google's Project Soli lets us interact with devices more naturally than by tapping a few buttons on a touchscreen. Soli analyzes hand gestures and translates them into actions. For example, if you move your hand as if turning a dial on a radio, the device's volume goes up or down depending on which direction you rotate your hand.
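As a rough illustration of that dial gesture, here is a toy sketch that turns a stream of hand-rotation readings into volume steps. The angle readings, the step threshold, and the function names are hypothetical; real Soli hardware works on raw radar signal features, not pre-computed rotation angles.

```python
def rotation_to_volume_steps(angles, degrees_per_step=15):
    """Convert successive rotation readings (degrees) into +/- volume steps."""
    steps = []
    for previous, current in zip(angles, angles[1:]):
        delta = current - previous
        if abs(delta) >= degrees_per_step:
            steps.append(1 if delta > 0 else -1)  # clockwise raises volume
    return steps

# Simulated gesture: the hand rotates clockwise, then back counter-clockwise.
volume = 50
for step in rotation_to_volume_steps([0, 20, 40, 55, 40, 20]):
    volume = max(0, min(100, volume + step * 5))  # 5-point change per detected turn
print(volume)
```

The point of the sketch is the mapping itself: once a gesture can be read as a signal, it can drive the same actions a physical knob or on-screen slider would.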

Zero UI interfaces are meant to minimize the costs we pay to use technology: time, money, physical effort, social deviance, and departures from routine. AI today is being honed to make UI-less interfaces more automatic and predictive so they can meet those goals. A decade or two from now, our devices may well know us better than we know ourselves.