Use of Augmented Reality (AR) to assist wayfinding in built environments, Part 2: AR in the wider context of emerging technologies and ‘Zero UI’
In our previous blog post we introduced some of the reasons Liv Systems are interested in exploring the use of AR to assist wayfinding in built environments and gave a brief overview of our own internal research project on this topic.
In this post we’ll discuss how AR aligns with some of the wider developments in emerging digital technologies and the types of user experience they enable.
We are currently living in exciting times in terms of the developments in a number of emerging digital technologies, in particular AR, VR and AI.
These technologies have had their ups and downs over the years, and both VR and AI suffered from false starts, possibly due to over-hyping (especially in the case of AI) combined with the limitations of earlier computer processing power. Yet even during the ‘AI winter’, research on AI progressed, and thanks to greatly improved computing power over the last two decades, together with the new neural network approach, AI is now having an enormous impact. VR is also showing far more promise than before, thanks to the advances in computer graphics made possible by improvements in graphics processing units (GPUs), which have been largely driven by the computer gaming industry. AR has yet to have an equally big impact, but the technology driving it is mostly the same as that required for VR, and AR is starting to be used in a number of enterprise applications as well as, to a more limited extent, in the consumer market (for instance with the huge popularity of ‘Pokémon Go’).
These technologies should be of great interest to human factors and UX (user experience) professionals, as they are set to fundamentally transform how we interact with the digital world and in the case of AR, even transform how we interact with the real world.
They will achieve this transformation by greatly increasing the bandwidth of the human-system interface, providing the basis for a move towards what has become known as ‘Zero UI’. A better phrase might be ‘invisible UI’, as the concept behind Zero UI is that the user interacts directly with digital information, with only minimal awareness of the graphical user interface (GUI) and physical devices required to achieve that interaction. The user interface therefore becomes, in essence, invisible.
A well-known example of Zero UI from popular culture is ‘Jarvis’ from the film Iron Man. Jarvis is Tony Stark’s Virtual Personal Assistant (VPA) and AR-based system.
Of course, Jarvis packs a lot of cinematic wow factor, and real-world Zero UI is unlikely to be as spectacular. However, Jarvis does present a vision of how interaction with information systems could be provided in the future: digital information is visually overlaid onto the real-world view, and the user interacts with it directly through natural language conversation with an AI-driven virtual personal assistant.
Natural language UIs are already in widespread use, with the popularity of Alexa, Siri and Google Assistant, in addition to the increasing use of ‘chatbots’. These systems have been made possible by greatly improved natural language processing (enabled by advances in AI-based pattern recognition), with neural-network-based AI allowing them to get better and better at responding appropriately to our verbal requests.
Whilst people may at first have felt a bit weird about talking to their computers or smartphones, the enormous popularity of stand-alone devices such as the Amazon Echo and Google Home demonstrates that many of us do like talking to our computers after all!
When natural language systems perform effectively, they can carry out a task, such as playing a song we want to hear, or provide us with transport-related information such as a journey plan, based on a simple verbalised enquiry that takes just a few effortless seconds. Achieving the same results through a more traditional GUI-based interaction may require multiple clicks, gestures and text inputs on a screen-based system, which is generally more time-consuming and demands greater cognitive effort. In contrast, simply talking to a computer is more intuitive, less effortful and requires virtually no computer literacy. Hence, ironically, the more advanced UIs become and the closer we move towards Zero UI systems, the more accessible they may become to the less tech-savvy.
Another big advantage of Zero UI is that it will allow us to escape from our screens whilst retaining the rapid access to the digital world we have become accustomed to. Optimists may hope this will enable us to engage more with the real world around us again. Such optimism could prove unfounded, but at least we may be less likely to carelessly step out in front of traffic or walk into lamp posts or each other! It will also give our eyes a much-needed rest.
In our next blog post we’ll discuss what AR’s contributions are likely to be within this wider context of emerging technologies, the technology hardware used for AR and some of the possible application areas of AR.
We’d be delighted to hear from you if you’d like to discuss any of these topics with us, share ideas or find out more about our work.