Use of Augmented Reality (AR) to assist wayfinding in built environments, Part 2: AR in the wider context of emerging technologies and ‘Zero UI’

In our previous blog post we introduced some of the reasons Liv Systems are interested in exploring the use of AR to assist wayfinding in built environments and gave a brief overview of our own internal research project on this topic.

In this post we’ll discuss how AR aligns with some of the wider developments in emerging digital technologies and the types of user experience they enable.

These are exciting times for a number of emerging digital technologies, in particular AR, VR and AI.

These technologies have had their ups and downs over the years, and both VR and AI suffered false starts, partly due to over-hyping (especially in the case of AI) and partly due to the limitations of earlier computing power. Yet even during the ‘AI winter’, research on AI progressed, and thanks to greatly improved computing power over the last two decades, together with the rise of neural network based approaches, AI is now having an enormous impact. VR is also showing far more promise than before, thanks to advances in computer graphics made possible by improvements in graphics processing units (GPUs), largely driven by the computer gaming industry. AR has yet to make an equally big impact, but the technology driving it is largely the same as that required for VR, and AR is already being used in a number of enterprise applications as well as, to a more limited extent, in the consumer market (witness the huge popularity of ‘Pokémon Go’).

These technologies should be of great interest to human factors and UX (user experience) professionals, as they are set to fundamentally transform how we interact with the digital world and in the case of AR, even transform how we interact with the real world.

They will achieve this transformation by greatly increasing the bandwidth of the human-system interface, providing the basis for a move towards what has become known as ‘Zero UI’. A better phrase might be ‘invisible UI’: the concept behind Zero UI is that the user interacts directly with digital information, with only minimal awareness of the graphical user interface (GUI) and physical devices required to achieve that interaction. The user interface therefore becomes, in essence, invisible.

A well-known example of Zero UI from popular culture is ‘Jarvis’ from the film Iron Man. Jarvis is Tony Stark’s Virtual Personal Assistant (VPA) and AR-based system.


Tony Stark with ‘Jarvis’ from the film ‘Iron Man’

Of course Jarvis packs a lot of cinematic wow factor, and real-world Zero UI is unlikely to be as spectacular. Nevertheless, Jarvis presents a vision of how we may interact with information systems in the future: engaging directly with digital information, both visually overlaid onto the real-world view and through natural language interaction with an AI-driven virtual personal assistant.

Natural language UIs are already in widespread use, as shown by the popularity of Alexa, Siri and Google Assistant, in addition to the increasing use of ‘chatbots’. These systems have been made possible by greatly improved natural language processing, itself enabled by neural network based AI, which allows them to get better and better at responding appropriately to our verbal requests.

Whilst people may at first have felt a bit weird about talking to their computers or smartphones, the enormous popularity of stand-alone devices such as the Amazon Echo and Google Home demonstrates that many of us do like talking to our computers after all!

When natural language systems work effectively, they can complete a task, such as playing a song we want to hear, or provide us with transport-related information such as a journey plan, based on a simple verbalised enquiry that takes just a few effortless seconds. Achieving the same results through a more traditional GUI may require multiple clicks, gestures and text inputs on a screen-based system, which is generally more time consuming and demands greater cognitive effort. Merely talking to a computer, by contrast, is more intuitive, less effortful and requires virtually no computer literacy. Ironically, then, the more advanced UIs become, and the more we move towards Zero UI systems, the more accessible those systems may become to the less tech-savvy.

Another big advantage of Zero UI is that it will allow us to escape from our screens whilst retaining the rapid access to the digital world we have become accustomed to. Optimists may hope this will enable us to engage more with the real world around us again. Such optimism could prove unfounded, but at least we may be less likely to carelessly step out in front of traffic, or walk into lamp posts or each other! It will also give our eyes a much-needed rest.

In our next blog post we’ll discuss what AR’s contributions are likely to be within this wider context of emerging technologies, the hardware used for AR and some of AR’s possible application areas.

We’d be delighted to hear from you if you’d like to discuss any of these topics with us, share ideas or find out more about our work.

Use of Augmented Reality (AR) to assist wayfinding in built environments: Introduction

Here at Liv Systems we’re currently running an internal project investigating the use of Augmented Reality (AR) as an aid to wayfinding within built environments. AR will assist wayfinding by overlaying digital navigational information onto the real-world view. This is part of our wider strategy to explore the applications and User Experience (UX) of emerging digital technologies, in particular AR, Virtual Reality (VR) and AI.

As such, we’ll be running a series of blog posts on how we believe AR can be used as a powerful and intuitive tool to aid wayfinding within built environments. AR can transform the task of finding our way around such environments from the disorientating and stressful experience it often is, given the current reliance on signage or map-based navigation apps, into a far simpler, far less frustrating one.

We’ll be approaching this topic not only from the point of view of the available technology solutions, but also with a focus on the cognitive aspects of human navigation and the UX issues associated with designing effective AR navigation aids.

An illustration of how directional arrow and animated character based navigational information could be overlaid onto the real-world view at a London Underground station


Simplifying the wayfinding task in unfamiliar environments will benefit a wide range of users, including those with impairments that can make wayfinding particularly challenging. For instance, a range of cognitive impairments, including those associated with early-stage dementia, together with conditions such as autism spectrum disorder and some anxiety-related conditions, can make wayfinding in unfamiliar environments enormously difficult and stressful. In fact, deterioration of navigation ability is known to be one of the earliest signs of dementia, and as such, tests of navigation skills using VR environments are now being introduced as a method of early detection of its onset.

Such cognitive impairments and neurological or psychological conditions effectively act as a barrier that deters many people from making journeys to destinations where they fear encountering such situations. For this reason, in addition to being of great benefit to us all, AR-assisted navigation offers the promise of removing barriers to travel for many people and allowing them to retain their independence.

I am an active participant in the ‘Cognitive Navigation’ (CogNav) Special Interest Group of the Royal Institute of Navigation. The CogNav group is chaired by Professor Kate Jeffery, a behavioural neuroscientist at UCL specialising in the neuroscience of navigation. The group brings together academics and industry professionals from a variety of backgrounds, including neuroscience, cognitive psychology, industrial design, human factors and architecture, with the aim of better understanding the cognitive aspects of human navigation so as to design the best possible solutions to assist it.

As a result of my involvement in the CogNav group, Liv Systems are able to bring the latest knowledge and research findings in this area to our exploration of the use of AR for wayfinding.

AR can be used to assist wayfinding in any large, complex built environment and the wider urban realm. Large built environments that can pose significant wayfinding issues include shopping malls, hospitals, museums, rail stations and airports.

We have already produced some early stage prototypes for concept development and are now in the process of developing a fully interactive navigational AR prototype. We are planning on conducting user research with our AR prototype within a large shopping mall in the near future.

A snapshot from one of our early prototypes, depicting the use of navigational arrows together with faded landmark overlays to help provide an overall sense of orientation

In our next blog post we’ll discuss how AR fits into the wider context of emerging digital technologies and how these technologies may provide the basis for a form of user interaction that has become known as ‘Zero UI’.

Please get in touch if you are interested in our work in this area and would like to know more.