In our previous blog post we discussed the wider context of emerging digital technologies, and how these technologies will contribute to the development of ‘Zero UI’ based interaction. In this post we’ll look more closely at the role AR will play in this, some of the associated hardware technologies and the differences in the resultant user experience.
At present, AR is most commonly being introduced, for both enterprise and consumer applications, on smartphone and tablet devices. Because of their widespread use and technological maturity, such devices are the obvious choice for hosting AR software and apps. Indeed, Liv Systems’ current internal project looking at the use of AR as a tool to aid navigation in built environments is based on a prototype app to be hosted on smartphones or tablets. However, whilst these devices can host AR reasonably effectively, it is likely that the most powerful and effective AR experiences will be delivered by smart glasses-based systems.
AR using smart glasses is starting to be trialled and used by many enterprises for a variety of applications. These include collaborative working on designs, presented as virtual objects placed in the real world, and maintenance activities in which digital information is overlaid onto real-world objects. For instance, the wiring within an electrical cabinet could be overlaid onto the user’s view of the cabinet while the cabinet is still closed, or the wiring itself could be annotated with digital labelling and explanatory information.
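To make the mechanics a little more concrete, here is a minimal sketch using Apple’s ARKit, one of the common smartphone AR frameworks (the same ideas apply on smart glasses). It simply pins a text label to a detected vertical surface; the class name and label text are invented for illustration, and a real maintenance system would more likely recognise the specific cabinet using image or object recognition rather than plain surface detection.

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch: pin an explanatory text label to a detected
// surface. The class name and label text are illustrative only.
class MaintenanceOverlayViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Track the device's position and detect vertical surfaces,
        // such as the front panel of an electrical cabinet.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.vertical]
        sceneView.session.run(config)
    }

    // Called by ARKit whenever a new surface anchor is detected.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }

        // A hypothetical wiring annotation, anchored to the surface so
        // it stays in place as the user moves around.
        let label = SCNText(string: "Circuit 3: 24V supply to door actuator", extrusionDepth: 0.5)
        label.font = UIFont.systemFont(ofSize: 10)
        let labelNode = SCNNode(geometry: label)
        labelNode.scale = SCNVector3(0.002, 0.002, 0.002) // scale text down to real-world size
        node.addChildNode(labelNode)
    }
}
```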
Perhaps the most widely used smart glasses for enterprise AR at present are the Microsoft HoloLens, which supports the Microsoft Mixed Reality platform.
Whilst smart glasses such as the HoloLens are fine for certain types of enterprise use, and potentially for home-based use as well, in the long term, for people to make use of smart glasses in their daily lives, we need to move towards devices which are unobtrusive and look almost exactly like conventional glasses.
Google attempted to introduce smart glasses to the consumer market a few years ago, but they were not very successful, due to privacy concerns associated with the built-in camera. Furthermore, the design was not very elegant, so even setting privacy concerns aside, they would have been unlikely to appeal to consumers outside of a very niche market. Over the last few years, though, there have been efforts to design smart glasses which could appeal to a wider market, and in the last year the Canadian company ‘North’ has developed a viable consumer product in its ‘Focals’ smart glasses.
These smart glasses feature a laser-projected AR display, a built-in mic and speaker, and an input device known as the ‘Loop’: a ring worn on the finger with a tiny joystick. The accompanying apps can display message and appointment notifications, calendar, directional and weather information and texts, and can be used to order Ubers. The glasses integrate with Alexa, and texting is supported by voice-to-text.
Interestingly, AR-based wayfinding assistance is one of the features already developed for the Focals smart glasses, providing further evidence of considerable interest in this as one of the most useful application areas for general consumers.
Smart glasses technology is still in its early days, but as it matures, and provided the cost and aesthetic appeal of these devices prove acceptable to the public, it’s not unreasonable to expect their uptake to become widespread, providing the basis for broad use of AR-based software and apps by the general public.
AR hosted on smart glasses will offer a number of user experience benefits in comparison to AR provided by smartphones or tablets:
- It will be hands free
- It will provide a far greater field of view
- It will be able to exploit stereoscopic vision and hence provide genuine depth perception
- It will overlay digital information onto our view of the world without our needing to look at a specific device to get the AR view we want; the AR information will therefore be integrated more seamlessly with our visual sensory input
- It will be more immersive and hence offer a more compelling user experience
- When combined with natural language-based user interaction, it will enable a user experience which is closer to the concept of ‘Zero UI’
Until smart glasses are widely available, it is nevertheless worthwhile to explore the potential of AR on smartphone and tablet devices. This is because many people may continue to prefer such devices, because smartphones still offer the potential to provide interesting and effective AR experiences, and because much of what we learn from developing AR for smartphones will be applicable to AR delivered by smart glasses.
In our next blog post we’ll describe in detail our own research project on the use of AR for wayfinding in built environments, including exploring different approaches to providing wayfinding assistance using AR, and the potential benefits that customers’ and users’ adoption of AR may offer to managers of built-environment infrastructure.
As always, please feel free to get in touch as we’re keen to discuss these topics with industry professionals and academics who are as excited about the potential of these technologies as we are.
In our previous blog post we introduced some of the reasons Liv Systems are interested in exploring the use of AR to assist wayfinding in built environments and gave a brief overview of our own internal research project on this topic.
In this post we’ll discuss how AR aligns with some of the wider developments in emerging digital technologies and the types of user experience they enable.
These are exciting times for developments in a number of emerging digital technologies, in particular AR, VR and AI.
These technologies have had their ups and downs over the years, and both VR and AI suffered from false starts, possibly due to over-hype (especially in the case of AI) together with the limitations of earlier computer processing power. Yet even during the ‘AI winter’ research on AI progressed, and thanks to greatly improved computing power over the last two decades, together with the resurgence of neural network approaches, AI is now having an enormous impact. VR is also showing far more promise than before, thanks to advances in computer graphics made possible by improvements in graphics processing units (GPUs), largely driven by the computer gaming industry. AR has yet to have an equally big impact, yet the technology driving it is mostly the same as that required for VR, and AR is starting to be used for a number of enterprise applications, as well as, to a more limited extent, in the consumer market (for instance with the huge popularity of ‘Pokémon Go’).
These technologies should be of great interest to human factors and UX (user experience) professionals, as they are set to fundamentally transform how we interact with the digital world and, in the case of AR, even how we interact with the real world.
They will achieve this transformation by greatly increasing the bandwidth of the human–system interface, providing the basis for a move towards what has become known as ‘Zero UI’. A better phrase might be ‘invisible UI’, as the concept behind Zero UI is that the user interacts directly with digital information, with only minimal awareness of the graphical user interface (GUI) and physical devices required to achieve that interaction. The user interface therefore becomes, in essence, invisible.
A well-known example of Zero UI from popular culture is ‘Jarvis’ from the film Iron Man. Jarvis is Tony Stark’s Virtual Personal Assistant (VPA) and AR-based system.
Of course, Jarvis packs a lot of cinematic wow factor, and real-world Zero UI is unlikely to be as spectacular. However, Jarvis does present a vision of how interaction with information systems could work in the future: the user engages directly with digital information, both visually, overlaid onto the real-world view, and through natural language interaction with an AI-driven virtual personal assistant.
Natural language UIs are already in widespread use, with the popularity of Alexa, Siri and Google Assistant, in addition to the increasing use of ‘chatbots’. These systems have been made possible by greatly improved natural language processing, enabled by neural network-based AI, which has allowed them to get better and better at responding appropriately to our verbal requests.
Whilst people may at first have felt a bit weird about talking to their computers or smartphones, the enormous popularity of stand-alone devices such as the Amazon Echo and Google Home demonstrates that many of us do like talking to our computers after all!
When natural language systems perform effectively, they can carry out a task, such as playing a song we want to hear, or provide us with information such as a journey plan, based on a simple verbalised enquiry which takes just a few effortless seconds. Achieving the same results through a traditional GUI may require multiple clicks, gestures and text inputs on a screen-based system, which is generally more time consuming and requires greater cognitive effort. In contrast, simply talking to a computer is more intuitive, less effortful and requires virtually no computer literacy. Ironically, then, the more advanced UIs become, and the closer we move towards Zero UI systems, the more accessible such systems may become to the less tech-savvy.
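To illustrate just how compressed this style of interaction is, below is a deliberately simple Swift sketch which maps an already-transcribed utterance onto a task. Real assistants use trained neural language models rather than keyword matching, and the intents here are invented for illustration; the point is the shape of the interaction: one utterance, one action.

```swift
// A toy intent matcher -- real assistants use trained language
// models, but the interaction shape is the same: one utterance
// is mapped directly onto one task.
enum Intent {
    case playSong(title: String)
    case planJourney(destination: String)
    case unknown
}

func parse(_ utterance: String) -> Intent {
    let text = utterance.lowercased()
    if text.hasPrefix("play ") {
        return .playSong(title: String(text.dropFirst("play ".count)))
    }
    if let marker = text.range(of: "journey to ") {
        return .planJourney(destination: String(text[marker.upperBound...]))
    }
    return .unknown
}

// One short utterance replaces a whole sequence of GUI steps.
switch parse("Play Blue Monday") {
case .playSong(let title):
    print("Now playing: \(title)")
case .planJourney(let destination):
    print("Planning a journey to \(destination)")
case .unknown:
    print("Sorry, I didn't catch that.")
}
```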
Another big advantage of Zero UI is that it will allow us to escape from our screens whilst retaining the rapid access to the digital world we have become accustomed to. Optimists may hope this will enable us to engage more with the real world around us again. Such optimism could prove unfounded, but at least we may be less likely to carelessly step out in front of traffic or walk into lamp posts or each other! It will also give our eyes a much-needed rest.
In our next blog post we’ll discuss the contributions AR is likely to make within this wider context of emerging technologies, the hardware used for AR, and some of AR’s possible application areas.
We’d be delighted to hear from you if you’d like to discuss any of these topics with us, share ideas or find out more about our work.
Here at Liv Systems we’re currently running an internal project investigating the use of Augmented Reality (AR) as an aid to wayfinding within built environments. AR will assist wayfinding by overlaying digital navigational information onto the real-world view. This is part of our wider strategy to explore the applications and User Experience (UX) of emerging digital technologies, in particular AR, Virtual Reality (VR) and AI.
As such, we’ll be running a series of blog posts on how we believe AR can be used as a powerful and intuitive tool to aid wayfinding within built environments, transforming the task of finding our way around from the disorientating and stressful experience it often is today, reliant on signage or map-based navigation apps, into a far less frustrating, greatly simplified one.
We’ll be approaching this topic not only from the point of view of the technology solutions available, but also with a focus on the cognitive aspects of human navigation and the UX issues involved in designing effective AR navigation aids.
Simplifying the wayfinding task in unfamiliar environments will benefit a wide range of users, including those with impairments which can make wayfinding particularly challenging. For instance, a range of cognitive impairments, including those associated with early-stage dementia, together with conditions such as autism spectrum disorder and some anxiety-related conditions, can make wayfinding in unfamiliar environments enormously difficult and stressful. In fact, deterioration of navigational ability is known to be one of the earliest signs of dementia, and as such, tests of navigation skills using VR environments are now being introduced as an early-warning method for detecting its onset.
Such cognitive impairments and neurological or psychological conditions effectively deter many people from making journeys to destinations where they feel they may encounter such difficulties. For this reason, in addition to being of great benefit to us all, AR-assisted navigation offers the promise of removing barriers to travel for many people, allowing them to retain their independence.
I am an active participant in the ‘Cognitive Navigation’ (CogNav) Special Interest Group (https://rin.org.uk/page/CogNav) of the Royal Institute of Navigation. The CogNav group is chaired by Professor Kate Jeffery, a behavioural neuroscientist at UCL specialising in the neuroscience of navigation. It brings together academics and industry professionals from a variety of backgrounds, including neuroscience, cognitive psychology, industrial design, human factors and architecture, with the aim of better understanding the cognitive aspects of human navigation so as to design solutions that best support it.
As a result of my involvement in the CogNav group, Liv Systems are able to bring the latest knowledge and research findings in this area to our exploration of the use of AR for wayfinding.
AR can be used to assist wayfinding in any large, complex built environment and the wider urban realm. Large built environments that can pose significant wayfinding issues include shopping malls, hospitals, museums, rail stations and airports.
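To give a flavour of what sits behind such an aid, here is a brief sketch of a possible routing layer, assuming (as is common in indoor navigation, though by no means the only approach) that the walkable space is modelled as a graph of waypoints. The AR layer would then render arrows or a path along the returned route. The waypoint names are invented, and a production system would weight edges by distance and accessibility rather than using a plain breadth-first search.

```swift
// A named point in the walkable space of a building.
struct Waypoint: Hashable {
    let name: String
}

// Breadth-first search over an adjacency list of waypoints.
// Returns the sequence of waypoints from start to goal, if any.
func route(from start: Waypoint, to goal: Waypoint,
           in graph: [Waypoint: [Waypoint]]) -> [Waypoint]? {
    var frontier = [start]
    var visited: Set<Waypoint> = [start]
    var cameFrom: [Waypoint: Waypoint] = [:]

    while !frontier.isEmpty {
        let current = frontier.removeFirst()
        if current == goal {
            // Walk the predecessor chain backwards to rebuild the path.
            var path = [goal]
            var node = goal
            while let previous = cameFrom[node] {
                path.append(previous)
                node = previous
            }
            return Array(path.reversed())
        }
        for neighbour in graph[current, default: []] where !visited.contains(neighbour) {
            visited.insert(neighbour)
            cameFrom[neighbour] = current
            frontier.append(neighbour)
        }
    }
    return nil // no route exists
}

// A tiny illustrative corridor map for a shopping mall.
let entrance = Waypoint(name: "Main entrance")
let atrium = Waypoint(name: "Atrium")
let escalator = Waypoint(name: "Escalator, level 1")
let foodCourt = Waypoint(name: "Food court")

let corridorMap: [Waypoint: [Waypoint]] = [
    entrance: [atrium],
    atrium: [entrance, escalator],
    escalator: [atrium, foodCourt],
    foodCourt: [escalator]
]

if let path = route(from: entrance, to: foodCourt, in: corridorMap) {
    // Prints: Main entrance -> Atrium -> Escalator, level 1 -> Food court
    print(path.map { $0.name }.joined(separator: " -> "))
}
```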
We have already produced some early-stage prototypes for concept development and are now developing a fully interactive navigational AR prototype. We plan to conduct user research with this prototype in a large shopping mall in the near future.
In our next blog post we’ll discuss how AR fits into the wider context of emerging digital technologies and how these technologies may provide the basis for a form of user interaction that has become known as ‘Zero UI’.
Please get in touch if you are interested in our work in this area and would like to know more.