Balanced Integration of COTS and Human Factors: An Evolved Approach

In our previous blog posts, we explored the benefits and limitations of integrating Commercial-Off-The-Shelf (COTS) systems, with a particular emphasis on Human Factors Engineering. We highlighted the need for a delicate balance between the advantages of COTS and the criticality of Human Factors Integration (HFI) to ensure successful system performance and user satisfaction. In this blog post, we delve into an evolved approach known as “Balanced Integration of COTS” (BIC). The BIC approach addresses the challenges of COTS integration by emphasizing flexible adaptation and contextual understanding. Join us as we uncover the steps of the BIC approach and its potential to optimize both technical assurance and human-system interaction.

The BIC Approach

The Balanced Integration of COTS (BIC) approach seeks to harmonize COTS products with the principles of Human Factors Integration, recognizing the inherent limitations of COTS systems and the specific context of use. By doing so, the BIC approach strives to achieve an optimal fit between COTS products and complex technical systems.

Step 1: Context of Use Comparison

The BIC approach commences with an initial analysis that compares the assumptions inherent in the COTS product’s context of use with the specific context of the new system. Factors such as user skills, task scenarios, operational environment, and pace of operations are evaluated to judge the extent to which the assumptions made for the COTS product apply to the new context.

Step 2: Product Baseline

Establishing the COTS product baseline is a fundamental step in the BIC approach. This includes a comprehensive understanding of the product’s features, limitations, previous uses, approvals, and the original context of use assumptions. This baseline serves as a crucial reference point for subsequent analyses.

Step 3: Gap Analysis

The BIC approach incorporates a gap analysis based on the context of use comparison and the product baseline. By identifying the differences between the COTS product’s capabilities and assumptions and the needs of the new context, potential issues and challenges can be pinpointed for further consideration.

Step 4: Flexible Adaptation Plan

Building on the results of the gap analysis, the BIC approach develops a flexible adaptation plan to address the identified gaps. This plan may involve adapting the COTS product (where feasible), modifying other elements of the system (e.g., procedures or training), or a combination of both. The plan is designed to be flexible, taking into account any constraints identified in the contextual analysis.

Step 5: Continuous Evaluation

Once the COTS product is integrated into the system, continuous monitoring and evaluation become paramount in the BIC approach. This ongoing evaluation provides feedback that can be used to make further adaptations and improvements, ensuring that the product continues to meet user needs and contributes positively to overall system performance.

Challenges and Future Work

The BIC approach, with its emphasis on flexible adaptation and contextual understanding, addresses common problems in traditional COTS integration approaches. However, challenges remain, particularly in striking the right balance between assurance activities and efficiency. While COTS products are expected to require less assurance due to their prior use and validation, the BIC approach incorporates a reasonably detailed HFI activity, which may initially seem counterintuitive.

Future work will focus on refining the BIC approach to better balance assurance activities with efficiency. By fine-tuning the approach and integrating it with practical constraints, organizations can achieve a more comprehensive and effective integration of COTS systems while maintaining the benefits they offer.

Conclusion

The Balanced Integration of COTS (BIC) approach stands as a promising evolution in harmonizing COTS products with the principles of Human Factors Integration. By considering the specific context of use and acknowledging the limitations of COTS systems, the BIC approach optimizes both technical assurance and human-system interaction. Continuous evaluation ensures that the integration remains attuned to user needs and adapts to dynamic operational environments. While challenges persist, the BIC approach opens new possibilities for organizations seeking the best of both worlds – the cost-effectiveness and quick deployment of COTS products, coupled with the paramount focus on user-centric design and system performance. In our final blog post of this series, we will conclude our exploration with a synthesis of the key insights and the way forward in achieving a successful and balanced integration of COTS and Human Factors. Join us as we bring this journey to a compelling close, one that resonates with the core principles of technical engineering excellence.

The Benefits and Limitations of COTS in Human Factors Integration

In our previous blog post, we explored the challenges and significance of Human Factors Integration (HFI) in the context of Commercial-Off-The-Shelf (COTS) systems. Now, we delve deeper into the promise of COTS, discussing how HFI can (in theory) unlock the proposed benefits.

While COTS systems offer cost-effectiveness and quick deployment, we must also acknowledge the inherent trade-offs in their integration. This blog post will shed light on the benefits of COTS, as well as the potential challenges they bring when harmonizing with Human Factors principles.

The proliferation of COTS systems can be attributed to several compelling advantages that make them appealing to organizations across various technical domains.

1. Cost-Effectiveness: COTS products offer an economical alternative to custom-built solutions. Their pre-existing design and widespread availability mean reduced development costs and faster implementation, saving valuable time and resources for businesses and projects.

2. Quick Deployment: When time is of the essence, COTS products provide a ready-to-use solution that can be swiftly integrated into existing systems. This is particularly beneficial in scenarios where organizations need to deploy new capabilities or replace outdated systems promptly.

3. Proven Track Record: COTS products have often been employed across diverse industries, accumulating a track record of successful use. This history of prior deployment can instill confidence in their reliability and effectiveness, providing organizations with an assurance of their suitability.

4. Vendor Support and Maintenance: Established vendors typically offer technical support, maintenance, and regular updates for their COTS products. This relieves organizations of the burden of maintaining the product in-house, especially for those lacking the necessary expertise.

Limitations of COTS in Relation to Human Factors Integration

Despite their benefits, COTS systems are not without their limitations, especially concerning their integration within complex technical environments.

1. Limited Flexibility: COTS products are designed to cater to a wide market, making them inherently less flexible when compared to bespoke solutions. Their fixed design may not always align perfectly with the specific needs and context of a particular system, necessitating adaptations that may be challenging to achieve.

2. Assumption vs. Reality Mismatch: While COTS products may come with a proven track record, it is essential to recognize that their past performance or approval may not directly translate to success in new contexts. Assumptions made based on prior use or approval may overlook critical factors in the specific context of use, leading to potential inefficiencies or usability challenges.

3. Overlooking Specific Context of Use: Approaches such as Existing Product Approvals and Grandfather Rights may inadvertently overlook the specific context of use for a COTS product. Reliance on previous approvals or extended use history may not consider the unique operational environment, user characteristics, or tasks, introducing new challenges that were not considered during initial approvals.

Balancing COTS and HFI

The use of COTS products brings inherent trade-offs in terms of flexibility and alignment with Human Factors principles. Striking the right balance between the advantages of COTS and the criticality of HFI is paramount for successful integration. In our next blog post, we will present an evolved approach, “Balanced Integration of COTS” (BIC), that seeks to address these challenges by emphasizing flexible adaptation and contextual understanding. By considering the specific context of use and the limitations of COTS products, organizations can optimize both technical assurance and human-system interaction, leading to enhanced system performance and user satisfaction. Stay tuned for an in-depth exploration of the BIC approach and its potential for harmonizing COTS and HFI in technical industries.

COTS systems offer compelling advantages, including cost-effectiveness, quick deployment, and a proven track record. However, their integration within complex technical systems requires careful consideration of their inherent limitations, such as limited flexibility and potential assumption vs. reality mismatch. As technical industries continue to embrace COTS products, it becomes increasingly vital to strike a balance between their benefits and the criticality of Human Factors Integration. In the next installment of our blog series, we will present an evolved approach, the “Balanced Integration of COTS,” that addresses these challenges and provides valuable insights into optimizing the integration process. Join us as we delve into this nuanced realm, where the alignment of technology and human factors fosters success in complex engineering endeavors.

Use of Augmented Reality (AR) to assist wayfinding in built environments, Part 3: AR technologies and associated user experiences

Nigel Scard

Principal Human Factors Specialist

In our previous blog post we discussed the wider context of emerging digital technologies, and how these technologies will contribute to the development of ‘Zero UI’ based interaction. In this post we’ll look more closely at the role AR will play in this, some of the associated hardware technologies and the differences in the resultant user experience.

At present, AR is most commonly being introduced for both enterprise and consumer applications based on smart phone or tablet devices. Such devices, because of their widespread use and technology maturity, are the obvious choice for hosting AR software and apps. Indeed, Liv Systems’ current internal project looking at the use of AR as a tool to aid navigation in built environments is based on a prototype app to be hosted on smart phones or tablets. However, whilst these devices may be able to host AR reasonably effectively, it is likely that the most powerful and effective AR experiences will be delivered by smart glasses-based systems.

AR using smart glasses is starting to be trialled and used by many enterprises for a variety of applications. These include collaborative working on designs, presented as virtual objects placed in the real world, and maintenance activities in which digital information is overlaid onto real-world objects. For instance, the wiring within an electrical cabinet could be overlaid onto the user’s view of the cabinet with the cabinet still closed, or the wiring could have digital information overlaid onto it, providing digital labelling and explanatory information.

Perhaps the most widely used AR smart glasses for enterprise use at present are the Microsoft HoloLens, which supports the Microsoft Mixed Reality platform.

The Microsoft HoloLens

 

Example use case for the HoloLens

Whilst smart glasses such as the HoloLens are fine for certain types of enterprise use, and potentially for home-based use, for people to make use of smart glasses in their daily lives we will need to move towards smart glasses which are unobtrusive and look almost exactly like conventional glasses.

Google attempted to introduce smart glasses to the consumer market a few years ago, but they were not very successful, due in part to privacy concerns associated with the built-in camera. Furthermore, they were not a very elegant design, so regardless of the privacy concerns they would have been unlikely to appeal to consumers outside of a very niche market. Over the last few years, though, there have been efforts to design smart glasses which could appeal to a wider market, and in the last year a US- and Canada-based company, ‘North’, has developed a viable consumer product in their ‘Focals’ smart glasses.

These smart glasses feature a laser-projected AR display, a built-in mic and speaker, and a device known as the ‘loop’, worn as a ring, with a tiny joystick functioning as an input device. The app support for the glasses can display message and appointment notifications, calendar, directional and weather information, and texts, and can be used to order Ubers. The glasses are coupled with Alexa, and texting is supported by voice-to-text.

 

‘Focals by North’

Interestingly, AR-based wayfinding assistance is one of the features already developed for the Focals smart glasses, providing further evidence that there is considerable interest in this as one of the most useful application areas for general consumers.

 

Directional information AR provided by ’Focals by North’

Smart glasses technology is still in its early days, but as these technologies mature, and provided their cost and aesthetic appeal prove acceptable to the public, it’s not unreasonable to expect that the uptake of such devices could become widespread, providing the basis for widespread use of AR-based software and apps by the general public.

AR hosted on smart glasses will offer a number of user experience benefits in comparison to AR provided by smart phones or tablets:

  • It will be hands free
  • It will provide a far greater field of view
  • It will be able to exploit stereoscopic vision and hence make use of depth of field
  • It will overlay digital information onto our view of the world in such a way that we do not need to look at a specific device to get the AR view we want; the AR information will therefore be integrated more seamlessly with our visual sensory input
  • It will be more immersive and hence offer a more compelling user experience
  • When combined with natural language-based user interaction, it will enable a user experience which is closer to the concept of ‘Zero UI’

Until such time as smart glasses are widely available, it is nevertheless worthwhile to explore the potential of AR using smart phone or tablet devices. This is because many people may continue to prefer such devices, smart phones still offer the potential to provide interesting and effective AR experiences, and much of what we learn from developing AR for smart phones will be applicable to AR provided by smart glasses.

In our next blog post we’ll go into detail regarding our own research project on the use of AR for wayfinding in built environments, including exploring different approaches to providing wayfinding assistance using AR and the potential benefits to managers of built environment infrastructure which may be provided by the use of AR by their customers/users.

As always, please feel free to get in touch as we’re keen to discuss these topics with industry professionals and academics who are as excited about the potential of these technologies as we are.

Use of Augmented Reality (AR) to assist wayfinding in built environments, Part 2: AR in the wider context of emerging technologies and ‘Zero UI’

In our previous blog post we introduced some of the reasons Liv Systems are interested in exploring the use of AR to assist wayfinding in built environments and gave a brief overview of our own internal research project on this topic.

In this post we’ll discuss how AR aligns with some of the wider developments in emerging digital technologies and the types of user experience they enable.

We are currently living in exciting times in terms of the developments in a number of emerging digital technologies, in particular AR, VR and AI.

These technologies have had their ups and downs over the years, and both VR and AI suffered from some false starts, possibly due to being over-hyped (especially in the case of AI), together with earlier limitations in computer processing power. Yet even during the ‘AI winter’ research on AI progressed, and thanks to greatly improved computing power over the last two decades, together with the new approach to AI in the form of neural networks, AI is now having an enormous impact. VR is also showing far more promise than before, thanks to the advances in computer graphics made possible by improvements in graphics processing units (GPUs), which have been largely driven by the computer gaming industry. AR has yet to have an equally big impact, but the technology driving it is mostly the same as that required for VR, and AR is starting to be used for a number of enterprise applications, as well as to a more limited extent in the consumer market (for instance with the huge popularity of ‘Pokémon Go’).

These technologies should be of great interest to human factors and UX (user experience) professionals, as they are set to fundamentally transform how we interact with the digital world and in the case of AR, even transform how we interact with the real world.

They will achieve this transformation by greatly increasing the bandwidth of the human-system interface. This will be accomplished by providing the basis for a move towards what has become known as ‘Zero UI’. A better phrase might be ‘invisible UI’, as the concept behind Zero UI is that the user interacts directly with digital information and has only minimal awareness of the graphical user interface (GUI) and physical devices required to achieve that interaction. The user interface therefore becomes, in essence, invisible.

A well-known example of Zero UI from popular culture is ‘Jarvis’ from the film Iron Man. Jarvis is Tony Stark’s Virtual Personal Assistant (VPA) and AR based system.

Tony Stark with ‘Jarvis’ from the film ‘Iron Man’

Of course, Jarvis packs a lot of cinematic wow factor, and real-world Zero UI is not likely to be as spectacular. However, Jarvis does present a vision of how interaction with information systems could be provided in the future: by interacting directly with digital information which is visually overlaid onto the real-world view, and through natural language-based interaction with an AI-driven virtual personal assistant.

Natural language UIs are already in widespread use, with the popularity of Alexa, Siri and Google Assistant, in addition to the increasing use of ‘chatbots’. These systems have been made possible by a combination of greatly improved natural language processing (made possible by improvements in AI-based pattern recognition) and neural network-based AI, which has enabled these systems to get better and better at responding appropriately to our verbal requests.

Whilst people may at first have felt a bit weird about talking to their computers or smart phones, the enormous popularity of stand-alone devices such as the Amazon Echo and the Google Home demonstrates that many of us do like talking to our computers after all!

When natural language systems perform effectively, they can perform a task, such as playing a song we want to hear, or provide us with transport-related information such as a journey plan, based on a simple verbalised enquiry which may take just a few effortless seconds. Achieving the same results using a more traditional GUI-based interaction may require multiple clicks, gestures and text inputs on a screen-based system, which is generally more time-consuming and requires greater cognitive effort. In contrast, merely talking to a computer is more intuitive, less effortful and requires virtually no computer literacy. Hence, ironically, the more advanced UIs become and the more we move towards achieving Zero UI systems, the more accessible such systems may become to the less tech-savvy.

Another big advantage of Zero UI is that it will allow us to escape from our screens whilst retaining the rapid access to the digital world we have become accustomed to. Optimists may hope this will enable us to engage more with the real world around us again. Such optimism could prove unfounded, but at least we may be less likely to carelessly step out in front of traffic or walk into lamp posts or each other! Also, it will give our eyes a much-needed rest.

In our next blog post we’ll discuss what AR’s contributions are likely to be within this wider context of emerging technologies, the technology hardware used for AR and some of the possible application areas of AR.

We’d be delighted to hear from you if you’d like to discuss any of these topics with us, share ideas or find out more about our work.