In “Smart glasses transform aircraft production” (Aerospace Manufacturing and Design, Jan/Feb 2017), I discussed how enterprise augmented reality (AR) delivered on wearables is helping aerospace manufacturers and MROs close that skills gap by delivering step-by-step instructions and workflows to workers right in their lines of sight. With instant access to real-time information, workers of all skill levels can efficiently complete complex tasks with minimal formal training.
As more and more aerospace companies and MROs incorporate AR into their digital transformation strategies, it’s important to evaluate the device options available and the underlying technology that will drive the overall experience for the workforce using those devices day-to-day.
AR finds a voice
Interaction paradigms for mobile devices and wearables have come and gone. Gestures such as pinch-to-zoom, popularized by the iPhone and other consumer devices, aren't ideal in industrial settings. Imagine trying to service landing gear shock struts and wheel bearings while juggling equipment in one hand and swiping at a tablet of work instructions with the other. It's awkward and distracting, and it creates inefficiencies for workers who need to keep their hands on the equipment and their eyes on the job. Fortunately, a new interaction paradigm has emerged: voice.
Voice assistants – Apple’s Siri, Amazon’s Alexa, Google Home – are taking the consumer realm by storm. We may not find Alexa on a shop floor, but the speech recognition and natural language processing (NLP) technologies that power these assistants are solidifying a spot in the enterprise. Gartner Research predicts that by 2020, natural-language generation and artificial intelligence (AI) will be standard on 90% of modern business intelligence platforms. Research and advisory firm Forrester says demand for developers who know how to build AR and NLP-based experiences will increase.
Voice in action
Boeing is one aerospace giant combining the power of voice with AR. With Upskill’s Skylight enterprise AR platform, technicians assembling complex wiring harnesses interact with the software on smart glasses using voice commands, remaining hands-free to perform their task. Uniting voice and AR has led to a 25% improvement in productivity and effectively reduced errors to zero.
Similarly, GE Aviation leverages voice to interact with Skylight on Glass – integrated with a Wi-Fi-enabled torque wrench – to properly tighten B-nuts on jet engines. In one pilot program, GE Aviation saw an 8% to 12% efficiency improvement.
Voice recognition technology can also quickly connect a worker to an expert who provides guidance directly through the AR device. This resolves issues faster and helps new, less-experienced workers rapidly get up to speed – closing the skills gap a little tighter.
While voice recognition technology is already making tangible impacts, we haven’t tapped its full potential. Voice is still limited in the vocabulary of words and phrases that will generate an accurate response, and these dictated commands can feel unnatural. We are now moving toward context recognition – the technology must recognize not only what you are saying, but also the context of what you are asking. For example, a technician may ask the system to bring up a manual on smart glasses, but the system doesn’t necessarily know that the worker needs the manual for a Boeing 747, not a 767.
Improving accuracy comes down to context – what’s happening in the environment, and what’s the user’s intent? Capturing that context will also require advances in sensor technology to help the system detect the user’s surroundings. With the right blend of NLP and sensors, additional use cases will emerge for AR-powered wearables.
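The 747-versus-767 ambiguity above can be illustrated with a toy sketch: once the recognized utterance is combined with session context (say, the aircraft model from an open work order), the system can pick the right document or fall back to asking the user. All names here are hypothetical, for illustration only – they are not drawn from any real AR platform.

```python
# Hypothetical sketch: resolving an ambiguous voice command against
# session context, such as the work order already open on the device.

def resolve_command(utterance: str, context: dict) -> str:
    """Combine a recognized utterance with session context to pick a document."""
    if "manual" in utterance.lower():
        aircraft = context.get("aircraft_model")
        if aircraft:
            # Context disambiguates the request: 747 manual, not 767.
            return f"maintenance_manual_{aircraft}.pdf"
        # Without context, the system must prompt the user instead of guessing.
        return "PROMPT: Which aircraft model?"
    return "PROMPT: Command not recognized"

# A work order already open on the smart glasses supplies the missing context.
session = {"aircraft_model": "747", "task": "wiring harness inspection"}
print(resolve_command("Bring up the manual", session))  # -> maintenance_manual_747.pdf
print(resolve_command("Bring up the manual", {}))       # -> PROMPT: Which aircraft model?
```

In a real system the context would come from sensors, the active work order, or prior dialogue turns rather than a hard-coded dictionary, but the principle is the same: the spoken words alone are not enough to resolve intent.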
As the aerospace industry turns to AR technology to close the talent gap and drive toward Industry 4.0, it must consider which methods of interaction will make AR deployments most impactful. Voice is currently the loudest interaction paradigm, especially for workers who need to operate swiftly while keeping their hands free. And, with more seamless interactions, AR’s value on the shop floor will only continue to grow.