When developing Augmented Reality applications for Smart Glasses, customers are often unsure whether to pin information to an object or point in space, or to use a heads-up display. This article explains our thinking on the matter.
Display Pinned to a Point in Space
Pinning an image to a point in space, or to an object, means anchoring virtual graphics or information at a fixed location or linked object in the real world. Relevant information might sit above a real-world object (such as an asset's name and details floating above it in space), or a user might pin different information streams (work procedures, a remote adviser call, etc.) in virtual space around them while doing a task. The technology requires the glasses to recognise the real-world items the graphics are pinned to, track the movement of the glasses, and compute their position relative to the virtual items, so that when the wearer moves or turns their head, the virtual item stays fixed to the real-world object or pinned point in space, not to the wearer's head (as is the case with a heads-up display).
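The core of this behaviour can be illustrated with some simple pose maths. The sketch below is a minimal 2D (top-down) simplification, not any real smart-glasses API: the pinned graphic keeps a constant world position, and each frame it is re-expressed in the headset's view space using the current head pose, so it appears to stay locked to the real-world object as the head turns. The function name and coordinate conventions (compass-style yaw, +z forward) are illustrative assumptions.

```python
import math

def yaw_to_view(world_point, head_pos, head_yaw):
    """Transform a fixed world-space point into the headset's view space.

    A pinned ("world-anchored") graphic keeps a constant world position;
    re-running this transform every frame with the latest head pose is
    what makes it appear fixed to the real-world object, not to the head.
    2D top-down simplification: x is right, z is forward at zero yaw.
    """
    dx = world_point[0] - head_pos[0]
    dz = world_point[1] - head_pos[1]
    c, s = math.cos(head_yaw), math.sin(head_yaw)
    # Project the world offset onto the head's current right/forward axes.
    return (c * dx - s * dz, s * dx + c * dz)

# A label pinned 2 m in front of the wearer's starting pose.
pin = (0.0, 2.0)
head = (0.0, 0.0)

facing_forward = yaw_to_view(pin, head, 0.0)
# -> (0.0, 2.0): the label sits dead ahead.
turned_right = yaw_to_view(pin, head, math.radians(90))
# -> (-2.0, 0.0): after a 90° right turn, the label is off to the wearer's left,
#    still hovering over the same real-world spot.
```

Real devices do this in 3D with full 6-degree-of-freedom tracking (SLAM), which is why the glasses need to recognise and continuously track the environment; the principle, however, is the same re-projection each frame.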
This allows the wearer to view the data they require in proximity to the asset they are working on, and it works well when many assets are in view, each with its own relevant information to display. It also allows the wearer to have far more information pinned around them, increasing the amount of information they can consume. Finally, when combined with 3D graphics it enables truly augmented experiences in which the wearer sees realistic-looking graphics overlaid on a real-world object. This can be particularly useful for training.
On the other hand, this option can be slower to roll out at scale, is more immersive (which in many applications is a drawback – see my articles on this topic) and requires much more sophisticated glasses. Poor execution can lead to “jumpy” graphics as the glasses struggle to keep the virtual items locked to their real-world anchors, which can be off-putting.
Heads-Up Display
“Heads-up display” refers to a display system in which an image or information is located in a fixed position relative to the wearer's vision. This might be directly in the centre of their vision (for critical information such as instructions they need to read or an image they need to look at), or at the edge of their vision (for peripheral information, such as a live feed from a meter, which they need to be aware of but not focus on). If the glasses or headset wearer turns their head, the text, image or video moves with them, remaining in the same part of their vision at all times.
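The contrast with pinned content can be shown in the same simplified 2D terms: a head-locked HUD element has a constant offset in the wearer's view space, and it is the element's world position that gets recomputed from the head pose each frame. Again, this is an illustrative sketch with assumed conventions (compass-style yaw, +z forward), not a real rendering API.

```python
import math

def hud_world_position(view_offset, head_pos, head_yaw):
    """World position of a head-locked HUD element.

    The element's offset is constant in the wearer's view space (e.g.
    slightly below centre); each frame that fixed offset is re-projected
    into world space using the current head pose, so the element follows
    every head turn and always occupies the same part of the wearer's vision.
    2D top-down simplification: x is right, z is forward at zero yaw.
    """
    vx, vz = view_offset
    c, s = math.cos(head_yaw), math.sin(head_yaw)
    # Inverse of the world-to-view rotation used for pinned content.
    return (head_pos[0] + c * vx + s * vz,
            head_pos[1] - s * vx + c * vz)

# A HUD panel fixed 1.5 m straight ahead in the wearer's view.
panel = (0.0, 1.5)
head = (0.0, 0.0)

at_start = hud_world_position(panel, head, 0.0)
# -> (0.0, 1.5): directly ahead in the world.
after_turn = hud_world_position(panel, head, math.radians(90))
# -> (1.5, 0.0): the panel has swung around with the head, staying dead ahead
#    in the wearer's vision while its world position changes.
```

Note the symmetry with the pinned case: pinned content is constant in world space and recomputed in view space; HUD content is constant in view space and recomputed in world space.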
This has the advantage of being the easiest to see and control, as the centre of our vision is where we focus best. Many believe this option is less distracting: the wearer already needs to be aware of their surroundings, and putting the augmented information they require front and centre allows them to focus. It is also clear to the wearer that the information is not part of the real world, which reduces the risk of confusion caused by immersion. Finally, the relevant information is always in the wearer's vision, rather than moving out of view as can happen with a pinned-location display.
The drawbacks are that it is less immersive, and so unsuitable for many training applications. It also limits the quantity of information available to the wearer. Finally, because the information does not move with the asset, there is a risk that the wearer reads information that does not correspond to the asset they are actually looking at.
We receive many queries from organisations who would like to work with us on their ideas for Augmented Reality (AR) apps. However, some of these applications are more suitable for Virtual Reality (VR), so we thought it would make sense to set out our thoughts in this post.
Immersive vs Non-Immersive vs Minimal-Immersion Experiences
In many cases, people hear AR but think VR. VR is an immersive technology whereby the headset wearer feels they are in a different place or world. In a business context, this can be very useful for training, as it allows an employee to be trained in a scenario, situation or on a piece of equipment that is hard to access in the real world.
We believe, however, that AR works best when a non-immersive or minimally immersive experience is required. When providing an operator with information to help them get their job done, we design on the principle that the operator should remain present in the real world, so a non-immersive experience is best. In other words, if we present the operator with data, text, images or video to assist their work, they should easily be able to tell that the injected information is not a real part of their surroundings, so it never confuses them about what is real and what is not.
In some cases where training on a piece of equipment is required, AR can be used to give the trainee the impression that the equipment is in a different state than it actually is (imagine showing them that a filter is dirty, or that a warning light is red, when it is not). We call this minimal immersion, as the trainee sees only a small part of their surroundings in an altered state. In general, they remain fully aware of their surroundings.
To sum up, our approach at UtilityAR when deciding between AR and VR is to ask: “How much immersion are you looking for in this application?” If the answer is that it should be a fully immersive experience, go with VR. If you want the user to remain grounded in the location they are actually in, then AR is the solution for you.