Augmented Reality Research
Introduction
In this research project we investigate the effectiveness of browser-based Augmented Reality for placing interactive, life-size objects in an AR scene that can be viewed through a mobile device's built-in browser.
AR is a hot topic in mobile development, with both major mobile operating systems (iOS and Android) shipping built-in AR support that is under rapid development.
The iOS and Android AR libraries (ARKit and ARCore, respectively) are intended to be used within apps. However, the need to download an app in order to access an AR experience is a source of friction for casual, one-off uses - users won't download a specific app just to view and interact with an AR object a brand wants to display.
If, on the other hand, all that is required is a relatively recent mobile device with a camera, then this friction is eliminated - the user simply visits the brand's website, grants permission to use the camera, and the AR object is displayed in-browser.
AR.js, a JavaScript library with a large, active developer community, provides exactly what we need: a cross-browser augmented reality experience.
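As a minimal illustration of the approach, a marker-based scene can be declared almost entirely in HTML using the A-Frame build of AR.js. The script URLs and versions below are assumptions - check the AR.js documentation for current builds:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame first, then the A-Frame build of AR.js (URLs/versions are assumptions) -->
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
  </head>
  <body style="margin: 0; overflow: hidden;">
    <a-scene embedded arjs>
      <!-- When the built-in Hiro marker is detected, a red box is superimposed on it -->
      <a-marker preset="hiro">
        <a-box position="0 0.5 0" material="color: red;"></a-box>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
```

Served over HTTPS, this page asks for camera access and renders the box whenever the Hiro marker is in view - no app install required.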
Technological uncertainties
However, there are some limitations:
- Limited access to built-in hardware: because the library must work cross-platform, and on a wide variety of devices, it cannot rely on relatively advanced hardware features such as depth sensors or full access to built-in motion sensors.
- Cannot detect surfaces: as surface detection relies on depth sensors, in-browser AR can only use markers (image or barcode) or GPS to place objects within an AR scene. In practice, the library identifies a position within the scene - from GPS coordinates or detection of a marker - and superimposes an object on the camera's video feed at that position.
- Slow on older devices: the processing power required to superimpose relatively complex 3D objects on a video feed in real time as the mobile device moves is substantial, and leads to flickering and instability on older devices.
- Poor documentation - as a relatively new JavaScript library, AR.js has sparse documentation.
- Reliance on the stability of other JavaScript libraries - AR.js builds on Three.js and A-Frame, both mature libraries under active development. However, these moving dependencies can introduce breaking changes that make progress inconsistent.
- Spotty support for Natural Feature Tracking - using features in the environment, rather than the standard black-and-white markers, as anchors for AR objects is possible, but very difficult to achieve with stable, consistent results. A worked example demonstrates our continuing research.
- Detecting hotspots - interaction with objects is surprisingly difficult to achieve, and was our first area of research
- Persistence of AR objects - consistent placement of an object in an AR scene when markers go out of view. This is perhaps the most difficult problem to overcome, and our research is ongoing
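To make the persistence problem concrete: the A-Frame build of AR.js can emit `markerFound` and `markerLost` events (with `emitevents="true"` set on the `<a-marker>`), and once the marker is lost there is no longer an anchor against which to update the object's pose. A minimal sketch of a best-effort response - remembering the last known pose so the object can be frozen in place - using hypothetical helper names of our own, not part of the AR.js API:

```javascript
// Track whether the marker is currently visible, and the last pose we saw.
function createAnchorState() {
  return { visible: false, lastPose: null };
}

// Called from a 'markerFound' listener: keep updating the pose while tracked.
function onMarkerFound(state, pose) {
  state.visible = true;
  state.lastPose = pose;
  return state;
}

// Called from a 'markerLost' listener: lastPose is retained, so the object
// can stay where it was last seen - but it can no longer follow the camera.
function onMarkerLost(state) {
  state.visible = false;
  return state;
}
```

Wiring these into `marker.addEventListener('markerFound', ...)` and `'markerLost'` gives crude persistence; the unsolved part is re-anchoring the frozen pose as the camera continues to move.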
Prerequisites
The following examples demonstrate the key aspects of in-browser AR.
Note: please make sure you are on the https version of this page before proceeding. Ensure that you have given permission for camera and motion/orientation access.
- Print this Hiro Marker
- Marker based AR - load this on a device and point at the marker you just printed
- Print this Image
- Image based AR - load this on a device and point at the image you just printed
- Simple animation - once a Hiro marker has been found, an animation is displayed
- Recognising touch gestures - once a Hiro marker has been found, interact with the model using touch gestures (note, this is not hotspot detection)
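The touch-gesture example boils down to translating finger movement into model rotation. A minimal sketch of that mapping, using hypothetical helper names of our own (in a real page, the accumulated angle would be applied to the A-Frame entity inside a `touchmove` handler via `setAttribute('rotation', ...)`):

```javascript
// Convert a horizontal drag (in pixels) into a yaw change (in degrees).
// sensitivity is degrees of rotation per pixel of drag (0.5 is an assumption;
// tune to taste).
function dragToRotation(deltaXPixels, sensitivity = 0.5) {
  return deltaXPixels * sensitivity;
}

// Per-gesture state: last touch position and accumulated rotation.
function createGestureState() {
  return { lastX: null, yawDegrees: 0 };
}

// Feed in the x coordinate of each touchmove event; returns the current yaw.
function applyDrag(state, touchX) {
  if (state.lastX !== null) {
    state.yawDegrees += dragToRotation(touchX - state.lastX);
  }
  state.lastX = touchX;
  return state.yawDegrees;
}
```

Dragging 20 pixels to the right at the default sensitivity rotates the model 10 degrees; resetting `lastX` to `null` on `touchend` prevents a jump when the next gesture starts.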