As the dust settles after AWE2014, I finally have some time to survey the scene and reflect on what we can really take away from this year’s conference – the biggest and best-attended event for augmented reality (AR).

This will be a two-part series looking at two of the most exciting technologies showcased at this year’s conference: RGBD cameras and wearable displays.  What makes these technologies exciting is just how readily available they are becoming and how they are driving many of the trends we are seeing in the AR market as a whole. RGBD cameras and wearables are especially exciting for NGRAIN and the industrial AR and field-based support areas.

RGBD Cameras

Also known as 3D cameras or depth cameras, these are devices that provide not only a colour value per pixel (red, green, blue) but also a distance from the camera (depth).   The Xbox Kinect is likely the best-known such device.  We saw the Creative Senz3D driving many of the Intel RealSense demos, and attached to an Oculus Rift for gesture recognition in VR at SoftKinetic’s booth.  We also saw the Occipital Structure Sensor, an RGBD camera for your mobile device, and Meta’s offerings, which add one directly onto wearable glasses.  RGBD cameras definitely had a strong presence at the show, but why is this exciting for industrial AR applications?
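To make the idea concrete, here is a minimal sketch of what an RGBD frame carries. The `RGBDPixel` type and `make_frame` helper are hypothetical illustrations, not any real SDK’s API (real sensors return packed arrays, not objects per pixel):

```python
from dataclasses import dataclass

# Hypothetical sketch: each pixel of an RGBD frame carries colour plus depth.
@dataclass
class RGBDPixel:
    r: int           # red channel, 0-255
    g: int           # green channel, 0-255
    b: int           # blue channel, 0-255
    depth_mm: float  # distance from the camera to the surface, in millimetres

# A frame is just a 2D grid of such pixels.
def make_frame(width, height, fill=RGBDPixel(0, 0, 0, 0.0)):
    return [[fill for _ in range(width)] for _ in range(height)]

frame = make_frame(4, 3)  # a tiny 4x3 "sensor"
```

The depth channel is what separates these devices from ordinary cameras: every pixel tells you not just what colour a surface is, but how far away it is.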

Tracking

Tracking is how the device running the AR application knows where it is in the physical world, or at least relative to the object you are trying to augment.

Early AR applications used GPS to determine the device’s location, which is only satisfactory outdoors and only where no real accuracy is needed.  On the other end of the spectrum, early AR applications attached markers to the scene.  Then, using the standard video camera present in most devices and some computer vision algorithms, the system could determine where it was in the world relative to those markers.  This allows for much greater accuracy and lets users work indoors, but now they have to attach all of these markers to the scene and calibrate the system before they can run the application.
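A simplified example of why markers help: because a marker’s physical size is known, its apparent size in the image gives the camera’s distance under the pinhole model. This is a toy sketch of that one relationship, not a full pose estimator (real systems recover full 6-DOF pose from the marker’s corners):

```python
# Pinhole model: apparent size shrinks with distance, so
#   distance = focal_length_px * real_size / apparent_size_px
def distance_from_marker(focal_length_px, marker_size_m, marker_size_px):
    return focal_length_px * marker_size_m / marker_size_px

# A 0.10 m marker spanning 50 px, seen by a camera with a 500 px focal
# length, sits about 1 m from the camera.
d = distance_from_marker(500.0, 0.10, 50.0)
```

This is exactly the kind of absolute measurement GPS cannot provide indoors, and it is why marker tracking is so much more accurate.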

Recent advancements have come from the world of robotics in the form of Simultaneous Localization and Mapping (SLAM) algorithms. As the name suggests, SLAM uses natural features in the scene to track its location while also adding new points to its map as they are discovered.  This is how Mars rovers navigate an unknown Martian landscape, and how Google’s self-driving car can take you to the drive-thru.  Since it uses natural features in the scene, you can track without adding markers to your equipment.  This is really cool technology, but it still suffers from stability and accuracy issues that mean it isn’t ready for wide deployment in industrial AR experiences on current-generation devices.  The added depth information from RGBD cameras will greatly improve SLAM’s tracking stability and accuracy.
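The localize-and-map loop can be illustrated with a toy 1-D example. Real SLAM is probabilistic (extended Kalman filters, particle filters, or graph optimisation); this hypothetical sketch only shows the two interleaved steps the name refers to:

```python
# Toy 1-D SLAM step: localise against landmarks already in the map,
# then add newly observed landmarks relative to the updated pose.
def slam_step(pose, landmark_map, observations):
    # observations: {landmark_id: measured offset from the camera}
    estimates = [landmark_map[lid] - offset
                 for lid, offset in observations.items()
                 if lid in landmark_map]
    if estimates:
        # Localization: average the pose implied by each known landmark.
        pose = sum(estimates) / len(estimates)
    for lid, offset in observations.items():
        if lid not in landmark_map:
            # Mapping: place newly discovered landmarks in the map.
            landmark_map[lid] = pose + offset
    return pose

world = {"door": 5.0}                  # one landmark known in advance
pose = slam_step(0.0, world, {"door": 3.0, "pillar": 7.0})
# The door (at 5.0) measured 3.0 away puts us at 2.0,
# and the newly seen pillar is mapped at 2.0 + 7.0 = 9.0.
```

Depth data helps both halves of this loop: landmark positions are measured directly rather than inferred from parallax across many frames.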

This technology is very exciting for us in the field-based performance space. Overlaying a 3D model of industrial equipment on top of the physical piece of equipment requires a very high level of accuracy and stability that has been very hard to achieve with current technology. Now, with RGBD cameras becoming readily available, we hope to see the new levels of fidelity and stability that will support general use and widespread deployment.

Content Creation

A recurring concern in industrial AR development is access to the CAD data or an accurate 3D model of the equipment to be used in an AR application.  CAD data often contains proprietary IP, so OEMs are often unwilling to share the source models.  Hiring modellers with the necessary skills to create an accurate model can be prohibitively expensive in both time and money.

SLAM’s other interesting benefit is that, under the hood, it essentially creates an internal 3D model of the world it’s looking at (the Mapping half of its name).  We can leverage this to quickly generate 3D models of objects.

A few applications have attempted to use a standard video camera to quickly generate a rough 3D model – a process closely related to SLAM. RGBD cameras bring the same accuracy and stability improvements here, ultimately producing more accurate models.
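The core of depth-based scanning is back-projecting each depth pixel into a 3D point. Here is a minimal sketch, assuming known camera intrinsics (`fx`, `fy`, `cx`, `cy`) from calibration; real pipelines then fuse many such clouds into one model:

```python
# Back-project a depth image into a 3D point cloud using the pinhole model.
# fx, fy: focal lengths in pixels; cx, cy: principal point (image centre).
def depth_to_points(depth, fx, fy, cx, cy):
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # zero depth usually means "no reading" on these sensors
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points.append((x, y, z))
    return points

# A tiny 2x2 depth image with two valid readings, 1 m from the camera:
pts = depth_to_points([[0.0, 1.0], [1.0, 0.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Each frame yields a partial surface of the object; scanning applications accumulate these as the camera moves around the part, which is where the SLAM-style tracking described above comes back in.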


When overlaying the outline of parts of the equipment, as we do in our AR Player application, textural detail is not important.  The only thing of any real importance is that the size and shape of the 3D model match the real part.  That is exactly what you get from a 3D model created by scanning the actual part, all for a fraction of the cost of modelling it in the classical fashion!

Real-Virtual Object Occlusion

Another benefit of the depth information provided by RGBD cameras is that real-world objects can now occlude virtual objects: the virtual 3D model can be partially hidden by an object in the real world.
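Occlusion reduces to a per-pixel depth test. This hedged sketch shows the decision for a single pixel, assuming we have both the sensor’s measured depth and the renderer’s depth for the virtual object at that pixel:

```python
# Depth-based occlusion: draw the virtual pixel only if the virtual surface
# is closer to the camera than the real surface the RGBD sensor measured.
def composite_pixel(real_rgb, real_depth, virtual_rgb, virtual_depth):
    if virtual_rgb is not None and virtual_depth < real_depth:
        return virtual_rgb   # virtual object is in front: show it
    return real_rgb          # real object occludes the virtual one

# A real hand at 0.5 m hides a virtual part rendered at 0.8 m:
shown = composite_pixel((200, 180, 160), 0.5, (0, 120, 255), 0.8)
```

Without a depth camera there is no `real_depth` to compare against, which is why ordinary AR overlays always float in front of everything.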

Although this may not have a great benefit in the short term, I personally think it is a really interesting effect that will make AR applications far more engaging across the board.

Gesture Recognition

With many AR applications moving towards hands-free devices like glasses, head-mounted displays (HMDs), and (my personal favorite) projection systems, an interesting problem arises.  If I’m holding a touch screen, or am in front of a keyboard, it’s obvious how I interact with the system.  But what do I interact with when I’m using a hands-free display?  How do I start an operation or advance to the next step in a procedure?

One option is to use gestures like pointing, swiping, giving a thumbs up, and opening or closing your hand as inputs to the system.  Again, this has been attempted with some success using standard video cameras, but it is often difficult for the system to distinguish your hand from the rest of the background in the frame.  With the added depth information, the system knows what is in the foreground and what is in the background, making it far easier to track hand gestures.
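The foreground/background split that depth enables can be sketched very simply: keep only pixels closer than some threshold. This is a toy illustration (the 0.6 m cutoff is an arbitrary assumption, and real hand trackers do much more on top of this mask):

```python
# Segment the foreground (e.g. a hand near the camera) from a depth frame
# by keeping only pixels with a valid reading closer than max_depth_m.
def foreground_mask(depth, max_depth_m=0.6):
    return [[1 if 0 < d < max_depth_m else 0 for d in row] for row in depth]

# Left column is a hand at ~0.4-0.5 m; right column is the wall and a dropout.
mask = foreground_mask([[0.4, 1.2], [0.5, 0.0]])
# mask == [[1, 0], [1, 0]]
```

Compare this one-line test with colour-based segmentation, which has to cope with skin tones, lighting, and cluttered backgrounds: the depth channel does most of the hard work for free.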

I’ll go into more detail on gestures in my next post in this series on wearable displays.

As you can see, RGBD cameras are poised to have a huge impact on the AR world at large.  This is especially true for the industrial AR and field-based support areas, where accuracy and fidelity are paramount.  At NGRAIN, we’re currently prototyping and hacking away on these types of technologies and looking for new ways to improve our product offerings.  These are exciting times indeed.