Author(s): Geoff Blaber
AR has been promising to go mainstream for years, but as soon as an app or concept launches, attention fades, only to be inflated again by the next big thing. Pokémon Go may still be the best mass-market example, but there AR is more eye candy than an experience-defining technology. Google's Tango platform is unquestionably powerful, but with only two compatible devices available, it has a long way to go.
The recent Augmented World Expo (AWE) 2017 in Santa Clara saw participation double from 2016 to over 5,000 attendees, and it underlined the potential of the full spectrum from AR to VR (which I'll refer to here as extended reality). Although the event revealed countless uses for the technology and reassuring signs of progress, it also illustrated how much still needs to be achieved before AR builds scale beyond industry verticals.
Apple's announcement of ARKit is what AR badly needs. With a sizeable addressable market of iPhones and iPads powered by an A9 or A10 chip (that is, the iPhone 6S and newer models), the platform offers developers immediate scale and an incentive to invest.
However, this is just one step of the journey. AR has clear scope to evolve into form factors such as a heads-up display and, ultimately, a head-worn device. ODG is arguably closest to delivering this vision, but, like Google Glass, it faces the enormous hurdle of gaining consumer acceptance.
Nonetheless, this is where the real potential of the technology lies. Although AR and VR are widely considered to serve different uses, I believe they will ultimately merge. In this scenario, a single head-worn device would be able to seamlessly switch between an opaque screen for VR and a transparent one for AR applications. Such a device could become a converged solution that complements and potentially even replaces smartphones, depending on the context.
This is a grand vision for the next decade, but there are some considerable technical challenges to overcome. Qualcomm's vice president of product management, Tim Leland, highlighted some of them, calling for wider industry collaboration during a keynote session at AWE 2017.
Display technology and software need to advance significantly. The field of view must increase, to offer at least 190 degrees horizontally and 130 degrees vertically. The industry also needs to solve the fatigue caused by the vergence-accommodation conflict: the eyes converge on a virtual object at one apparent depth while focusing on a screen at a fixed distance. A possible solution is to present images to the user's eyes on multiple focal planes simultaneously. Perhaps the most challenging aspect is enabling displays that can switch between largely transparent operation for AR and opaque mode for VR. Brightness and refresh rate will also need to improve by orders of magnitude. Any discussion of 4K displays on mobile devices may seem like overkill today, but place such a screen centimetres from your eyes and the limitations of lower resolutions become apparent.
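A rough back-of-the-envelope calculation makes the resolution problem concrete. Assuming roughly 60 pixels per degree, the figure commonly cited as matching 20/20 visual acuity (an illustrative assumption; real requirements vary with optics and eye position), the field of view above demands far more than 4K:

```python
# Rough estimate of the resolution a wide-field headset display would need.
# Assumes ~60 pixels per degree, the figure commonly cited as matching
# 20/20 visual acuity; actual requirements vary with optics and eye position.

PIXELS_PER_DEGREE = 60  # approximate limit of 20/20 acuity

def required_resolution(h_fov_deg, v_fov_deg, ppd=PIXELS_PER_DEGREE):
    """Return the (horizontal, vertical) pixel counts needed to cover
    the given field of view at the given angular pixel density."""
    return h_fov_deg * ppd, v_fov_deg * ppd

h, v = required_resolution(190, 130)
print(f"Needed: {h} x {v} pixels")    # 11400 x 7800
print("4K UHD offers: 3840 x 2160")   # well below the target
```

Even under these generous assumptions, a 4K panel covers less than a third of the horizontal pixels the full field of view would require.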
There's also the challenge of realism. Realistic application of light, shade and reflection is what makes objects look natural rather than fake. This is difficult to achieve as the final colour of every pixel must be defined and then updated in real time based on movement and light source. This needs cross-industry collaboration to create new APIs for interaction between cameras, sensors, graphics rendering engines and more.
Motion tracking is perhaps the area making the most progress, providing six degrees of freedom and inside-out tracking that removes the need for external sensors. A critical target is motion-to-photon latency of less than 10 milliseconds. This is the time it takes for a virtual scene to correctly change to account for the user's head movement; when it is too long, it is the main cause of nausea. Eye-tracking features will be crucial for foveated rendering, depth of field and increased accuracy in targeting and interaction.
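To see how tight a sub-10-millisecond motion-to-photon budget is, it helps to account for each stage of the pipeline. The per-stage timings below are illustrative assumptions for the sake of the arithmetic, not measured figures from any device:

```python
# Illustrative motion-to-photon budget. The per-stage timings are
# assumptions chosen for the arithmetic, not measured values.
BUDGET_MS = 10.0

pipeline_ms = {
    "sensor read and fusion": 1.0,   # IMU sampling and pose estimation
    "pose prediction":        0.5,   # extrapolating head pose to display time
    "render":                 5.0,   # GPU frame time
    "reprojection/compose":   1.5,   # late correction with the newest pose
    "display scan-out":       2.5,   # panel refresh and pixel response
}

total = sum(pipeline_ms.values())
for stage, ms in pipeline_ms.items():
    print(f"  {stage}: {ms} ms")
print(f"Total: {total:.1f} ms against a {BUDGET_MS:.0f} ms budget")
print("Over budget!" if total > BUDGET_MS else "Within budget")
```

Note that a single frame at 90 Hz already takes about 11 milliseconds, so even plausible stage timings overshoot the budget; this is why techniques such as pose prediction and late reprojection matter so much.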
The final ingredient for success is connectivity. The vision of extended reality can only be realized with the high throughput and low latency that 5G promises. This is difficult to imagine today, but there are exciting developments emerging, such as the ability to stream interactive video content in 8K HDR at 120 frames per second directly to extended reality headsets. This could transform the way people watch sports, movies and other video content, but it puts considerable strain on the network. For companies focussed on delivering extended reality, it's not about developing uses for 5G, but ensuring networks are ready with sufficient capacity and consistency of experience even at the cell edge.
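Some rough arithmetic shows why such a stream strains the network. The compression ratio below is an illustrative assumption; real bit rates depend heavily on the codec and the content:

```python
# Rough bandwidth estimate for an 8K HDR stream at 120 frames per second.
# The compression ratio is an illustrative assumption; real codec
# efficiency varies widely with content.

WIDTH, HEIGHT = 7680, 4320        # 8K UHD
FPS = 120
BITS_PER_PIXEL = 30               # 10-bit HDR across three colour channels

raw_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"Uncompressed: {raw_bps / 1e9:.0f} Gbit/s")

COMPRESSION_RATIO = 400           # assumed; codec- and content-dependent
compressed_bps = raw_bps / COMPRESSION_RATIO
print(f"At {COMPRESSION_RATIO}:1 compression: {compressed_bps / 1e6:.0f} Mbit/s")
```

Even after aggressive compression, a single stream of this kind would demand hundreds of megabits per second, sustained and with low latency, which is well beyond what today's networks reliably deliver at the cell edge.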
All this must be delivered in a package with high power and thermal efficiency. Microprocessors based on architectures designed for PCs are too power-hungry for extended reality devices. Even the level of heat that smartphones create today won't be acceptable in a glasses-style form factor worn on a person's head. Battery capacity will also have to improve considerably to become adequate for a head-worn device.
This is a long, but by no means exhaustive, list of requirements for extended reality. Products such as ODG's R8 smart glasses show how far the industry has come. However, this is just the first of many leaps needed over the next decade and beyond to deliver extended reality devices and experiences that could eventually create a multibillion-dollar market. The last 10 years of mobile technology proved the value of partnerships and ecosystems. The next decade will need an unprecedented level of cooperation, but the impact could be no less transformative.
A version of this article was first published by FierceWireless on 15 June 2017 and can be accessed here.