Immersive modes, such as Augmented Reality and Mixed Reality headsets, have the power to revolutionize how we work, play, teach, learn and shop. Enterprise already offers solutions for specific AR tasks in engineering, manufacturing, design, health care, architecture, retail and gaming; the return on investment is mainly cost avoidance (shorter learning cycles, fewer errors, better communication, higher productivity and yields, etc.).
However, most actors involved in developing the AR ecosystem (from hardware to app development platforms to apps and content) agree that it will take a long time (5 to 10 years) for hardware to reach the comfort level required for mass consumer adoption.
Some of the hardware issues to solve, specifically from an optical engineering point of view, are:
• Wider FOV and higher resolution through active foveation
• Vergence Accommodation Conflict (VAC) mitigation through varifocal, multifocal, light field or true holographic display
• Pixel occlusion for HDR for more “realistic” holograms
• Higher brightness over a decent eye box for outdoor use (lower-power, higher-brightness/contrast displays and highly efficient optics)
• More accurate, lower-power, more compact IR and visible sensors (sensor hardware fusion: head tracking, eye tracking, gesture tracking, 3D scanning, multispectral)
There are many other challenges on the way to the ultimate consumer AR experience (such as overall center of gravity (CG), size and weight, battery life, heat dissipation, 5G connectivity for cloud rendering, etc.) which we will not discuss today.
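The first bullet above (higher resolution and wider FOV through active foveation) can be put in rough numbers. A back-of-the-envelope sketch, assuming the well-known figure of about 60 pixels per degree for 20/20 foveal acuity; the FOV and peripheral-density values are illustrative assumptions, not figures from this AMA:

```python
# Back-of-the-envelope: why active foveation matters for AR displays.
# Assumed numbers: 20/20 acuity ~ 60 pixels/degree, a hypothetical
# 100 x 80 degree target FOV, and a reduced peripheral pixel density.

FOV_H, FOV_V = 100, 80    # degrees (assumed target FOV)
FOVEAL_PPD = 60           # pixels per degree matching 20/20 acuity
PERIPHERAL_PPD = 15       # assumed density outside the fovea
FOVEA_DEG = 10            # central region rendered at full density

# Uniform display: full acuity everywhere.
uniform_pixels = (FOV_H * FOVEAL_PPD) * (FOV_V * FOVEAL_PPD)

# Foveated display: full density only in the central region.
foveal_pixels = (FOVEA_DEG * FOVEAL_PPD) ** 2
peripheral_pixels = (FOV_H * PERIPHERAL_PPD) * (FOV_V * PERIPHERAL_PPD) \
    - (FOVEA_DEG * PERIPHERAL_PPD) ** 2
foveated_pixels = foveal_pixels + peripheral_pixels

print(f"uniform:  {uniform_pixels / 1e6:.1f} Mpx")   # 28.8 Mpx
print(f"foveated: {foveated_pixels / 1e6:.1f} Mpx")  # 2.1 Mpx
print(f"savings:  {uniform_pixels / foveated_pixels:.1f}x")
```

The roughly order-of-magnitude pixel savings is what makes eye-tracked foveation attractive for both display and rendering budgets.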
If you would like more information outside of this AMA, I will be at SPIE Photonics Europe in Strasbourg, France next month for the Digital Optics for Immersive Displays conference. You can also take my free course “An Introduction to VR, AR, MR and Smart Eyewear: Market Expectations, Hardware Requirements and Investment Patterns” on the SPIE Digital Library. It was recorded live at SPIE Photonics West in January. Enjoy!
Like a significant portion of the population, I suffer from simulator sickness, which is a significant issue for widespread adoption of Mixed Reality and especially Virtual Reality. What advice do you have for simulator sickness sufferers?
VR nausea (simulator sickness) is a very common problem with VR headsets, much less so with AR headsets.
In your mind, where does AR most fall short? And what significant technical barrier remains to improving that experience/technology?
There are many barriers to mass adoption of AR (and MR) today. Basically, they can be split into two categories: content and hardware. I will mainly speak about hardware here.
When can we move beyond fresnel?
Fresnel is an ambiguous term: it can refer to "refractive" Fresnel lenses or to "diffractive" Fresnel structures.
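To illustrate the "diffractive" sense: a Fresnel zone plate focuses light by diffraction rather than refraction, with zone radii given by the standard relation r_n = sqrt(n·λ·f). A minimal sketch; the wavelength and focal length are assumed example values, not figures from the AMA:

```python
import math

# Fresnel zone plate: the n-th zone boundary sits at r_n = sqrt(n * lam * f),
# so alternating zones interfere constructively at the focal point.
# Assumed numbers: green light (532 nm), 50 mm focal length.

lam = 532e-9   # wavelength in metres (assumed)
f = 0.05       # focal length in metres (assumed)

def zone_radius(n):
    """Radius of the n-th Fresnel zone boundary, in metres."""
    return math.sqrt(n * lam * f)

for n in range(1, 5):
    print(f"zone {n}: r = {zone_radius(n) * 1e6:.1f} um")
```

Note how the zone spacing shrinks toward the edge; that fine pitch is what makes the element diffractive, and also what makes such optics strongly wavelength-dependent.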
What optics courses are most relevant to students seeking jobs in this field? When you're reviewing applications to join your team, are you mostly focused on technical qualifications or are there soft skills that rank higher?
Optics courses: Of course, traditional optics classes on the theory of aberrations are needed, but supplemented with nanophotonics and mechanical engineering (opto-mechanics) classes, they can be very relevant to today's needs in the AR/VR industry.
I saw a headline about direct image painting on the retina with a tiny laser. Is that anywhere in the research path or a far off scifi idea?
Retinal imaging is an old concept, first introduced by the military for various reasons.
Which of the problems that you outlined would you say are easiest to solve and which most difficult?
The problems outlined are many, but the main optical ones are as follows: 1) size and weight, 2) center of gravity (CG), 3) efficiency of the combiner optics, 4) eyebox size, 5) optical foveation, 6) large FOV, 7) VAC (Vergence Accommodation Conflict) mitigation, 8) pixel occlusion, etc.
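Item 7, VAC mitigation, can be made concrete with a quick sketch. On a fixed-focus headset the eyes converge on the virtual object while accommodation stays locked at the display's focal plane; the mismatch is conventionally measured in diopters (1/distance in metres). The 2 m focal plane and 64 mm interpupillary distance below are assumed example values, not figures from the AMA:

```python
import math

# Vergence-accommodation conflict (VAC) on a fixed-focus headset:
# the eyes converge at the virtual object's distance, but the display
# forces focus (accommodation) at a single fixed focal plane.
# Assumed numbers: 64 mm IPD, 2 m fixed focal plane.

IPD = 0.064        # interpupillary distance in metres (assumed)
FOCAL_PLANE = 2.0  # fixed display focal distance in metres (assumed)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight when fixating at distance_m."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

def vac_diopters(virtual_distance_m):
    """Mismatch between convergence distance and focal plane, in diopters."""
    return abs(1 / virtual_distance_m - 1 / FOCAL_PLANE)

for d in (0.3, 0.5, 1.0, 2.0, 10.0):
    print(f"object at {d:4.1f} m: vergence {vergence_angle_deg(d):5.2f} deg, "
          f"VAC {vac_diopters(d):.2f} D")
```

The mismatch is zero only at the focal plane and grows quickly for near objects, which is why varifocal, multifocal, light-field and holographic displays all aim to move (or multiply) that focal plane.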
Is there an in-development / upcoming consumer hardware setup you find particularly interesting for some reason? I'm assuming this would likely be a more revolutionary product or one incorporating a new technology capability vs. simply a new product only offering an incremental improvement (but perhaps I'm wrong?).
There are many amazing developments out there today which are very exciting, both on the basic technology blocks and on the overall architecture.
I always think about adaptive technologies largely because a lot of work I do is for accessibility, mainly web content. The concept of VR/AR intrigues me though as a way to bypass possible hindrances to content or experiences but I know that just getting hardware and interfaces developed for a typical sighted or hearing experience is a current focus. Is expanded accessibility a topic you explore or consider during development or plan to at some point?
This is a very important aspect for next generation AR and MR systems.
Here's a more pop culture-focused question, especially seeing as a much-hyped movie adaptation is coming out in under 2 weeks:
Have you happened to read, or are you otherwise familiar with, the (more near-ish term) sci-fi book Ready Player One? In it, there are some really interesting (if technically shallow) descriptions of VR (and a little bit of AR) hardware and how environments might make use of it to encourage large numbers of people to interact in a world with much more computing power.
Do you have any observations or thoughts about the depictions of the way VR / AR hardware is likely to advance, requisite computing / simulated environments (architectures, challenges) and the way they're designed and managed, or related social changes or issues depicted by the book? Anything you felt the author just missed the ball on?
I remember especially the movies "The Lawnmower Man" and "Minority Report," which made the public dream about VR and AR.
This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.