Sensor fusion and motion prediction
A major technical challenge for VR is making head tracking as fast and accurate as possible. The metric that matters is called motion-to-photon latency: for mobile VR, this is the time it takes for a user's head rotation to be fully reflected in the rendered content.
The simplest way to get up and running with head tracking on the web today is to use deviceorientation events, which are generally well supported across most browsers. However, this approach suffers from several drawbacks, which can be remedied by implementing our own sensor fusion. We can do even better by predicting head orientation from the gyroscope.
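As a rough sketch of the deviceorientation approach (helper names are mine, not the polyfill's API): per the DeviceOrientation spec, the event reports intrinsic Z-X'-Y'' Tait-Bryan angles alpha, beta, gamma in degrees, which we can compose into a single orientation quaternion.

```javascript
// Convert deviceorientation Euler angles (degrees) to a quaternion.
// Per the spec, alpha/beta/gamma are intrinsic Z-X'-Y'' rotations,
// so the composed quaternion is q = qZ(alpha) * qX(beta) * qY(gamma).

const DEG2RAD = Math.PI / 180;

function quatFromAxisAngle(x, y, z, angleRad) {
  const half = angleRad / 2;
  const s = Math.sin(half);
  return { x: x * s, y: y * s, z: z * s, w: Math.cos(half) };
}

function quatMultiply(a, b) {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}

function orientationToQuaternion(alphaDeg, betaDeg, gammaDeg) {
  const qZ = quatFromAxisAngle(0, 0, 1, alphaDeg * DEG2RAD);
  const qX = quatFromAxisAngle(1, 0, 0, betaDeg * DEG2RAD);
  const qY = quatFromAxisAngle(0, 1, 0, gammaDeg * DEG2RAD);
  // Intrinsic Z-X'-Y'' composition.
  return quatMultiply(quatMultiply(qZ, qX), qY);
}

// In the browser, feed each event's angles into the converter:
// window.addEventListener('deviceorientation', (e) => {
//   const q = orientationToQuaternion(e.alpha, e.beta, e.gamma);
//   // ...use q as the head orientation for rendering.
// });
```

Note that the browser does its own sensor fusion before firing these events, which is part of why a custom pipeline can do better.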
I'll dig into these techniques and their open web implementations. Everything discussed in this post is implemented and available open source as part of the WebVR Polyfill project. If you want to skip ahead, check out the latest head tracker in action, and play around with this motion sensor visualizer.
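To preview the prediction idea in code form (a minimal sketch under my own assumptions, not the polyfill's actual predictor): if the gyroscope says the head is rotating at some angular velocity, we can extrapolate the current orientation a few milliseconds into the future by assuming that rate holds.

```javascript
// Simple gyroscope-based prediction: given the current orientation
// quaternion q and the angular velocity from the gyroscope (rad/s),
// extrapolate the orientation lookaheadSeconds into the future by
// assuming the head keeps rotating at the current rate.

function predictOrientation(q, gyro, lookaheadSeconds) {
  const speed = Math.hypot(gyro.x, gyro.y, gyro.z); // rad/s
  if (speed < 1e-9) return { ...q }; // effectively stationary
  const angle = speed * lookaheadSeconds;
  const half = angle / 2;
  const s = Math.sin(half) / speed; // normalize axis and scale in one step
  const dq = { x: gyro.x * s, y: gyro.y * s, z: gyro.z * s, w: Math.cos(half) };
  // Apply the incremental rotation in the sensor (body) frame: q * dq.
  return {
    w: q.w * dq.w - q.x * dq.x - q.y * dq.y - q.z * dq.z,
    x: q.w * dq.x + q.x * dq.w + q.y * dq.z - q.z * dq.y,
    y: q.w * dq.y - q.x * dq.z + q.y * dq.w + q.z * dq.x,
    z: q.w * dq.z + q.x * dq.y - q.y * dq.x + q.z * dq.w,
  };
}
```

The lookahead is typically chosen to roughly match the motion-to-photon latency, so the frame lands on screen showing where the head will be rather than where it was.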