Boris Smus

interaction engineering

Copresence in WebVR

The web platform is uniquely well suited to networked copresence. To demonstrate, I built a multi-user chat prototype that uses peer-to-peer audio and data connections to create a shared virtual audio experience. Voices are spatialized based on the position and orientation of each participant (using Web Audio). You can also shrink and grow, which, in addition to changing your avatar's size, pitch-shifts your voice: large avatars have deep, god-like voices, while small ones start to sound downright mousy!
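
For the curious, here's a minimal sketch (not the demo's actual code) of how a remote peer's voice can be spatialized with Web Audio. It assumes `stream` is a MediaStream from a WebRTC peer connection, and that `peer` and `pose` are your own data structures; the size-based pitch shifting is omitted, since it requires a granular or phase-vocoder approach rather than a single built-in node:

```js
const ctx = new AudioContext();

// Route a remote peer's audio through a panner so it is heard from the
// peer's position in the scene.
function attachVoice(stream, peer) {
  const source = ctx.createMediaStreamSource(stream);
  const panner = ctx.createPanner();
  panner.panningModel = 'HRTF';
  panner.setPosition(peer.x, peer.y, peer.z);
  source.connect(panner);
  panner.connect(ctx.destination);
  return panner;
}

// Call once per frame with the local user's pose so that spatialization
// tracks head position and orientation.
function updateListener(pose) {
  ctx.listener.setPosition(pose.x, pose.y, pose.z);
  ctx.listener.setOrientation(
      pose.forward.x, pose.forward.y, pose.forward.z,
      pose.up.x, pose.up.y, pose.up.z);
}
```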

Check out the demo for yourself. It works on desktop (mouse to look around, spacebar to move), on mobile (magic window), and in VR (through the WebVR API, via the polyfill).

Continued →

Inspirata: for what inspires you

My site has a little section called Clippings. It's meant as a visual record of some of the things I've found inspiring on the web. How do I add new items to this visual record? Well, I'm glad you asked!

About a year ago, I cobbled together a Chrome extension for exactly this purpose: saving screen grabs from any web page. Releasing it on the Web Store has been on my back burner ever since. Over the last few weeks, I've spent a bit of time improving it, and today I'm ready to release it for broader testing. I call it Inspirata. It can be downloaded from the Chrome Web Store, and it works like this:

  1. Click the Inspirata icon button.
  2. Select part of the page to save.
  3. Enter an optional caption, et voilà!
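
I won't show Inspirata's actual source here, but a minimal sketch of the capture-and-crop idea behind an extension like this might look as follows. `chrome.tabs.captureVisibleTab` is the real Chrome API; the `selection` rectangle, `caption`, and `saveClipping` are illustrative stand-ins:

```js
// Illustrative inputs: the region the user selected and their caption.
const selection = { x: 100, y: 80, width: 400, height: 300 };
const caption = 'An inspiring header';

// Illustrative persistence stub; a real extension might use chrome.storage.
function saveClipping(dataUrl, caption) {
  console.log('Saved clipping:', caption, dataUrl.length, 'bytes');
}

// Capture the visible tab as a PNG data URL, then crop to the selection.
chrome.tabs.captureVisibleTab({ format: 'png' }, (dataUrl) => {
  const img = new Image();
  img.onload = () => {
    const canvas = document.createElement('canvas');
    canvas.width = selection.width;
    canvas.height = selection.height;
    // A real implementation should also account for devicePixelRatio.
    canvas.getContext('2d').drawImage(
        img,
        selection.x, selection.y, selection.width, selection.height,
        0, 0, selection.width, selection.height);
    saveClipping(canvas.toDataURL('image/png'), caption);
  };
  img.src = dataUrl;
});
```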

[Video: how Inspirata works]

Continued →

Browsing Wikipedia in VR

WebVR provides a solid technical foundation on which to build compelling VR experiences. But it does not answer a critical question, which is the topic of this post:

What could the web become in a Virtual Reality environment?

Gear VR provides a simple and straightforward answer: more of the same. The fundamental unit is still the page, but the immersion of VR is used to increase your effective screen size. The input constraints, however, make for a worse experience: scrolling with a finger on your temple is tiring, and head-based typing is a massive pain. Given those constraints, we need to beef up the output and better match it to what VR excels at. A responsive-design-inspired solution would deconstruct the page to better suit the nature of the immersive environment.

Another approach is to make a clean break from legacy web content. What if certain web pages had parallel content tailored for virtual reality? In this post, I'll explore this idea with an example focused on Wikipedia.

[Video: browsing Wikipedia in VR]

Continued →

Three approaches to VR lens distortion

Immersion requires a large field of view. This could be achieved by strapping a large spherical display to your face, but alas, such technology is prohibitively expensive. A more affordable way to increase the field of view is to look at small, ubiquitous rectangular displays through lenses:

[Figure: why VR needs lenses]

Lenses placed close to your eyes greatly increase your field of view, but at a cost: the image becomes spherically distorted, and the larger the field of view, the more distorted the image. This post is a quick summary of three approaches to undistorting the image, all of which have been implemented in JavaScript for various WebVR-related projects.
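
Whatever the approach, the core of the math tends to be a radial polynomial applied to each point relative to the lens center. A minimal sketch, with illustrative coefficients `k1` and `k2` (the real values depend on the lens):

```js
// Apply radial (barrel) distortion to a point p = {x, y} expressed
// relative to the lens center. The same function, with fitted or inverted
// coefficients, underlies the undistortion pass.
function distort(p, k1, k2) {
  const r2 = p.x * p.x + p.y * p.y;          // Squared radius from center.
  const scale = 1 + k1 * r2 + k2 * r2 * r2;  // Radial polynomial.
  return { x: p.x * scale, y: p.y * scale };
}
```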

Continued →

Embedding VR content on the web

During a two-week trip to India, I took over 1000 shots, including photos, videos, and a few photospheres. A picture is worth a thousand words, but how many pictures is a photosphere worth? We may never know, but I digress. My favorite photosphere was of friends posing inside one of the turrets of the Jaigarh Fort:

[Embedded photosphere: Jaigarh Fort]

I captured it using the photosphere camera that ships with Android, and embedded it in my blog using VR View, which launched today. VR View lets you include an interactive photosphere right in your website, which is especially fun on mobile, where the image reacts directly to your phone's movements. You can view it in full-screen mode, and even in Cardboard mode (mobile only).

But you know what's cooler than a photosphere? A stereo photosphere! Luckily, you can capture stereo photospheres using Cardboard Camera, and then embed them with VR View too. You can even embed mono or stereo videos. Check out the docs for more info. I'm eager to hear what you think!
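
If you want to try it on your own site, an embed boils down to an iframe pointing at the hosted VR View player. Here's a sketch, with the URL and parameter names written from memory (check the docs for the authoritative versions; the image URL below is a placeholder):

```js
// Embed a photosphere by pointing an iframe at the hosted VR View player.
const iframe = document.createElement('iframe');
iframe.width = '100%';
iframe.height = '300';
iframe.allowFullscreen = true;
iframe.src = 'https://storage.googleapis.com/vrview/index.html' +
    '?image=' + encodeURIComponent('https://example.com/photosphere.jpg') +
    '&is_stereo=false';  // Set to true for Cardboard Camera captures.
document.body.appendChild(iframe);
```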

Simulating wealth inequality

Economic inequality is rising in the US. A viral video from several years ago made this abundantly clear:

[Video: wealth inequality in the U.S.]

The gap between desire, expectation, and reality is truly shocking, and it inspired me to learn more: first, whether inequality is actually a big problem, and then to better understand two issues the video above did not address:

  1. How did the US become so economically unequal?
  2. How can this inequality be reduced?

My answers come in the form of simple simulations. For example, the following simulation has two agents with different salaries but the same spending habits. You can play with it yourself!
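
The interactive version isn't reproduced here, but its core logic is tiny. A minimal sketch, assuming "same spending habits" means the same absolute annual spending:

```js
// Two agents earn different salaries but spend identically; whatever is
// left over accumulates as wealth, year over year.
const agents = [
  { name: 'A', salary: 40000, wealth: 0 },
  { name: 'B', salary: 80000, wealth: 0 },
];
const ANNUAL_SPENDING = 30000;

for (let year = 0; year < 40; year++) {
  for (const agent of agents) {
    agent.wealth += agent.salary - ANNUAL_SPENDING;
  }
}

// After 40 years, A has 400,000 while B has 2,000,000: a 5x wealth gap
// from a 2x salary difference.
agents.forEach(a => console.log(`${a.name}: ${a.wealth}`));
```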

In the first part of this post, I provide some background on economic inequality: how to measure it, the various forms it takes, and whether or not it's a problem. In the last part, I try to explain how we got to the status quo and how inequality can potentially be reduced. Rather than just making claims, I use simulations like the one above to defend them. This way, you can see more clearly where I'm coming from, and if you disagree, you can build your own simulation with better assumptions.

Continued →

Sensor fusion and motion prediction

A major technical challenge for VR is making head tracking as good as possible. The metric that matters is called motion-to-photon latency: for mobile VR, this is the time it takes for a user's head rotation to be fully reflected in the rendered content.

[Figure: the motion-to-photon pipeline]

The simplest way to get up and running with head tracking on the web today is to use deviceorientation events, which are well supported across browsers. However, this approach suffers from several drawbacks that can be remedied by implementing our own sensor fusion, and we can do even better by predicting head orientation from the gyroscope.
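
To make that concrete, here's a minimal sketch of one classic sensor-fusion technique, a complementary filter, on a single axis. This illustrates the idea rather than the polyfill's actual implementation; axis conventions and units vary across browsers and devices, and `ALPHA` is an illustrative constant:

```js
const ALPHA = 0.98;  // Trust the gyro short-term, the accelerometer long-term.
let pitch = 0;       // Estimated rotation about the device's x axis, in radians.
let lastTime = null;

window.addEventListener('devicemotion', (e) => {
  if (lastTime === null) { lastTime = e.timeStamp; return; }
  const dt = (e.timeStamp - lastTime) / 1000;
  lastTime = e.timeStamp;

  // Integrating the gyroscope is smooth and responsive, but drifts.
  const gyroPitch = pitch + e.rotationRate.beta * (Math.PI / 180) * dt;

  // The gravity vector gives an absolute, drift-free (but noisy) estimate.
  const g = e.accelerationIncludingGravity;
  const accelPitch = Math.atan2(g.z, g.y);

  pitch = ALPHA * gyroPitch + (1 - ALPHA) * accelPitch;
});
```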

I'll dig into these techniques and their open web implementations. Everything discussed in this post is implemented and available as open source in the WebVR Polyfill project. If you want to skip ahead, check out the latest head tracker in action, and play around with this motion sensor visualizer.

Continued →

Hot bread: delicious or deadly?

Despite free access to information via the Internet and an increasingly global world, people still hold all sorts of divergent ideas about how the world works. For example, did you know that eating hot bread and pastries is incredibly unhealthy? Indeed, it can even lead to complete bowel obstruction! I learned this fact as a kid growing up in the Soviet Union, and understandably, I have been very careful to avoid eating hot baked goods ever since. That is, until recently, when my American girlfriend questioned the validity of my belief and I began to harbor some doubts. I decided to check whether it was actually true, and asked Google. The results were very clear: I had fallen prey to an old wives' tale. My worldview, shattered.

Incredulous, I searched for the same thing in Russian and arrived at the opposite conclusion. "What's up with that?" I thought, and wrote this post.

Continued →

UbiComp and ISWC 2015

I recently returned from ISWC 2015, where I presented the Cardboard Magnet paper. In addition to seeing old friends, meeting new ones, and being inspired by some interesting research, it was an excellent excuse to visit Osaka, Japan! This year, ISWC was co-located with UbiComp, and the combined conference had four tracks. This post is by no means exhaustive; it covers just some of the more interesting work I got a chance to see.

Continued →

Magnetic Input for Mobile VR

It's easy to do; just follow these steps:

  1. Cut two holes in a box
  2. Put your phone in that box
  3. Look inside the box

And that's the way you do it.

Your smartphone is now in a box, so how do you do input? Now that we have a paper accepted to ISWC 2015, I can tell you!
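
The gist (the details are in the paper): pulling the Cardboard magnet sharply disturbs the phone's magnetometer, and that spike can be detected as a click. Here's a minimal sketch of the idea using the Generic Sensor API's Magnetometer, which is only exposed in some browsers behind flags and permissions; the threshold is illustrative, and the real detector is considerably more robust:

```js
const THRESHOLD = 30;  // Jump in field magnitude (in µT) treated as a pull.
let lastMagnitude = null;

const sensor = new Magnetometer({ frequency: 60 });
sensor.addEventListener('reading', () => {
  const magnitude = Math.hypot(sensor.x, sensor.y, sensor.z);
  if (lastMagnitude !== null &&
      Math.abs(magnitude - lastMagnitude) > THRESHOLD) {
    console.log('Magnet pulled: treat it as a click.');
  }
  lastMagnitude = magnitude;
});
sensor.start();
```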

Continued →