Boris Smus

Responsive WebVR, Headset Optional

VR on the web threatens to cleave the web platform in twain, like mobile did before it. The solution then and the solution now is Responsive Web Design, which lets websites scale well across all form factors. Similarly, for VR to succeed on the web, we need to figure out how to make VR experiences that work in any VR headset, and also without a headset at all.

Various head mounted displays.

WebVR boilerplate is a new starting point for building responsive web VR experiences that work on popular VR headsets and degrade gracefully on other platforms. Check out a couple of demos: a simple one and one ported from MozVR.
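The responsive idea boils down to a capability check at startup. A minimal sketch, assuming the original WebVR API's `navigator.getVRDisplays()`; the mode names and the decision function are illustrative, not part of the boilerplate itself:

```javascript
// Pure decision logic: render in VR if a headset is available,
// otherwise fall back to an ordinary "magic window" view.
function chooseRenderMode(vrDisplays) {
  return vrDisplays && vrDisplays.length > 0 ? 'vr' : 'magic-window';
}

// Browser wiring (skipped outside a browser with WebVR support).
if (typeof navigator !== 'undefined' && navigator.getVRDisplays) {
  navigator.getVRDisplays().then((displays) => {
    console.log('render mode:', chooseRenderMode(displays));
  });
}
```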

Web Sensor API: Raw and Uncut

Sensors found in smartphones define the mobile experience. GPS and the magnetometer enable the fluid experience of maps, motion sensing enables activity recognition and games, and of course the camera and microphone allow whole categories of rich media applications. Beyond these now obvious examples, sensors can also enable clever inventions, such as Cycloramic, which used the vibration motor in iPhones (4 and 5) to rotate the phone and take a panorama, push-up counters which use the proximity sensor to count repetitions, and Send Me To Heaven, which uses the accelerometer to determine the flight time of a phone thrown vertically as high as possible. I've had some experience using and abusing sensors too, most recently for the Cardboard magnet button.
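The Send Me To Heaven trick is simple projectile physics: a phone thrown straight up spends half its airborne time rising, so the height follows from the measured flight time alone. A back-of-the-envelope sketch (the function name is mine, not the app's):

```javascript
// Height reached by a phone thrown straight up and caught at the same
// height, given total airborne time t (seconds): it rises for t/2,
// so h = (1/2) * g * (t/2)^2 = g * t^2 / 8.
const G = 9.81; // gravitational acceleration, m/s^2

function heightFromFlightTime(t) {
  const tUp = t / 2;
  return 0.5 * G * tUp * tUp; // metres
}

// A two-second flight corresponds to roughly 4.9 m of height.
console.log(heightFromFlightTime(2));
```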

However, over the last couple of years, I've had to step away from the web as a development platform, in part because of the poor state of sensor APIs. In this post, I will describe some of the problems, take a look at sensor APIs on iOS and Android, and suggest a solution in the spirit of the extensible web manifesto.
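For concreteness, here is roughly what raw motion-sensor access looks like on the web: an event callback at a browser-chosen rate, with no way to request a specific sampling frequency. The norm helper is mine, separated out from the browser wiring:

```javascript
// Vector magnitude of an accelerometer sample.
function magnitude(x, y, z) {
  return Math.sqrt(x * x + y * y + z * z);
}

// Browser wiring: the devicemotion event delivers samples at whatever
// rate the browser chooses -- one of the API's limitations.
if (typeof window !== 'undefined') {
  window.addEventListener('devicemotion', (e) => {
    const a = e.accelerationIncludingGravity;
    console.log('|a| =', magnitude(a.x, a.y, a.z).toFixed(2));
  });
}
```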

UIST 2014 Highlights

This year's UIST was held in Waikiki, Honolulu, the undisputed tourist capital of Hawaii. I've stuck to my now three-year-old habit of taking notes and posting my favorite work. Since last year, the conference has grown an extra track. The split was generally OK for me, with my track mostly dedicated to user interface innovation (sensors, etc.) and another more concerned with crowdsourcing, visualization, and more traditional UIs.

My overall feeling was that the research was mostly interesting from a tech perspective, but focused on solving the wrong problem. For example, at least five papers/posters/demos were focused on typing on smartwatches. The keynotes were very thought-provoking, especially when juxtaposed with one another.

Cardboard: VR for Android (Google I/O 2014)

David Coz, Christian Plagemann and I had the honor of giving a talk at I/O a few weeks ago, introducing Cardboard, a simple way to turn your smartphone into a VR headset. Watch the video here.

I'm especially excited about two Cardboard-related things. Firstly, the magnet-based button, which I worked on, was generally liked and deemed clever! I'm hoping to write more about the technical details of this in the future, but this TechCrunch summary was music to my ears, thanks guys!

The magnet slides within its groove, then automatically slips back into a place because of another magnet on opposite side. Your phone is able to sense the magnet’s movement, allowing it to act as a ridiculously clever little button. Yeesh.

Secondly, we've released the VR toolkit on github for you to experiment with. The whole thing will be fully open sourced soon, but for the time being, a JAR file, javadoc, and a sample application are provided on the Cardboard developer site.
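At a high level, the magnet button works because a pull on the sliding magnet shows up as a sharp jump in the field strength reported by the phone's magnetometer. This is a simplified illustration of that idea, not the actual detector in the toolkit; the threshold and function names are made up:

```javascript
// Simplified magnet-button idea: flag a "click" when the latest
// magnetometer magnitude deviates sharply from the recent baseline.
// The threshold (in microtesla) is an illustrative guess.
function detectMagnetPull(magnitudes, threshold = 30) {
  if (magnitudes.length < 2) return false;
  const baseline =
    magnitudes.slice(0, -1).reduce((sum, m) => sum + m, 0) /
    (magnitudes.length - 1);
  return Math.abs(magnitudes[magnitudes.length - 1] - baseline) > threshold;
}
```

The real detector has to be more careful, e.g. distinguishing the pull-and-release gesture from the phone simply rotating in the Earth's magnetic field.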

Spectrogram and Oscillator

A live-input spectrogram written with Polymer and the Web Audio API.

Screenshot of spectrogram

If you're running Chrome or Firefox, see it in action. Once the spectrogram is running, see if you can make a pattern with your speech or by whistling. You can also click anywhere on the page to turn on the oscillator. For a mind-blowing effect, load this in a parallel tab.
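The plumbing behind a live-input spectrogram looks roughly like this: microphone input feeds an AnalyserNode, and each animation frame reads one column of frequency-bin magnitudes to paint. A sketch in modern (unprefixed) Web Audio, not the demo's exact code:

```javascript
// Frequency (Hz) at analyser bin i, for a given sample rate and fftSize.
function binFrequency(i, sampleRate, fftSize) {
  return (i * sampleRate) / fftSize;
}

// Browser wiring: mic -> AnalyserNode -> one painted column per frame.
async function startSpectrogram(onColumn) {
  const ctx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048; // 1024 frequency bins per column
  ctx.createMediaStreamSource(stream).connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  (function frame() {
    analyser.getByteFrequencyData(bins);
    onColumn(bins); // paint one spectrogram column
    requestAnimationFrame(frame);
  })();
}
```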

Addressable apps

It is human nature to create taxonomies for everything: people, places, and things. Without such a system of reference, we become lost and disoriented. Imagine your city with street names and addresses blanked out. Finding your favorite cafe, meeting up with your friend on the weekend, even locating your own parked car would become incredibly difficult. Travel outside your city would become far more challenging.

The web's defining property is addressability. URLs on the web are like street names and addresses in the physical world. This makes sharing and cross-linking easy. Non-web platforms are a little bit like our city with blanked out street names and addresses. There's no good way of talking about where you currently are, or how to get somewhere else. These platforms typically give users a crutch to help with the issue, such as a share button or dialog. But these create an inherently inferior experience, since addressability is no longer built-in. Addressability becomes a burden on the app developer, and as a result, the platform is no longer navigable.
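Built-in addressability is cheap on the web: every view change updates the URL, and a tiny router parses the URL back into app state. A sketch with a hypothetical photo app's routes (the URL scheme and function names are invented for illustration):

```javascript
// Parse a path back into app state,
// e.g. '/album/hawaii/photo/42' -> { album: 'hawaii', photo: '42' }.
function parseRoute(path) {
  const m = path.match(/^\/album\/([^/]+)(?:\/photo\/([^/]+))?$/);
  if (!m) return null;
  return { album: m[1], photo: m[2] ?? null };
}

// Browser wiring: navigating within the app updates the address bar,
// so any screen is linkable and shareable.
if (typeof window !== 'undefined') {
  const showPhoto = (album, photo) =>
    history.pushState({ album, photo }, '', `/album/${album}/photo/${photo}`);
}
```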

In light of the success of Android and iOS, and given a potential explosion in new types of lower-power computing (wearables, IoT, etc.), it's unclear if browsers will be as ubiquitous as they are today (at least in the near term). I'm very interested in seeing if and how non-web platforms can embrace URLs. How closely coupled are URLs to HTML, and do they make sense without a presentation layer?

The Ebb of the Web

Tech pundits like to lament that the web has no viable future, while web idealists hold that in fact the web is totally fine, with a "too big to fail" sort of attitude.

At the root of this disagreement are poorly defined terms. The web can mean many different things to different people. Though it started from a pretty abstract notion of a series of interlinked documents, it has now evolved to refer to a very specific technology stack of hyperlinked HTML documents styled with CSS, enhanced with JavaScript, all served on top of HTTP. In light of an increasing movement away from desktop-style computing, we've seen a big shift away from the web in mobile platforms.

Let's take apart this gob of web technology in light of the increasingly complex landscape of computing and try to make sense of what the web is and where it's going.

A framework for webiness

Remote Controls for Web Media

When the world wide web was first conceived, it was as a collection of interlinked textual documents. Today's web is full of rich media. YouTube and other video sites alone consume an enormous 53% of all internet traffic. Web denizens often have an open audio player in one of their tabs. Web-based photo sharing services such as Flickr are the most common way of enjoying photos on our computers. The remote control, the foundations of which are attributed to everyone's favorite inventor Nikola Tesla in patent US613809, has been the preferred way of controlling media for over half a century.

Yet the only way we can control all of this web media is via the on-screen user interfaces that the websites provide. The web has no remote control, and this is a big usability problem. Many use the desktop versions of streaming services like Spotify and Rdio rather than their web players, exclusively because of Mac media key support. For scenarios where you're far from the screen, like showing friends a slideshow of photos on a TV, the lack of remote controllability is a non-starter.

This post is a concrete proposal for what remote controls for the web should be like. To get a sense for how it might feel, try a rough prototype.
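To give a flavor of what such hooks can look like, here is a sketch using the Media Session API, which browsers later shipped for exactly this purpose: a page declares metadata and action handlers, and hardware media keys or lock-screen controls drive them. The registry indirection is mine, not part of the API:

```javascript
// Wire play/pause actions to an audio element via a registration
// callback, so the mapping itself is independent of the browser API.
function registerMediaHandlers(audio, register) {
  register('play', () => audio.play());
  register('pause', () => audio.pause());
}

// Browser wiring: hand the handlers to the Media Session API, which
// routes media keys and lock-screen controls to them.
if (typeof navigator !== 'undefined' && 'mediaSession' in navigator) {
  const audio = document.querySelector('audio');
  navigator.mediaSession.metadata = new MediaMetadata({ title: 'Demo track' });
  registerMediaHandlers(audio, (action, fn) =>
    navigator.mediaSession.setActionHandler(action, fn)
  );
}
```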
