Tech pundits like to lament that the web has no viable
future, while web idealists insist that the web is totally fine, with
a "too big to fail" sort of attitude.
At the root of this disagreement are poorly defined terms. The web can
mean many different things to different people. Though it started from a
pretty abstract notion of a series of interlinked documents, it has now
evolved to refer to a very specific technology stack: hyperlinked HTML
delivered over HTTP. As computing increasingly moves away from the
desktop, mobile platforms have shifted sharply away from the web.
Let's take apart this gob of web technology in light of the increasingly
complex landscape of computing and try to make sense of what the web is
and where it's going.
When the world wide web was first conceived, it was as a collection of
interlinked textual documents. Today's web is full of rich media.
YouTube and other video sites alone consume an enormous 53% of all
internet traffic. Web denizens often have an open audio player in one of
their tabs. Web-based photo sharing services such as Flickr are the most
common way of enjoying photos on our computers. The remote control,
whose foundations are attributed to everyone's favorite inventor,
Nikola Tesla, in patent US613809, has been the preferred way of
controlling media for over half a century.
Yet the only way we can control all of this web media is via the
on-screen user interfaces that the websites provide. The web has no
remote control, and this is a big usability problem. Many people use
the desktop versions of streaming services like Spotify and Rdio
rather than their web players, exclusively because of Mac media key
support. For scenarios where you're far from the screen, like showing
friends a slideshow of photos on a TV, the lack of remote
controllability is a dealbreaker. This post is a concrete proposal for
what a remote control for the web should be like. To get a sense for
how it might feel, try a rough prototype.
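As a point of reference, browsers did eventually grow a standard hook
for exactly this: the MediaSession API. It is not the proposal
developed in this post, and the track URL and metadata below are made
up, but it gives a flavor of what remote controllability looks like
when the page can declare its media and its control handlers:

```ts
// A sketch using the browser MediaSession API (not this post's
// proposal; track URL and metadata are hypothetical). Hardware media
// keys and lock-screen controls invoke these handlers, which is
// exactly the "remote control" surface plain web pages lack.
const audio = new Audio("/music/track1.mp3");

navigator.mediaSession.metadata = new MediaMetadata({
  title: "Track 1",
  artist: "Some Artist",
  album: "Some Album",
});

navigator.mediaSession.setActionHandler("play", () => { audio.play(); });
navigator.mediaSession.setActionHandler("pause", () => audio.pause());
navigator.mediaSession.setActionHandler("nexttrack", () => {
  // advance the playlist here
});
```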
I just got back from Scotland, where I had the pleasure of attending
UIST 2013 in St. Andrews. This was my second time attending, and again
the content was incredibly engaging and interesting. I was impressed
enough to take notes, just like at my last UIST in 2011. What
follows are my favorite talks with demo videos. I grouped them into
topics of interest: gestural interfaces, tangibles and GUIs.
By amping up the softest instances of a song to make them closer in volume to the loudest instances, you can create the perception of loudness. This process is called dynamic range compression and is routinely done (and over-done) by recording engineers to create ever-louder music, colloquially referred to as "The Loudness War".
At first glance, this may sound like a theory devised by a crotchety old man in a rocking chair yelling at his loud teenaged neighbors to "turn the goddamned music down!" but The Echo Nest just proved that it's actually a thing.
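For the curious, here's roughly what that compress-then-boost move
looks like with the Web Audio API's DynamicsCompressorNode. The
parameter values below are illustrative, not taken from any real
mastering chain:

```ts
// A compress-then-boost chain: the compressor flattens the dynamic
// range, and the makeup gain pushes the whole (now flatter) signal
// back up, so quiet passages land close to the peaks in volume.
const ctx = new AudioContext();

const compressor = ctx.createDynamicsCompressor();
compressor.threshold.value = -40; // dB: start compressing well below full scale
compressor.knee.value = 10;       // dB: soften the transition at the threshold
compressor.ratio.value = 12;      // 12 dB in -> 1 dB out above the threshold
compressor.attack.value = 0.003;  // seconds: clamp down on peaks quickly
compressor.release.value = 0.25;  // seconds: let go slowly

const makeupGain = ctx.createGain();
makeupGain.gain.value = 4; // boost the squashed signal back up

// Route any <audio> element through the chain.
async function playLoud(el: HTMLAudioElement): Promise<void> {
  const source = ctx.createMediaElementSource(el);
  source.connect(compressor);
  compressor.connect(makeupGain);
  makeupGain.connect(ctx.destination);
  await el.play();
}
```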
About a year ago, I wrote an overview of many of the different responsive
image approaches in an HTML5Rocks article, all of which try to solve
the same underlying problem:

Serve the optimal image to the device.
Sounds simple, but the devil's in the details. For the purposes of this here
discussion, I will focus on optimal image size and fidelity, and much to your
chagrin, will completely ignore the art direction component of the problem.
Even for tackling screen density alone, many of the solutions out
there involve a lot of extra work for web developers. I'll go into two
solutions on the horizon, one client-side and one server-side, that
serve the right-density images. In both cases, very little is required
of you as a developer.
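For contrast, here's the kind of plumbing you'd otherwise write by
hand. The data-base attribute and "@2x" filename convention below are
my own hypothetical examples, not part of either upcoming solution:

```ts
// A hand-rolled client-side sketch of density-based image selection.
function bestSrc(base: string): string {
  const dpr = window.devicePixelRatio || 1;
  // High-density screens get the 2x asset; everyone else gets 1x.
  return dpr > 1.5 ? `${base}@2x.jpg` : `${base}.jpg`;
}

// Upgrade every <img data-base="/images/photo"> on the page.
document.querySelectorAll<HTMLImageElement>("img[data-base]").forEach((img) => {
  img.src = bestSrc(img.dataset.base!);
});
```

The promise of the solutions discussed next is precisely that you
shouldn't have to write this sort of plumbing yourself.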
Jake used to get easily distracted by his smartphone:
Maybe you can handle that temptation. Maybe you've got willpower. That's great for you, but for me, willpower alone didn't cut it. [...]
Checking email, checking Twitter, checking news. Wondering if something interesting was happening anywhere in the world. Wondering if anybody was thinking about me.
Jake fixed his problem by doing three simple things.
The phone in your pocket is an amazing, fluid, multi-functional tool.
When it comes to talking to other devices, such as your TV or laptop,
the user experience drops off sharply. Bill Buxton speaks
eloquently on the subject, describing three stages of
high tech evolution:
1. Device works: feature completeness and stability
2. Device flows: good user experience
3. Many devices work together
But connecting devices is a pain, and we have been squarely at stage 2
since the release of the iPhone. There are many competing approaches
to device-to-device communication: Bluetooth, Bluetooth LE, WiFi
Direct, discovery over the same local WiFi network, and many others.
This post is dedicated to
attacking this problem from an unexpected angle: using ultrasound to
broadcast and receive data between nearby devices. Best of all, the
approach uses the Web Audio API, making it viable for pure web
applications.
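The post's actual protocol isn't reproduced here, but a toy sketch
conveys the idea: encode each bit as one of two near-ultrasonic tones
with an OscillatorNode, and detect the dominant tone with an
AnalyserNode on the receiving end. The frequencies, slot length, and
detection threshold below are illustrative guesses:

```ts
// Toy ultrasonic signaling: one bit per time slot, encoded as one of
// two near-ultrasonic tones (binary frequency-shift keying).
const FREQ_ZERO = 18500; // Hz, tone for a 0 bit (hypothetical choice)
const FREQ_ONE = 19500;  // Hz, tone for a 1 bit (hypothetical choice)
const SLOT_MS = 100;     // duration of each bit slot

const ctx = new AudioContext();

// Sender: schedule one oscillator per bit, back to back.
function sendBits(bits: number[]): void {
  let t = ctx.currentTime;
  for (const bit of bits) {
    const osc = ctx.createOscillator();
    osc.frequency.value = bit ? FREQ_ONE : FREQ_ZERO;
    osc.connect(ctx.destination);
    osc.start(t);
    osc.stop(t + SLOT_MS / 1000);
    t += SLOT_MS / 1000;
  }
}

// Receiver: poll the microphone's spectrum and report whichever of
// the two tones dominates, if either is loud enough to be a signal.
async function listen(onBit: (bit: number) => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const freqs = new Uint8Array(analyser.frequencyBinCount);
  const binHz = ctx.sampleRate / analyser.fftSize;
  setInterval(() => {
    analyser.getByteFrequencyData(freqs);
    const zero = freqs[Math.round(FREQ_ZERO / binHz)];
    const one = freqs[Math.round(FREQ_ONE / binHz)];
    if (Math.max(zero, one) > 128) onBit(one > zero ? 1 : 0);
  }, SLOT_MS);
}
```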
Another great performance from Bret Victor (circa 1973). Firstly, this
is probably the most succinct but complete argument for why Java should
not be the primary focus for undergrad Computer Science programs.
About two years ago, Alan Kay said that "the best way to predict the
future is to invent it". This talk is a great reminder that even when
inventing a possible future, you're likely to be way off.
As usual, I want two conflicting things. Firstly, I want to own the
content I write, and control how it is authored. My weapon of choice is
MacVim and Lightning, a static blog engine I wrote to
address my very specific requirements.
Secondly, I want people to read the things I write and follow the
stories that I link to, since it feels good, and sometimes generates
interesting discussions. I wrote a Mac GUI that automates link blogging
and POSSE-style cross-posting to social networks.