Dan Tepfer on the interesting relationship between rhythm and pitch, two musical phenomena that at first glance seem completely distinct. The fascinating thing is that they are actually closely related. Fundamentally, it's a question of frequency: slow repetition (low frequency) is perceived as rhythm, while fast repetition (high frequency) is perceived as pitch, apparently by distinct subsystems. Quoth Dan:
For slow things, our consciousness distinguishes individual events and interprets them as what we call “rhythm”, such as the child tapping on her knees. For very fast things (like when the taps get very close together), our consciousness isn’t fast enough to distinguish the individual events, and our pitch hearing kicks in, guided by those tiny hairs in our cochlea, each one specialized in resonating at a certain pitch.
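To make the boundary concrete, here's a toy Python sketch (not from Dan's post): it builds an impulse "click train" and classifies its repetition rate against the roughly 20 Hz point where discrete clicks start to fuse into a tone. The 20 Hz cutoff is an assumption for illustration; real perception transitions gradually.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second
FUSION_HZ = 20.0     # rough boundary; actual perception transitions gradually

def click_train(rate_hz: float, seconds: float = 1.0) -> np.ndarray:
    """Impulse train: one unit click every 1/rate_hz seconds."""
    signal = np.zeros(int(SAMPLE_RATE * seconds))
    period = int(SAMPLE_RATE / rate_hz)
    signal[::period] = 1.0
    return signal

def percept(rate_hz: float) -> str:
    """Rough perceptual category of a periodic click train."""
    return "pitch" if rate_hz >= FUSION_HZ else "rhythm"

print(percept(3))    # a child tapping on her knees → "rhythm"
print(percept(440))  # concert A → "pitch"
```

Speed the same train up past the fusion point and the "taps" become a tone; nothing about the signal changes except its frequency.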
David Coz, Christian Plagemann and I had the
honor of giving a talk at I/O a few weeks ago, introducing Cardboard, a
simple way to turn your smartphone into a VR headset. Watch the video
I'm especially excited about two Cardboard-related things. Firstly, the
magnet-based button, which I worked on, was generally liked and deemed
clever! I'm hoping to write more about the technical details of this in
the future, but this TechCrunch summary was music to my ears:
The magnet slides within its groove, then automatically slips back
into place because of another magnet on the opposite side. Your phone is
able to sense the magnet’s movement, allowing it to act as a
ridiculously clever little button. Yeesh.
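The summary doesn't go into how the sensing works, but the broad idea is that pulling the magnet causes a sharp jump in the field measured by the phone's magnetometer. Here's a toy Python sketch of that kind of spike detection; the function names, units, and 30 µT threshold are all made up for illustration, and a real implementation would also debounce and filter the signal.

```python
import math

SPIKE_THRESHOLD_UT = 30.0  # made-up jump, in microtesla, that counts as a "click"

def field_magnitude(x: float, y: float, z: float) -> float:
    """Magnitude of a 3-axis magnetometer reading."""
    return math.sqrt(x * x + y * y + z * z)

def detect_clicks(magnitudes: list) -> int:
    """Count sudden jumps between consecutive magnitude samples."""
    clicks = 0
    for prev, cur in zip(magnitudes, magnitudes[1:]):
        if abs(cur - prev) > SPIKE_THRESHOLD_UT:
            clicks += 1
    return clicks

# A steady ambient field of ~50 uT, with one sharp pull-and-release:
samples = [50, 51, 50, 120, 52, 50]
print(detect_clicks(samples))  # → 2 (the jump up, then the snap back)
```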
Secondly, we've released the VR toolkit for you to
experiment with. The whole thing will be fully open sourced on GitHub
soon, but for the time being, a JAR file, javadoc, and a sample application
are provided on the Cardboard developer site.
If you're running Chrome or Firefox, see it in action. Once the
spectrogram is running, see if you can make a pattern with your speech
or by whistling. You can also click anywhere on the page to turn on the
oscillator. For a mind-blowing effect, load this in a parallel
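The demo itself runs on the browser's audio APIs, but the underlying computation is a plain short-time Fourier transform. Here's a minimal numpy sketch (the window size and hop length are arbitrary choices, and a real-time implementation processes audio incrementally rather than all at once):

```python
import numpy as np

def spectrogram(signal: np.ndarray, window: int = 1024, hop: int = 512) -> np.ndarray:
    """Magnitude spectrogram: FFT of successive Hann-windowed frames.

    Returns an array of shape (num_frames, window // 2 + 1):
    time along one axis, frequency along the other.
    """
    frames = [
        signal[start:start + window] * np.hanning(window)
        for start in range(0, len(signal) - window + 1, hop)
    ]
    return np.abs(np.fft.rfft(frames, axis=1))

# A pure 440 Hz whistle shows up as a single bright horizontal band.
sr = 44100
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec[0].argmax()
print(abs(peak_bin * sr / 1024 - 440) < sr / 1024)  # True: peak within one bin of 440 Hz
```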
It is human nature to create taxonomies for everything: people, places,
and things. Without such a system of reference, we become lost and
disoriented. Imagine your city with street names and addresses blanked
out. Finding your favorite cafe, meeting up with your friend on the
weekend, even locating your own parked car would become incredibly
difficult. Travel outside your city would become far more
The web's defining property is addressability. URLs on the web are like
street names and addresses in the physical world. This makes sharing
and cross-linking easy. Non-web platforms are a little bit like our
city with blanked out street names and addresses. There's no good
way of talking about where you currently are, or how to get somewhere
else. These platforms typically give users a crutch to help with the
issue, such as a share button or dialog. But these create an
inherently inferior experience, since addressability is no longer
built-in. Addressability becomes a burden on the app developer, and
as a result, the platform is no longer navigable.
In light of the success of Android and iOS, and given a potential
explosion in new types of lower-power computing (wearables, IoT, etc.),
it's unclear if browsers will be as ubiquitous as they are
today (at least in the near term). I'm very interested in seeing if and
how non-web platforms can embrace URLs. How closely coupled are URLs to
HTML, and do they make sense without a presentation layer?
Tech pundits like to lament that the web has no viable
future, while web idealists hold that in fact the web is
totally fine, with a "too big to fail" sort of attitude.
At the root of this disagreement are poorly defined terms. The web can
mean many different things to different people. Though it started from a
pretty abstract notion of a series of interlinked documents, it has now
evolved to refer to a very specific technology stack of hyperlinked HTML
served over HTTP. In light of an increasing movement away from desktop-style
computing, we've seen a big shift away from the web in mobile platforms.
Let's take apart this gob of web technology in light of the increasingly
complex landscape of computing and try to make sense of what the web is
and where it's going.
When the world wide web was first conceived, it was as a collection of
interlinked textual documents. Today's web is full of rich media.
YouTube and other video sites alone consume an enormous 53% of all
internet traffic. Web denizens often have an open audio player in one of
their tabs. Web-based photo sharing services such as Flickr are the most
common way of enjoying photos on our computers. The remote control,
the foundations of which are attributed to everyone's favorite inventor
Nikola Tesla in patent US613809, has been the preferred way of
controlling media for over half a century.
Yet the only way we can control all of this web media is via the
on-screen user interfaces that the websites provide. The web has no
remote control, and this is a big usability problem. Many use the
desktop versions of streaming services like Spotify and Rdio rather than
their web players, exclusively because of Mac media key support. For
scenarios where you're far from the screen, like showing friends a
slideshow of photos on a TV, the lack of remote controllability is a
This post is a concrete proposal for what a remote control for the web
should be like. To get a sense for how it might feel, try a rough
I just got back from Scotland, where I had the pleasure of attending
UIST 2013 in St. Andrews. This was my second time attending, and again
the content was incredibly engaging and interesting. I was impressed
enough to take notes, just like at my last UIST in 2011. What
follows are my favorite talks with demo videos. I grouped them into
topics of interest: gestural interfaces, tangibles and GUIs.
By amping up the softest parts of a song to make them closer in volume to the loudest parts, you can create the perception of loudness. This process is called dynamic range compression and is routinely done (and over-done) by recording engineers to create ever-louder music, a practice colloquially referred to as "The Loudness War".
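For the curious, here's a minimal sketch of downward compression in Python: sample magnitudes above a threshold are attenuated by a fixed ratio, shrinking the gap between loud and soft. The threshold and ratio are arbitrary, and real compressors smooth the gain with attack/release envelopes and then apply make-up gain so the quiet parts come out louder overall.

```python
import numpy as np

def compress(signal: np.ndarray, threshold: float = 0.5, ratio: float = 4.0) -> np.ndarray:
    """Hard-knee downward compression on sample magnitudes.

    The portion of each magnitude above `threshold` is divided by
    `ratio`, pulling loud samples down toward the quiet ones.
    """
    magnitude = np.abs(signal)
    over = np.maximum(magnitude - threshold, 0.0)
    squashed = np.where(magnitude > threshold, threshold + over / ratio, magnitude)
    return np.sign(signal) * squashed

loud_and_soft = np.array([1.0, -0.9, 0.1, -0.05])
print(compress(loud_and_soft))  # loud samples attenuated, quiet ones untouched
```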
At first glance, this may sound like a theory devised by a crotchety old man in a rocking chair yelling at his loud teenaged neighbors to "turn the goddamned music down!" but The Echo Nest just proved that it's actually a thing.
About a year ago, I wrote an overview of many of the different responsive
image approaches in an HTML5Rocks article, all of which try to solve the same fundamental problem:
Serve the optimal image to the device.
Sounds simple, but the devil's in the details. For the purposes of this here
discussion, I will focus on optimal image size and fidelity, and much to your
chagrin, will completely ignore the art direction component of the problem.
Even for tackling screen density, many of the solutions out there involve
a lot of extra work for web developers. I'll go into two solutions (client and
server side) on the horizon that serve the right density images. In both cases,
all you need to do is: