Welcome, Friends

to a weblog of nonsense and occasional insight

Random access

I’m ashamed to admit that at the age of 24, I still—still—have to mentally scan through parts of the alphabet to find which letter precedes some other letter. I’ve known the English alphabet by heart since I was very small, but if you ask me, say, which letter comes before L, my thought process goes something like this:

—before L, alright… H–I–J–K–L— K, it’s K.

I have to scan from H through the letter in question to find that K precedes L. I don’t have any other way of knowing that K comes right before L without scanning forward, sequentially, until I come across the right letter.

I really hope I’m not the only one who lacks this ability. But I think I’m not alone.

As children we learn our letters in order, from A to Z. Some of us memorize chunks, one following another, then put them together. Others, like me, learned with the help of an 18th-century French folk melody sung with the letters of the alphabet as its lyrics. But both of these methods rely on forward-seeking, sequential rote memorization of not just the letters themselves but, crucially, the order of those letters.

Most of us, I imagine, split the alphabet into sections. We probably don’t do this consciously, but when you’re learning the alphabet as a small child, remembering the sounds and order of 26 seemingly random symbols is a tall order. Breaking the string of letters into digestible pieces makes memorization and recall easier.

But as adults we suffer the consequences. My mental alphabet is still broken along those lines. When finding K before L earlier, I started scanning letters at H because H is the start of one of my alphabet sections:

A B C D E F G—H I J K L M N O P—Q R S—T U V—W X Y Z

(I’m probably not the only one with those divisions—it’s the damn song’s fault!)

We should stop teaching the alphabet sequentially. While I agree it’s important for us to share a common alphabet structure, there’s no reason at all to learn or teach the alphabet in its arbitrary order. We should instead present the alphabet to children as a doubly-linked list, teaching the following points about each letter:

  • the shape of the glyph,
  • the phonemes to which the sounds of this letter contribute,
  • the letter preceding this one, and
  • the letter following this one.

By memorizing the preceding and following letters of each alphabet member, children will automatically acquire the forward and backward structure of the alphabet. They’ll be able to recite the alphabet backwards without learning anything extra, but more importantly, they’ll know the relationship of each letter to its neighbors. Their alphabet will be truly random-access.
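For the programmers among us, here’s a minimal JavaScript sketch of that structure; the payoff is that neighbor lookups need no scanning at all:

var letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'.split('');
var alphabet = {};

// Build the doubly-linked list: each letter records both of its neighbors.
letters.forEach(function(letter, i) {
    alphabet[letter] = {
        previous: letters[i - 1] || null,   // null at the A end
        next:     letters[i + 1] || null    // null at the Z end
    };
});

alphabet['L'].previous;   // "K", no scanning from H required
alphabet['K'].next;       // "L"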

If you’ll excuse me, I’m off to give it a try.

Context in JavaScript

Over the last couple of years I’ve learned that JavaScript is very easy to write badly and very hard to write well. Its storied, massively political history and its unique position as the Web’s only native scripting language have led to a complicated syntax and implementation, with lots of gotchas and few accepted code conventions.

(There have been various small-time projects to use other scripting languages, like Ruby through a JRuby applet bridge. These have largely disappeared into irrelevance.)

It took me time and practice to learn that beneath a tangle of asynchronicity, callbacks, and ham-fisted abuses of jQuery lies a language every bit as powerful and expressive as, say, Ruby. But the freedom and ubiquity we enjoy knowing that our code can reach as-near-as-makes-no-difference one hundred percent of Web users comes at a price.

That price is a deep, blackened seam of incongruities and implementation differences, compromises and limitations. These can be hard to accept and harder to work around. But there are some nuances of JavaScript—the language itself—that cause developers grief too. Thankfully, these issues usually have real, concrete solutions.

Let’s talk about function context. Imagine we’re writing a web client for an app called Book Keeper, and we’re working on a module for fetching and managing a list of books and some user-generated favorites. It could look something like this:

BookKeeper.Library = {
    books:     {},
    favorites: {},

    init: function() {
        this.loadData('/books/list',     this.cacheBooks);
        this.loadData('/favorites/list', this.cacheFavorites);
        // ...
    },

    loadData: function( url, onSuccess ) {
        var xhr = new XMLHttpRequest();

        xhr.onreadystatechange = function() {
            if (xhr.readyState === 4 && xhr.status === 200) {
                onSuccess( xhr.response );
            }
        };

        xhr.open('get', url);
        xhr.send();
    },

    cacheBooks: function(books) {
        // process the returned library books
        this.books = books;
    },

    cacheFavorites: function(favorites) {
        // process the returned favorites
        this.favorites = favorites;
    }
};

That’s a simplified example, but it illustrates the problem. We have two objects to store data, an init function that calls other functions in the module, a loadData function that sets up an AJAX call to the server, and a pair of callback functions for processing books and favorites once we receive them. When we’re ready to load some data from the server, we pass loadData a URL and one of the callbacks to execute when the data arrives successfully.

When the page loads, Book Keeper's startup function calls BookKeeper.Library.init(). Looks good. The AJAX call was made successfully and our server log shows the data was sent to the browser. Now let’s check the data by entering BookKeeper.Library.books in the JavaScript console, and—

{ }. The same empty object we defined above. Wait—what? We set this.books right there in the cacheBooks function! Shouldn’t that, y’know, set that data?

Function context in JavaScript, as addressed with the this keyword, is tricky. The simplest way to understand it is that this refers to whatever is to the left of the last dot at the call site. So BookKeeper.Library.init’s context (and therefore its this value) is BookKeeper.Library. And that’s why our cacheBooks function failed: when it was invoked with onSuccess( xhr.response ), there was nothing to the left of the function name, so this fell back to the global object, and this.books quietly became window.books.
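A quick illustration, using a throwaway object rather than Book Keeper itself:

var shelf = {
    label: 'fiction',
    describe: function() { return this.label; }
};

shelf.describe();              // "fiction": this is shelf, the thing left of the dot

var detached = shelf.describe;
detached();                    // undefined: no dot, so this is the global object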

But how could we have? We can’t call BookKeeper.Library.onSuccess because no such function exists. onSuccess is a bare reference to the cacheBooks function, detached from the object it was defined on. We need a way to specify the context for that function so its this value is what we expect.

JavaScript has a pair of native functions that do just that, and they differ only slightly. First is call: writing onSuccess.call(someContext) invokes the function referenced by onSuccess (cacheBooks in this case) with someContext as its this value.

There’s one wrinkle, though: inside the onreadystatechange handler, this no longer refers to BookKeeper.Library (the browser invokes that handler with the XHR object as its context), so we first capture loadData’s this in a variable. Since loadData is invoked as BookKeeper.Library.loadData, its this is BookKeeper.Library, and that’s the context we hand to call. When cacheBooks then saves its data to this.books, it’s really storing it in BookKeeper.Library.books.

Perfect—but we forgot the response argument! The call function passes arguments through positionally, one for one (i.e. onSuccess.call( context, value1, value2, value3 )). So our fixed function looks like:

loadData: function( url, onSuccess ) {
    var xhr  = new XMLHttpRequest();
    var self = this;  // capture BookKeeper.Library before entering the handler

    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // inside this handler, `this` is the XHR object, so use self
            onSuccess.call( self, xhr.response );
        }
    };

    xhr.open('get', url);
    xhr.send();
}

That ensures the onSuccess callback gets both the context and the argument it needs. The other function, by the way, is apply: it’s almost identical to call, but it accepts a context and an array of arguments rather than a list (i.e. onSuccess.apply( context, [value1, value2, value3] )).
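That array form is what makes apply useful whenever the argument list is assembled at runtime. The classic demonstration:

var ratings = [3, 5, 4, 2, 5];

// Math.max expects separate arguments, so apply spreads the array out.
// Math.max never touches this, so null is fine as the context.
Math.max.apply(null, ratings);   // 5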

This is pretty basic stuff, but I’m embarrassed to admit how long it took me to embrace this part of JavaScript. In the next post I’ll talk about a different way of passing context using Underscore.js.

Free As In “Will”

A pair of related blog posts were floating around Hacker News today: first, from John Mueller on Petapixel, a short but nicely-written post titled This Photo Is Not Free, and second, a response from Tristan Nitot’s Standblog titled, as expected, This photograph is free.

Mueller adds up the cost of all the camera hardware and processing software he’s purchased in order to produce a beautiful sunset photo:

12 + 2500 + 1600 + 210 + 200 + 130 + 60 + 1200 + 200 + 500 = $6,612

He concedes that, as a commercial photographer, his investment has produced much, much more than a single photo, but emphasizes that

…if you wanted to create it, from scratch, that is what is involved. So I consider it the replacement value if it’s stolen, or how much my lawyer will send you a bill for if it’s found being used without my permission.

That’s perfectly fair. Mueller purchased the equipment at personal expense, invested time and energy into practicing—and, presumably, mastering—the art of digital photography and post-processing, and as a result has not only the right to sell his work for any price he deems fair, but also a responsibility to do so. When a skilled craftsperson labors to produce a handmade product, he or she should under most circumstances charge for it so the work maintains a concrete value. I certainly do, because my work is worth it, and so is Mueller’s.

If you’re a maker too, remember that the value of your work is whatever you get from it, monetary or otherwise. But that’s why I said you should charge “under most circumstances”. There is a valid scenario wherein you intentionally don’t ask a price for your work, and Nitot—rather clumsily—brings it up in his post, saying

…that’s a silly way of looking at things. I have taken literally thousands of pictures with this camera, so the actual cost is under 1 EUR per photograph.

Yes, the arithmetic checks out, but it mostly misses the mark. Either you assign a fair monetary value to your work or you do not. Mueller assigns a value of $6,612 to each of his finished photos because that’s the raw value of the tools needed to produce it. But Nitot’s claim that each of his photos is worth under €1 undermines his next point, that

I don’t regret giving this photo to people I don’t know. It has cost me a little, but brought me a lot more in return… because I made it available to the world.

He goes on to claim that

I could have said that it cost me a lot of money. So I should not share it. And the picture would have stayed on my hard drive, far from the eyes of people who could “steal” it.

No. No, no. “Free” is a perfectly valid value to assign to his work! The value he receives from his photos comes in the form of satisfaction and generosity, knowing that free distribution of beautiful photos helps everyone. But Mueller’s approach, charging a fair price for a product of his training, talent, and investment—and fighting back attempts to use it for free—is equally valid.

Nitot also claims that

…there is not a lot of money to be made anymore by taking pictures.

I disagree, and so does Mueller. If you want to debate the relative merits of public-domain contributions and for-profit works, you must accept that both approaches are equally valid and fair. The only difference is the form of compensation the creator wishes to receive.

There will be a market for paid photos as long as there are professional photographers and people who use their services. If you choose to release your work for free, that’s wonderful, and I’m personally grateful for that, but don’t give your work away because you don’t think there’s monetary value in it.

The Key to Failure

My better half recently suffered a security breach. At some point, an industrious data thief discovered or sniffed out her email address and password. Since she, like so many people, used the same simple, memorable password for several different online accounts, the thief gained access to her private information and forwarded her incoming email to a drop-box account set up in her name at another email provider.

I can’t articulate how frustrated I am that this happens so often. In my former life as a Genius at Apple, I had to explain several times a week to rational, intelligent people that their privacy had been compromised and that they had to assume all their data had been exposed publicly. In the moments after revealing that a thief had been rifling through their personal information, people don’t appreciate a lesson on good password security. They don’t care if it’s their fault or not. What matters is that a system they had been told time and time again was safe had proven to be unsafe, and they paid the price by losing control over an important part of their private lives.

This is unacceptable.

Most of the world’s technorati won’t have experienced casual data theft, since we know how and why to force SSL/TLS connections to our web services, and we know about secure passwords and passphrases, and we know about public-key cryptography. Good for us—not so good for everyone else. There may not be a one-size-fits-all solution to this problem (if there is, I’d love to hear about it!), but there are steps we can take to help protect people from thieves. The trouble is, however, that it’s really hard to convince people that taking steps to protect their data is important until they suffer a breach themselves.

Still, we have to try. Setting someone up with a tool like 1Password is a good start, but you need to be thorough. If the recipient has a laptop, a smartphone, and an iPad, they’ll need 1Password set up on each device they use to log into accounts. That probably means three 1Password purchases and installations, and you’ll have to take the time to walk the user through its normal use and answer any questions he or she has.

Alternatively, you can take some time to talk about good password security. I’m convinced that using a memorable, multi-word passphrase (e.g. h1gh school 1993-hair! for Facebook) is categorically better and more secure than a heavily-munged, forgettable password of equivalent length (wrp4!f33$Amp-chcMM1993, anyone? No?). But when using passwords of that length—and, of course, a different password for each account—it’s critical that memorability be taken into account. The whole scheme falls apart as soon as the user forgets just one password under the new method. He or she will start to drift back into old habits, not because he or she doesn’t care about security, but because a single reused password simply works better and frustrates less.
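To put rough numbers behind that claim, here’s a naive back-of-the-envelope sketch. It assumes an attacker brute-forcing the full printable-ASCII character set, which if anything flatters the munged password, since real attackers lean on dictionaries and patterns:

// Naive strength estimate: length times bits per character, assuming
// all ~95 printable ASCII characters are equally likely.
function naiveEntropyBits(password) {
    return Math.round(password.length * (Math.log(95) / Math.LN2));
}

naiveEntropyBits('h1gh school 1993-hair!');   // ~145 bits, and memorable
naiveEntropyBits('wrp4!f33$Amp-chcMM1993');   // ~145 bits, and forgettable

By this crude measure the two are equivalent; the difference is that only one of them will still be in your head next month.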

Our job, as good citizens of the technical world, is to teach other users to follow a password scheme which is 1) simple and 2) high-security. If we ignore either of those tenets we’re feeding our friends and relatives to the wolves.

Automating Git

Like many small software teams, we use Git for version control. Other than a bit of a learning curve, Git’s been a fabulous tool for us. I’ve noticed one rough edge it brings into my workflow, though: since Git is a collection of small, single-purpose tools (following the Unix philosophy), there are several steps involved in the process of synchronizing my code with the rest of my team. GitHub has implemented one solution to this problem with GitHub for Mac’s Sync feature, but I’ve noticed that it doesn’t quite fit my workflow.

At the moment, my team collaborates on individual Git branches, and each is numbered according to its release version. It’s probably not optimal, but that’s how we’re running right now. Since we don’t maintain a production-ready master branch into which our per-version work is merged, we don’t have a culture of branch-and-merge, branch-and-merge. That means that I use Git the simple way: when there are new changes on the remote server, I use git fetch to bring in other people’s commits, then git merge --no-ff to incorporate my changes into the public timeline, creating a merge commit rather than fast-forwarding.

I wrote a script to automate this process. It’s a bit amateurish, but it’s at least nice to have some automation.
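The script itself isn’t worth reproducing, but in outline it does the equivalent of this Node.js sketch (the branch name is a stand-in for whichever versioned branch we’re on):

// Sketch only: bring in the team's commits, then merge with an explicit
// merge commit (--no-ff) rather than fast-forwarding.
var exec = require('child_process').exec;
var branch = 'release-1.2';   // stand-in for our current per-version branch

exec('git fetch origin', function(err) {
    if (err) throw err;
    exec('git merge --no-ff origin/' + branch, function(err, stdout) {
        if (err) throw err;
        console.log(stdout);
    });
});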

I’ll be iterating on this script as time goes on, probably replacing the merge --no-ff step with a rebase -p as I start using branches more and more freely.

Small but so telling

While converting Piers.me to Typekit webfonts, I again stumbled across a silly little issue in Firefox’s CSS parsing. Anyone who develops for the Web will have dealt with this at some point. This CSS—

h1 {
    font: 4em "league-gothic-1", "league-gothic-2" sans-serif;
    letter-spacing: 0.02em;
    text-transform: uppercase;
    text-align: center;
}

—works fine in WebKit browsers and Opera. But in Firefox, the heading falls back to sans-serif. The reason’s pretty obvious: see where I’ve missed a comma between "league-gothic-2" and sans-serif? In Safari, Chrome, and Opera, that little mistake is forgiven. The renderer, of course, knows that sans-serif is a special value referring to the browser’s default sans-serif font, so the missing comma can be implied. Firefox doesn’t let you get away with it, though, and throws away the preceding font-family values instead.
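For the record, the corrected rule, comma restored, parses happily everywhere:

h1 {
    font: 4em "league-gothic-1", "league-gothic-2", sans-serif;
    letter-spacing: 0.02em;
    text-transform: uppercase;
    text-align: center;
}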

This seems pretty indicative of Firefox’s general lack of attention to detail. It’s a bit slow and a bit sloppy, and Mozilla will need to tighten its grip if it wants to stay competitive with Chrome and Safari. Accelerating major release numbers—à la Chrome—won’t help with that.

Nodes & all that

I’ve spent some time lately getting to know Node.js. It’s a server-side, event-driven I/O framework built atop Google’s V8 JavaScript engine, and for someone like me who’s comfortable with JavaScript, it’s intriguing. It feels strangely hands-off: you hand your code to the event loop and it decides when to run it.
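To give a flavor of what that means, here’s the canonical hello-world server: the entire behavior is a callback handed to the framework, and the event loop decides when it runs.

var http = require('http');

// The callback below is invoked once per incoming request.
http.createServer(function(request, response) {
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end('Hello from the event loop\n');
}).listen(8124);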

But I’m gradually getting more comfortable with its quirks. At first it seemed counterintuitive and strange to write server code in callbacks and event handlers, but it’s starting to make a lot more sense. Still, Ankur Blogoyal makes a good point: Erlang is built upon a foundation of event-driven I/O and asynchronous interprocess messaging, while JavaScript supports neither natively. As a result, the creators of Node.js must build these important features themselves.

Blogoyal says:

"Node.js appeals to people who already know Javascript and don’t want to learn a new language, not systems optimization people who can design software that load balances itself as well as Erlang."

I agree, but this statement misses the pragmatism of Node.js. It’s a crucial step in the evolution of modern server-side development, not because potential developers already know JavaScript, but because it brings a uniformity of design, a conceptual melding of back- and front-end development that can lead to better-built, more efficient web applications. JavaScript is the only choice for client-side browser scripting; given that constraint, it makes sense for a robust, full-featured server-side JavaScript framework to exist. The Node.js team still needs to implement interprocess JSON messaging and prove that it scales, but it’s all possible.

I’m excited to see more of Node.js as its adoption in the broader web community continues. A good sign is that its third-party development community is thriving. I’ll be working with it more (alongside Rails and Django, natch), and I’ll be sure to post interesting tidbits I come across.