
My Backup Strategy

A friend recently asked me to help make her backup process more sane. I wrote her a long e-mail, and now I’m rewriting it as a post here for future reference (because I recently forgot what I was doing and had to find that e-mail).

Requirements

  • Backups of 3 classes of assets:
    • Irreplaceable and frequently used (a.k.a., FU) — current projects, photos
    • Replaceable and FU — stock art, downloaded or ripped media
    • Irreplaceable and not FU — archived projects, other external systems (game consoles and such)
  • Few total drives
  • Always-on incremental backups for IFU data
  • Occasional snapshot backups for RFU and INFU data
  • Bootable clone of primary desktop
  • No NAS
  • No cloud
  • No DVDs

The last three are controversial, but that’s what we had discussed, so I’m sticking to it for now.

My strategy requires three drives (in addition to the drive in the computer).

Drive 0 — Primary Drive

The one in the computer.

Drive 1 — Incremental Backup of Drive 0

Time Machine backups that are always running for IFU data. Also storage for RFU data and any other disk-based caches you may need, since it’s always connected. I’m also adding INFU here as a simplifying constraint for Drive 3.

You can either partition the drive and tell Time Machine to use one of the partitions, or leave the whole drive as a single partition and just use folders to manage your other storage (which is what I do). Time Machine just needs its Backups.backupdb folder, and it doesn’t care what else is on the drive.

One partition is simpler to manage, but it allows Time Machine to fill the whole drive. You might want to limit the size of the Time Machine backup (do you really need weekly backups from 6 years ago?), which you can only do by limiting the size of the partition it’s on, but you have to guess the right sizes for your partitions. I don’t see tremendous benefit to either scheme. The platter is spinning just as much either way.

It shouldn’t be a problem to leave this drive connected all the time so that Time Machine can run. I’ve been buying Seagate drives for a while. They’re fast and you can’t hear them over the presumed fan noise from your computer.

Frequency: continuous
Capacity: 2× Drive 0, plus whatever you need for asset storage

Drive 2 — Bootable Clone of Drive 0

This SuperDuper! clone contains the same IFU data as your Time Machine backups (assuming they both ran at the same moment), except you can boot from it in an emergency, or if you just want to boot your OS on a different piece of hardware for the hell of it.

You can actually run SuperDuper! and Time Machine on the same drive. I considered having two bit-for-bit identical drives, both left plugged in all the time and both holding Time Machine and SuperDuper! backups, sort of like a crappy RAID 1, so that if one drive fails you have another identical drive. But then one virus that takes out both drives leaves you with no backups.

Frequency: every so often
Capacity: equivalent to Drive 0, since it’s a clone

Drive 3 — Non-Bootable Clone of Drive 1

I keep this drive off-site and only back it up about once a month. Here’s my logic:

  • Your IFU data exists on Drive 0, Drive 1, and Drive 2. Makes sense, since it’s the most important.
  • Your RFU data exists only on Drive 1. But it’s R, so it’s just inconvenient, not catastrophic, if you lose that drive.
  • Your INFU data exists only on Drive 1. It’s I, so you don’t want to lose it, but it’s not changing anymore, so occasional snapshots for backup are good enough.

So snapshots of Drive 1 are a good enough backup for your RFU and INFU data, and you would have to lose 3 drives before you needed to get IFU data from here.

Frequency: occasionally (about once a month)
Capacity: equivalent to Drive 1, since it’s a clone

That’s It

So far, I’ve never had to recover from anything catastrophic, but I’ve used my backup drives for convenience many times. This strategy has worked well for me, and it’s ultimately only cost me a few hundred dollars over the course of about a decade.

Extremely Large File Uploads with nginx, Passenger, Rails, and jQuery

We have to handle some really frackin’ huge uploads (approaching 2 TB) in our Rails-Passenger-nginx application at work. This results in some interesting requirements:

  1. Murphy’s Law guarantees that uploads this big will get interrupted, so we need to support resumable uploads.
  2. Even if the upload doesn’t get interrupted, we have to report progress to the user since it’s such a long feedback cycle.
  3. Luckily, we can restrict the browsers we support, so we can use some of the advanced W3C APIs (like File) and avoid Flash.
  4. Only one partition in our appliance is large enough to contain a file that size, and it’s not /tmp.

For the first three requirements, it seemed like the jQuery File Upload plugin was a perfect fit. For the last, we just needed to tweak Passenger to change the temporary location of uploaded files…

Many Googles later, I realized that option is only supported in Apache and my best bet was the third-party nginx upload module. But its documentation is fairly sparse, and getting it to work with the jQuery plugin was a lot more work than I anticipated.

Below is my solution.

nginx and the Upload Module

The first step was recompiling nginx with the upload module. In our case, this meant modifying an RPM spec and rebuilding it, but in general, you just need to extract the upload module’s tarball to your filesystem and reference it in the ./configure command when building nginx:

./configure --add-module=/path/to/nginx_upload_module ...

Once that was built and installed, I added the following section to our nginx.conf:

# See http://wiki.nginx.org/HttpUploadModule
location = /upload-restore-archive {

  # if resumable uploads are on, then the $upload_field_name variable
  # won't be set because the Content-Type isn't (and isn't allowed to be)
  # multipart/form-data, which is where the field name would normally be
  # defined, so this *must* correspond to the field name in the Rails view
  set $upload_field_name "archive";

  # location to forward to once the upload completes
  upload_pass /backups/archives/restore.json;

  # filesystem location where we store uploads
  #
  # The second argument is the level of "hashing" that nginx will perform
  # on the filenames before storing them to the filesystem. I can't find
  # any documentation online, so as an example, say we were using this
  # configuration:
  #
  #   upload_store /tmp/uploads 2 1;
  #
  # A file named '43829042' would be written to this path:
  #
  #   /tmp/uploads/42/0/43829042
  #
  # I hope that's clear enough. The argument is required and must be
  # greater than 0. You can see the implementation here:
  #
  #  http://lxr.evanmiller.org/http/source/core/ngx_file.c#L118
  upload_store /backup/upload 1;

  # whether uploads are resumable
  upload_resumable on;

  # access mode for storing uploads
  upload_store_access user:r;

  # maximum upload size (0 for unlimited)
  upload_max_file_size 0;

  # form fields to be passed to Rails
  upload_set_form_field $upload_field_name[filename] "$upload_file_name";
  upload_set_form_field $upload_field_name[path] "$upload_tmp_path";
  upload_set_form_field $upload_field_name[content_type] "$upload_content_type";
  upload_aggregate_form_field $upload_field_name[size] "$upload_file_size";

  # hashes are not supported for resumable uploads
  # https://github.com/vkholodkov/nginx-upload-module/issues/12
  #upload_aggregate_form_field $upload_field_name[signature] "$upload_file_sha1";
}

That’s a literal copy-and-paste from the config. I’m including the comments here because the documentation wasn’t as explicit as I apparently needed it to be.

Some important points:

  • Valery Kholodkov, the author of the upload module, has written a protocol defining how resumable uploads work. You should definitely read it and understand the Content-Range and Session-Id headers (there’s a rough request sketch just after this list).
  • I can’t find any documentation on “nginx directory hashes”. That comment is the best I could do to explain it.
  • Once the upload is completely finished, the module sends a request to a given URL with a given set of parameters. That’s what upload_set_form_field and upload_aggregate_form_field are for, so you can make the request look like a multipart form submission to your application.
  • The module supports automatic calculation of a SHA1 (or MD5) hash of uploaded files, presumably implemented as a filter during the upload to save time. I would’ve liked to have that hash passed to Rails for verification of the file, but it’s unsupported for resumable uploads. I’m leaving that setting commented out for future developers’ sakes.
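
To make the protocol concrete, here’s a rough Ruby sketch of a single chunk upload. Everything in it is illustrative: the host, the file name, the chunk size, and the session ID are made up, and the header names follow my reading of the protocol rather than a definitive spec.

# A sketch of one resumable chunk upload to the nginx upload module.
# Illustrative values only; not production code.
require 'net/http'

file       = 'archive.tar'                       # hypothetical file
chunk_size = 8 * 1024 * 1024                     # 8 MB chunks
total      = File.size(file)
chunk      = File.binread(file, chunk_size, 0)   # first chunk

uri = URI('http://localhost/upload-restore-archive')
req = Net::HTTP::Post.new(uri.request_uri)
req['Session-Id']          = '12345'             # made-up session ID (the app derives one by hashing the filename)
req['Content-Type']        = 'application/octet-stream'
req['Content-Disposition'] = %(attachment; filename="#{file}")
req['Content-Range']       = "bytes 0-#{chunk.bytesize - 1}/#{total}"
req.body = chunk

res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
puts res.body   # nginx replies with the range it has so far, e.g. "0-8388607/..."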

At this point, I was able to use curl to upload files and observe what was happening on the filesystem. The next step was configuring the jQuery plugin.

The jQuery File Upload Plugin

This plugin is extremely full-featured and comprehensively documented, which was exactly the problem I had with it. I needed something in between the basic example and the kitchen sink example, and the docs were spread over a series of wiki pages that I personally had trouble following. A curse of plenty.

Here’s the essence of what I came up with (in CoffeeScript):

# We need a simple hashing function to turn the filename into a
# numeric value for the nginx session ID. See:
#
#   http://pmav.eu/stuff/javascript-hashing-functions/index.html
hash = (s, tableSize) ->
  b = 27183
  h = 0
  a = 31415

  for i in [0...s.length]
    h = (a * h + s[i].charCodeAt()) % tableSize
    a = ((a % tableSize) * (b % tableSize)) % (tableSize)
  h

sessionId = (filename) ->
  hash(filename, 16384)

$('#restore-archive').fileupload

  # nginx's upload module responds to these requests with a simple
  # byte range value (like "0-2097152/3892384590"), so we shouldn't
  # try to parse that response as the default JSON dataType
  dataType: 'text',

  # upload 8 MB at a time
  maxChunkSize: 8 * 1024 * 1024,

  # very importantly, the nginx upload module *does not allow*
  # resumable uploads for a Content-Type of "multipart/form-data"
  multipart: false,

  # add the Session-Id header to the request when the user adds the
  # file and we know its filename
  add: (e, data) ->
    data.headers or= {}
    data.headers['Session-Id'] = sessionId(data.files[0].name)
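    # (not shown: the real code eventually calls data.submit(), presumably from
    # the upload button's click handler, to actually start the upload)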

  # update the progress bar on the page during upload
  progress: (e, data) ->
    updateProgress(data.loaded, data.total)

Unlike the nginx config above, this example leaves out a lot of application-specific settings that aren’t relevant to getting the plugin to work with nginx.

Some important points:

  • I decided to use a simple JavaScript hashing function to hash the filename for the Session-Id. It might not need to be numeric, but all the nginx examples I read used numeric filenames, and the Session-Id is used directly by nginx as the filename on disk.
  • As noted in the comment, the response to an individual upload request is a plain-text byte range, which is also present in the Content-Range header. The plugin uses this value to determine the next chunk of the file to upload.
  • This means that in order to resume an upload, the first chunk of the file must be re-uploaded. Then nginx responds with the last successful byte range, and the plugin will start from there on the next request. This can be momentarily disconcerting, since it looks like the upload has started over. Set your chunk size accordingly. (A continuation of the earlier Ruby sketch follows this list.)
  • You must set multipart: false for resumable uploads to work. I missed that note in the protocol, and I wasted a lot of time trying to figure out why my uploads weren’t resuming.
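
Continuing the Ruby sketch from the nginx section, resuming amounts to re-sending the first chunk with the same Session-Id, reading back how much nginx already has, and carrying on from the next byte. As before, this is only a sketch; it assumes the response range uses inclusive byte offsets, standard Content-Range style, so double-check against what your nginx actually returns.

# Resume a hypothetical interrupted upload: `res` is the response to
# re-sending the first chunk (see the earlier sketch).
have   = res.body[/-(\d+)\//, 1].to_i    # "0-8388607/104857600" => 8388607
offset = have + 1
while offset < total
  chunk = File.binread(file, chunk_size, offset)
  req = Net::HTTP::Post.new(uri.request_uri)
  req['Session-Id']          = '12345'
  req['Content-Type']        = 'application/octet-stream'
  req['Content-Disposition'] = %(attachment; filename="#{file}")
  req['Content-Range']       = "bytes #{offset}-#{offset + chunk.bytesize - 1}/#{total}"
  req.body = chunk
  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  offset += chunk.bytesize
end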

At this point, I could interrupt an upload, resume it by simply uploading the same file again, and I had a lovely progress bar to boot. The last step was making sure Rails worked.

Rails

All the hard work has been done by the time Rails even realizes somebody’s uploading something. The controller action looks exactly like you’d expect it to:

class ArchivesController < ApplicationController
  def restore
    archive = RestoreArchive.new(params[:archive])

    if archive.valid? && archive.perform!
      head(:ok)
    else
      render json: { errors: archive.errors.full_messages }, status: :error
    end
  end
end
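
For reference, given the upload_set_form_field and upload_aggregate_form_field directives above, params[:archive] shows up looking roughly like this. The values are illustrative, not captured from a real request; the path in particular depends on your upload_store setting and the Session-Id.

# Approximately what the nginx upload module hands to Rails as params[:archive]
{
  "filename"     => "archive.tar",
  "path"         => "/backup/upload/5/12345",      # upload_store dir + hashed subdirectory
  "content_type" => "application/octet-stream",
  "size"         => "2094657536"
}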

The view suffers a bit, since the jQuery plugin wants to own the form and nginx has its configuration hard-coded:

<!-- The fileupload plugin takes care of all the normal form options -->
<form>
  <input id="restore-archive" type="file" data-url="/upload-restore-archive">
  <%= button_tag 'Upload and Restore', id: 'restore-upload-button', type: 'button' %>
</form>

That’s about it.

Success!

It was pretty sweet once it worked, but the journey was arduous. Hope this helps some people.

Who Deserves My Money?

I recently backed App.net after years of wishing I understood Twitter. Much has been said about App.net’s pricing structure — $50 per year to be a member — and it got me thinking about which other services I pay for on either a monthly or yearly basis. Who am I happy to pay? Who do I pay because I have to?

These are the services I thought of in approximately the order I thought of them:

  • AT&T U-Verse ($48) — 12 Mbps↓, 1.5 Mbps↑. No television or landline service, although they snail mail me once a week with tales of the money I’d save by bundling said services and…paying them more money.
  • AT&T Wireless (~$70) — 200 MB of data, ∞ minutes of voice, no text messages (I send essentially all iMessages). Of the major carriers, this is among the cheapest iPhone post-paid plans that I’m aware of. I’ll be evaluating my options soon since my contract has expired.
  • GitHub ($7) — five private repos, one private collaborator. I don’t write much open source code anymore, but I do occasionally deploy a few private sites with Capistrano, and we use it at work all the time.
  • Instapaper ($1) — far and away the best return-on-investment of all these services. I read in Instapaper almost every day on all my devices. It’s indispensable.
  • Site5 (~$4) — basic shared hosting plan. It’s where this site lives. Shared hosting keeps getting cheaper, but I got tired of changing providers a few years ago, so I continue giving my money to these guys. They’re good.
  • Railscasts ($9) — Ryan Bates deserves my money. He deserves everyone’s money.
  • Typekit (~$5) — Portfolio plan. So far, I’m actually only using this on my resume. (How could I not take advantage of Brandon Grotesque?)
  • Hover (~$1) — one domain registered (you’re looking at it). I like Hover, at least as much as I’ve used it.
  • App.net (~$5) — standard user account. Not being a Twitter user, I don’t have much to say about it re: Twitter. I can follow here practically everyone I’d follow there, and I like that Dalton is making money without advertisers.

This clearly doesn’t include one-time digital content purchases like software and music, but it’s fascinating to see that I pay about three times as much money to AT&T for access to the content I’m interested in as I do to the service and content providers themselves, and to consider how reluctant we web users are to pay even a minimal cost for a service we might love (like Instapaper) while inundating the infrastructure companies with hundreds of our dollars a month.

Still a Happy Mac User

A storm took out my power for about two hours while I was working from home last Friday. It was a minor annoyance, since I was working over the VPN and couldn’t finish up what I was doing, so I decided to buy a CyberPower UPS from Amazon. Between Friday and today, when I was finally able to unbox it and plug it in, I had lost power three more times for a total of fifteen hours and burned up my cable modem. But that’s not important.

The UPS came with a CD of software. With no intentions of installing said software, I glanced at the instructions for Windows and OS X:

[Photo: instructions for using a CyberPower UPS on Windows and OS X]

Here’s what they say, paraphrasing slightly:

Windows Users: Installing PowerPanel® Personal Edition

When you first get a new CyberPower UPS, you’ll need to install some software on your computer to control your UPS and begin using it.

  1. Place the CD in your CD drive and wait for the setup wizard to begin. If the wizard does not begin, go to your CD drive in “My Computer” and open the “PowerPanel® PE” folder and double click “Setup.exe”.
  2. Follow the instructions on your screen and complete the installation. The default settings offered by the installation wizard are acceptable for most users and can be changed at any time if necessary.
  3. After the setup is complete, plug the USB cord from your CyberPower UPS to an available USB port on your computer.
  4. You are now ready to begin using the PowerPanel® Personal Edition software.

Mac Users: Configuring the “Energy Saver” UPS Function

When you first get a new CyberPower UPS, you’ll need to configure the Mac UPS function to control your UPS and begin using it.

  1. Plug the USB cord from your CyberPower UPS to an available USB port on your computer.
  2. Go to “System Preferences” and open the “Energy Saver” control panel.
  3. Select settings for “UPS”. You are now ready to configure the settings for the UPS.

No third-party bloatware to install or plastic disc to lose; just a single setting in System Preferences. I’m not as crazy about Apple’s software as I used to be, but I’m still more than happy to stay away from Windows.

GoDaddy Scumbaggery

Many moons ago, I registered a domain at GoDaddy. I knew how to navigate their shit-tastic UI from my client work, and they were at least marginally less reprehensible back then. Since I planned to use the domain for a while, I stored my credit card information and turned on auto-renewal.

Times change. I no longer need the domain, and GoDaddy sucks. A few months ago, they started sending me auto-renewal reminders and warning me that the credit card on file had expired. I knew that, and I didn’t care about the domain, so I just ignored the e-mails figuring that the charge would fail and the domain would be released. I’m nothing if not passive-aggressive.

Today, while unsubscribing from unwanted e-blasts collected in my spam folder, I saw a notice for a successful auto-renewal of that domain. I never updated my card’s expiration date at GoDaddy. I thought they might have been lying or joking, but the charge showed up on my account. Shocking, no?

I have to assume that they guessed the new expiration date of my card. It seems like my expiration dates advance three years at a time, so maybe that’s the first thing they try with expired cards. I don’t know if this is generally accepted practice, but it disgusted me.

My time is worth more than the $12 charge, so I’m not going to dispute it. I did release the domain and remove my payment information, and if I could figure out how, I’d cancel my account altogether.

Fuck those guys.

“To iterate is human…”

“…to recurse, divine.”

That’s one of maybe five of these fifty programming quotes I can recall at will. I actually used it today. Super proud about this one!

At work, our thing uses a tree structure with a Rails model like this:

class Node < ActiveRecord::Base
  belongs_to :parent, :class_name => 'Node'
  has_many :children, :class_name => 'Node', :foreign_key => :parent_id
end

We needed the “path” from any given node back to the root of the tree. It was originally implemented as a named scope and an instance method that called that scope:

scope :path, lambda { |id| 
  {
    :select => 'parent.*',
    :joins => ', nodes AS parent',
    :conditions => ['nodes.left BETWEEN parent.left AND parent.right AND nodes.id = ?', id],
    :order => 'parent.left'
  }
}

def path
  self.class.path(id)
end

It was performing terribly on large data sets. Not only was the query performance bad, but we were only ever using the return value of the named scope as an array. There was no need for all the overhead of a named scope. We determined that it would actually be faster to hit the database n times for a node at depth n than to join all the ancestors in one query.

My first thought was to collect the node’s parents in a loop:

def path
  node = self
  parents = [node]
  while parent = node.parent
    parents.unshift parent
    node = parent
  end
  parents
end

It worked. Tests passed. Performance was astronomically better (down to 1ms from 270,000ms). I almost checked it in, but I hesitated. I heard the faint voice of L. Peter Deutsch, snickering at me in that way he almost certainly must.1 “Loops?” he said. “Gfaw.”

In about two minutes, I came up with this:

def path
  parent.path << self rescue [self]
end

Shit! I mean, the syntax might be a little wonky if you don’t read Ruby, but that’s magic. From a named scope with a lambda doing some kind of SQL query that stumped three professional programmers2 to one beautiful recursive line. I’m patting myself on the back pretty hard!
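
One caveat with the rescue modifier: it swallows any StandardError raised while loading the parent, not just the NoMethodError you get when parent is nil. If that makes you twitchy, an explicit conditional is almost as tight:

def path
  parent ? parent.path << self : [self]
end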

Footnotes

  1. If your quip about programming made it into a well-known list, you probably snicker at bad programmers.
  2. I still don’t know why we were filtering by left and right values and an ID.

My Pet Theory on Popular Music

Kurt Andersen wrote a great article at Vanity Fair detailing how American culture hasn’t changed in the past two decades. He goes into a lot of detail, covering art, fashion, industrial design, and architecture, but I’ve noticed this specifically about music, or at least popular music. No genre has dominated music — and consequently style — since grunge and gangster rap in the early 90s.

Some Loose Reasoning

Since the gramophone became the dominant commercial recording format in the early 1900s, popular musical styles have come and gone very nearly with each passing decade:

  • 1920–40: swing dancing and big-band jazz groups, like Lawrence Welk, Duke Ellington, and Count Basie
  • 1940–50: the decline of swing, the rise of bebop and cool jazz (Bird, Diz, Miles)
  • 1950–60: rock-n-roll (Elvis, Chuck Berry, Jerry Lee Lewis)
  • 1960–70: hippie and counter-culture rock (The Beatles, The Grateful Dead, Dylan)
  • 1970–80: disco (Village People, The Bee Gees, Kool and the Gang)
  • 1980–90: new wave and hair metal (The Cure, Aerosmith)
  • 1990–95: grunge and gangster rap (Nirvana, Dr. Dre)
  • 1995–present: ?

These are generalizations. Yes, the 70s produced more music than just disco. Yes, rap was around long before Dr. Dre. Yes, pop-punk had a good run in the late 90s.

But I’m looking at Top 40 cultural phenomena. When you think 70s music, you think disco. When you think 90s music (and you’re white), you think grunge. When you think 00s music, you think…of nothing in particular.

What happened?

Further Analysis

When I bring this up, a lot of people tell me that “pop” is the defining genre of the 00s, but that’s some kind of weird recursion. “Pop music” is exactly that: popular music. We only call Ke$ha “pop” because she doesn’t evoke a better description. Other than production quality and vernacular, how different is a Lady Gaga song from a Madonna song from a Gloria Gaynor song? “Pop” is too generic to incite a movement. It’s just furniture.

What else? R&B hasn’t changed. Rock hasn’t changed. Country hasn’t changed. Electronic hasn’t changed. And I can’t think of a single genre in Top 40 music today that wasn’t around twenty years ago.

Can you imagine Nickelback on MTV in 1992? Of course. Can you imagine Soundgarden on American Bandstand in 1972? Not one bit.

Theory

Andersen makes several strong arguments in his piece, this being my favorite:

In some large measure, I think, it’s an unconscious collective reaction to all the profound nonstop newness we’re experiencing on the tech and geopolitical and economic fronts. People have a limited capacity to embrace flux and strangeness and dissatisfaction, and right now we’re maxed out. So as the Web and artificially intelligent smartphones and the rise of China and 9/11 and the winners-take-all American economy and the Great Recession disrupt and transform our lives and hopes and dreams, we are clinging as never before to the familiar in matters of style and culture.

As it relates to music, technology has been even more disruptive. Look at this chart from Business Insider:

[Chart from Business Insider: the decline of music industry revenues, 1973–2009]

Revenues dropped precipitously around the turn of the century, inarguably because of Napster and affordable high-speed broadband. But Napster was more than just a means of mindlessly stealing songs: it democratized the process of discovering music.

Of the people I know who regularly pay for music — whether it’s from iTunes, Amazon, record stores, or live shows — none of them listens to FM radio. There are so many more sources of music discovery now. You can:

  • shuffle your iPod,
  • subscribe to a podcast like All Songs Considered,
  • browse sites like PureVolume and SoundCloud,
  • listen to playlist generators like Pandora and last.fm,
  • watch independently produced music videos on YouTube, or
  • exist on a social network.

We have such sophisticated artificial and human recommendation engines. Why would you listen to a generic radio station that’s required by contract to rotate the same nonsense day in and day out?

The early 90s were the last stand for Clear Channel radio and big music. Back then, if they decided to make something huge, they could push it to every radio station and MTV affiliate in Western culture. Now our attention is too divided. They can push the songs, but the people who hear them aren’t the trendsetters. Maybe they’ll buy an album or a ticket, or most likely a single, but there’s just no culture behind the music anymore.

Onward

Plenty of artists are still releasing great music. I’m not arguing against that. Some of it even goes mainstream. I just don’t think that we’ll ever again witness a sea change in popular music like we did repeatedly throughout the 20th century, and we’re no worse for it.

Code Hygiene

The second law of software thermodynamics:

“Your code tends toward higher entropy.” (Brandan Lennox, 2012)

The Problem

The more code we write, the harder it is to tell useful code from useless code, and the longer it takes to turn the latter into the former.

The Amalgam

These examples all come from our Rails project at work.

  1. For three and a half years, the code, which isn’t public or user-accessible, still contained the default README and an empty doc subdirectory.
  2. We use Workling for background jobs. Mainline development on Workling stopped in 2009, and it’s no longer compatible with Rails 3.1. Same with Machinist.1 Same with Spork.2 Our adoption of a project is a death sentence.
  3. Until recently, we had four separate subdirectories housing executable scripts: bin, script, runners, and utils. Only script is Rails-sanctioned (and necessary).
  4. We have to execute asset precompilation in a custom Rails environment — i.e., we can’t use the production environment like you would expect. I don’t know why. In addition to that, one co-worker recently created his own environment for development. I guess he wasn’t happy with development or production, so he combined the two. So now we have 66% more execution environments than a normal Rails project.3
  5. Speaking of environments, we have a custom initializer for our test environment that adds 10,000 to the current autoincrement value for the id column of certain tables (a sketch of what that looks like follows this list). That’s how we fixed a conflict between Postgres, fixtures, and Machinist. We’ve switched from Machinist to FactoryGirl, but we left that initializer in place because insert reason here.
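
For the curious, that initializer amounts to something like the sketch below. This is a reconstruction rather than the actual file, the table names are stand-ins, and since Postgres uses sequences rather than a literal autoincrement column, it’s really the id sequences being bumped.

# config/initializers/bump_test_sequences.rb (hypothetical reconstruction)
# Push each table's id sequence 10,000 past its current value so fixture ids
# and generated ids can't collide.
if Rails.env.test?
  conn = ActiveRecord::Base.connection
  %w[nodes archives].each do |table|                 # stand-in table names
    seq     = "#{table}_id_seq"
    current = conn.select_value("SELECT last_value FROM #{seq}").to_i
    conn.execute("SELECT setval('#{seq}', #{current + 10_000})")
  end
end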

The Stink

It’s not just a matter of refactoring. I find a lot of code that isn’t even executed anymore. It’s just taking up disk space and attention, but you can’t know that until you spend the time to figure out what it does.

And it’s not just a matter of compulsion. The problems I listed above really do hurt us:

  1. Okay, maybe not this first one, but…
  2. We missed a release date because of a bug in Workling that we didn’t find until the very end of the test cycle. By then, we didn’t have time to replace Workling with a current tool, so we hacked around the bug and released. Is that going to bite us once it’s out in the field?
  3. If I want to create a new executable script, which directory am I supposed to put it in? What’s the difference between a runner, a script, and a util? Who’s executing these scripts, and do they have a good reason for this arbitrary directory layout? Why am I having to make this decision?
  4. Why did someone have to create a precompile environment? How is it different from production? How do we know if a future version of Rails fixes whatever problem we had with asset precompilation?
  5. We spent so many hours fixing problems with Machinist that were never going to be fixed by its developer. But FactoryGirl seems to work fine. Do we still need to futz with these autoincrement values? It slows down our tests and represents a significant break from normal development. Is something else depending on that behavior, or can we get rid of it?

The Cleansing

I need to learn to balance my compulsion for pristine code with my requirement to produce. It wasn’t a big deal when I worked by myself because the projects were small, short-lived, and completely owned by me. Now, it’s not just a poor use of time, but it creates tension between you and me if I’m always rewriting your code without good reason.

In related news, I’m going to start writing a lot more code for myself :-)

Footnotes

  1. Stuck in 2.0.0.beta2 since July 2010.
  2. Spork 0.9.0 will be compatible with Rails 3.1, but 0.9.0.rc9 was released in June 2011 and somehow still hasn’t made it to final. The last commit on GitHub was November 8, 2011. Curiously, the only reason I know about Spork was a flurry of activity early last fall. What happened to it?
  3. Not to mention the selenium environment I recently removed. We haven’t run Selenium directly against our Rails app in at least the two years that I’ve been working there.

Simple Pleasures

iOS 5 has delivered my most anticipated feature: custom text tones. I can finally hear the Super Mario Kart coin ding every time I get a message.

I found a lot of low-quality MP3s of this sound (and the coin sound from Super Mario Bros., which is different). They all sucked. Thanks to BSNES and Audio Hijack Pro, I created a pristine copy. You can download the ringtone file in MP3 or AAC.1

Footnotes

  1. iTunes requires the .m4r extension, but it’s plain old AAC.