Curse You Apple

So it looks like I’ve managed to butcher the power adapter/power socket on my PowerBook. I’d kept it in absolutely pristine condition for the last two years or so, but a few months ago I accidentally kicked the power adapter, sending the cable flying out. No big deal - the green light came back on and all has been well.

Until now. For some reason I can’t get it to charge at all. I plug the cable in and nothing happens: no light comes on and no charge is picked up.

Onto the Apple Store I go to figure out how much it’s going to cost to replace. £55. And that’s ignoring the fact that I can’t order one because it’s showing delivery of 2-4 weeks.

My only option is to hope that the Apple Store on Regent Street carries them, that they have one in stock, and that I can somehow get over there in the next evening or so. Oh, and that it’s my power adapter at fault and not my PowerBook. If kicking the cable so that it pops out of the socket is enough to kill the socket, that has to be a particularly duff piece of design.

Ah well, fingers crossed it all works out. My Aperture and Rails dabblings will have to wait another day - I’ve just about used the last of my remaining charge to write this.

Extract Client Interface Refactoring

Think about how you refactor your code. You write the test, pass the test, then take a step back and see how you could improve things. It’s often not until you take that strategic view that you can decide whether you made the best decisions in the heat of battle. But, the beauty of the technique is you get that second look. Safely.

I was pairing a few days ago on my current project, and my pair and I managed to work ourselves into a bit of a twist over a single test. The end result was a couple of headaches and over an hour and a half lost. But it was the source of a lot of talking. One of the things Jeff (my pair) mentioned was something I’d never really thought about before, and a light went on.

The way we refactor code is from the wrong side.

A key part of the power of Test-Driven Development (and other such techniques) is that it forces you to think about what you’re trying to do before you dive in and just start doing it.

Say you’re writing a data provider for a chart that should look at all the transactions for a client and plot percentages. Already you’ve made a statement about your code and what its interactions with collaborating objects are. If you’re using mock objects to help write and design your code, chances are you’ll be writing expectations of how those dependent objects are to be used. The interface of your collaborating objects has been exposed through writing your test.
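
As a rough illustration of that (this isn’t the project’s actual code - ChartDataProvider, percentages and the use of the Mocha gem are all my own assumptions), a mock-based test for such a data provider might look something like:

require 'test/unit'
require 'mocha'  # assuming the Mocha gem for mocks and stubs

class ChartDataProviderTest < Test::Unit::TestCase
  def test_should_plot_percentages_from_the_clients_transactions
    # The expectation below *is* the interface we're asking our collaborator to expose
    client = mock('client')
    client.expects(:transactions).returns([stub(:amount => 75), stub(:amount => 25)])

    provider = ChartDataProvider.new(client)

    assert_equal [75, 25], provider.percentages
  end
end

The test never names a concrete domain class; the only thing it pins down is that whatever we hand the provider must answer transactions.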

One suggestion in the 2004 paper, Mock Roles, Not Objects (pdf), is that you should only mock types you own. It’s something I tend to forget, and will be trying harder not to in future. If you don’t, your test ends up coupled to code you don’t own, and a seemingly unrelated test could break should the external interface you depend on change. Instead, you write a thin wrapper that is defined closer to your need. This is one use of mocks - proving you can integrate well with other code - but, more importantly, mocks free you to focus on testing one thing at a time and to specify how you want to talk to other things.

All this is demonstration of a client interface-oriented approach.

As a developer sitting down to work, I’m interested in how my own type can use other types to get its job done. If I’m writing a chart data provider that plots the last 5 months’ worth of transactions, I’m not interested in whether I’m handed a client or a market; I just want something that implements the interface I’ve defined through writing my test. I shouldn’t be using concrete classes from the domain, but rather interfaces for the things I’m interested in. The client interface. I don’t care about anything else.

The key thing of note is that it’s the client that determines the interface. It’s not about what I as a class do, but what others do with me.

However, the refactoring tools and IDEs that I’ve used (a shamefully small sample, I know :) are all focused around the following steps:


  1. Extract an interface from the concrete class.

  2. Introduce selected members from the concrete class into the interface.

  3. (Potentially) replace dependent classes’ use of the concrete class with the interface.

Step 3 reveals the flaw. We’re deciding how to change the interface we expose to other classes, rather than asking those other classes what they actually need from us.

In the previous example of a type introduced as a result of writing some mock-based tests, our need-driven interface has been changed from the wrong side. That interface evolved through writing down how our type was going to be used, and it’s only in such a need-driven situation that we can understand the intent of the interface. Extracting an interface from the concrete class (the wrong side) means making decisions away from where the intention lives. Bad.

Instead, imagine a situation where we have a concrete class collaborating with a number of dependent classes. Wouldn’t it be great if you could refactor directly from within that situation, when you’re working with the class under test and (most importantly) surrounded by intent? The refactoring tool could show which members you’re using on those classes and suggest pulling them into a client interface. You’d make the refactoring decision based on what your type needs from others, rather than what your other types can give you, and at the point where you know what you’re actually using them for - when you understand what the object is.

If we’d had such a tool we could have spotted that we were mocking a third-party library, and missing a valuable opportunity to see what we were truly dependent on: a wrapper interface for what we actually needed. We weren’t really interested in a Market or a Client to do our work, just something that had Transactions. Our headaches would have been avoided!
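
In Ruby terms that wrapper could be nothing more than a thin, owned type exposing the one thing we cared about. Here’s a minimal sketch, where TransactionSource, the market_client argument and its fetch_all_trades call are all hypothetical names rather than the real library’s:

# A type we own, whose interface was driven by what our code needs.
class TransactionSource
  def initialize(market_client)
    @market_client = market_client
  end

  # The only question our code ever asks: what are the transactions?
  def transactions
    @market_client.fetch_all_trades
  end
end

Our tests could then mock TransactionSource - a type we own - rather than the third-party Market or Client, leaving a single integration test to prove the wrapper really talks to the library correctly.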

Small Mephisto Flickr Update

Since I pay for a sizeable Strongspace account, I thought I’d try to put my free TextDrive account to some use. After installing Mephisto and Rails 1.2, though, it turned out that I couldn’t get my plugin to work.

The updated code is back in the repository so all you should need to do is run

script/plugin install http://www.engross.org/svn/mephisto_plugins/mephisto_flickr_photo_stream/trunk/

Incidentally, the attempt to use TextDrive didn’t work out too well - my Rails process was killed by the TextDrive process police pretty shortly after I switched the DNS entry. Since the new build of OpenSolaris is out there should be changes afoot, but it sounds like that won’t be for a while.

Online Aperture Backup

Now that I’ve fully embraced the RAW digital camera revolution courtesy of Aperture and my Nikon D200, I’ve started filling my PowerBook’s (now puny) 80GB drive with photos. My current workflow looks as follows:


  1. Take photos

  2. Get photos into Aperture and onto PowerBook

  3. Post-process adjusting white balance etc.

  4. Post to Flickr with FlickrExport

  5. Backup vault to external Lacie hard-drive

  6. Format CompactFlash card

  7. Start-over

I’m a relatively paranoid person - the RAW files are the only ‘negatives’ I have. Although I do back up JPEG copies of the pictures to Flickr (which is brilliant now there’s no upload limit), my only full backup is the Lacie hard drive. I’ve not had many hard drives fail on me, but when a drive in my desktop machine’s striped RAID setup died, everything on it was gone. If I were to lose the Lacie hard drive I’d be in real trouble.

I also try to keep the Lacie hard-drive and my PowerBook in different locations. Should either be stolen or damaged through fire etc., I still have the other.

But, that still doesn’t satisfy my paranoia. So, I’m going to try and add some online backup magic to the mix.

PhotoShelter

My first thought was to go with one of the professional photo archive sites such as PhotoShelter. They provide a free Aperture plug-in to export directly to the site, support RAW conversion (including the NEF files generated by my Nikon D200 - brilliant!), and are pretty reasonably priced.

But, their terms of use seemed to imply they weren’t intended to be used as an archival service:

YOU ARE SOLELY RESPONSIBLE FOR CREATING BACK-UPS OF YOUR POSTED CONTENT

Plus, I never quite got to grips with their user interface, it rejected a couple of NEFs I uploaded, and more importantly, I’d prefer to just have a secondary backup of my Aperture vault, rather than individual images.

Bingodisk

Bingodisk is provided by the folks at Joyent (who now own TextDrive). It’s a simple service providing good chunks of storage over WebDAV for pretty cheap, with a limited (but big) bandwidth allowance.

Knowing I could mount WebDAV in OS X pretty easily, and having read how it’s possible to trick Aperture into backing up onto a network drive, I figured it’d be worth a shot. Unfortunately, WebDAV doesn’t seem to be a particularly efficient way to work with Aperture - even on a pretty hefty broadband connection it’s still very slow. So much so that I gave up and went looking for something better.

Strongspace

Strongspace seems ideal in almost every way except price. It provides SFTP/rsync support, a nice little browser to view your files and, on top of that, unlimited transfer.

Rsync seems a pretty natural fit, and doesn’t require me to think about or change much of my process: just back up the vault when I want to, sending only the changes. All I need to do is run

rsync -rltvz /Volumes/LACIE/Lacie\ Vault oobaloo@oobaloo.strongspace.com:/home/oobaloo/

My current Aperture vault weighs in at around 16GB and the rsync is currently uploading at around 150KB/s - roughly 16,000,000KB at 150KB/s is a little over 100,000 seconds, which I make around 29 hours of uploading left. But since it’ll only send the changes in future, it shouldn’t always be quite so painful.

Incidentally, does anyone else have good experiences with backup sites?

Mephisto Flickr Update

Thanks to Nathaniel Brown for pointing this out.

A few days ago Flickr seemed to change the URL scheme for images served inside their RSS feeds. The Mephisto plugin I wrote includes the Flickr aggregation library from Typo, which does some regular expression parsing to determine the URLs for the other image sizes - and because Flickr changed their URL scheme, it broke.

Fortunately, the fix is pretty straightforward. Of course, first the test to show the new behaviour of Flickr:

def test_should_generate_correct_address_for_each_image_size
  pic = FlickrAggregation::Picture.new(
    :title => 'test',
    :description => 'http://farm1.static.flickr.com/my_image_m.jpg'
  )
  assert_equal 'http://farm1.static.flickr.com/my_image_m.jpg', pic.image
  assert_equal 'http://farm1.static.flickr.com/my_image_d.jpg', pic.medium
end

Running the test shows that we get an error - the regular expression fails to pick up the URL correctly:

NoMethodError: You have a nil object when you didn’t expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.first

So, we change our aggregation library to fix it as:

def image
  description.scan( /(http:\/\/.*(static|photos).*?\.jpg)/ ).first.first
end

Run the test again and we’re green.

I’ve committed the changes into the Subversion repository. So feel free to get the update!

Incidentally, I wonder whether this is related to Ezra Zygmuntowicz’s post about speeding up page loads by serving images from CNAMEs. Maybe it’s just a coincidence.

Double Christmas Bonus

I noticed the other day that Flickr are now offering unlimited uploads for Pro accounts - previously it was set to 2 gigabytes. Not sure when that happened, but I only just noticed.

Not only that, but I’ve got a lovely shiny new Nikon D200 to fill it with! So far I’m pretty darn thrilled - the couple of photos I’ve taken with it seem to have a real, natural quality to them.

And since I’ve now got unlimited uploads, I can upload away to my heart’s content! Just need to start taking more pictures.

Incidentally, I also purchased a license for FlickrExport (for Aperture) - I can now export to Flickr directly from Aperture! Oh, and I had a few problems initially because the full-resolution photos I was uploading were each over 10MB - Flickr’s per-file limit. Reducing the JPEG quality by one stop in my export settings did the trick.

Parsing Parameters for Liquid Blocks

One of the first things I noted when I posted (some would say borderline complained) about my first encounters with Liquid - the templating engine used in Mephisto and a few other apps - was the way it didn’t seem to abstract how parameters were set on a block.

Well, it turns out it’s not quite as nasty as I first expected - you don’t have to write all the regular expressions yourself. Inside liquid.rb a number of constants are defined, and a lot of them are there to capture different aspects of a tag.

One such regular expression is TagAttributes, so we can write a test like this:

def test_should_use_parameterised_url_for_feed
  template = Liquid::Template.parse(
    "{% flickrphotostream feed: http://blah/test.xml %}
    {% endflickrphotostream %}")
  first_block = template.root.nodelist.first

  assert_equal 'http://blah/test.xml', first_block.feed_url
end

first_block is the first block we encounter in our template - the Flickr photo stream one. Inside our class we can then build up an attributes hash from the tag’s markup, using the TagAttributes regular expression as follows:

markup.scan(Liquid::TagAttributes) do |key, value|
  @attributes[key.to_sym] = value
end

Hey presto, we’re done.
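
For completeness, here’s a minimal sketch of how that scan might sit inside the block itself - the feed_url reader is my guess based on the test above, not necessarily the plugin’s final code:

class FlickrPhotoStream < Liquid::Block
  def initialize(tag_name, markup, tokens)
    @attributes = {}
    # Pull key: value pairs out of the tag markup using Liquid's own regular expression
    markup.scan(Liquid::TagAttributes) do |key, value|
      @attributes[key.to_sym] = value
    end
    super
  end

  # Assumed accessor: the test expects feed_url to return the feed: attribute
  def feed_url
    @attributes[:feed]
  end
end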

Now, I’m not sure I like this too much - it doesn’t feel very Rails-like. Too much dealing with regular expressions for my liking.

So, I’m going to continue refactoring my solution (please don’t read too much into the tests or code I’ve written so far, it’s very much a first stab over the course of a little hacking). As part of that, I’d definitely like to see if I can Rails-ActiveRecord-it-up a tad, a la:

class FlickrPhotoStream < Liquid::Block
  attr :feed_url, :count
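
One purely speculative way to get there would be a small class-level macro that declares which tag attributes a block cares about and generates the readers - something along these lines, with all the names hypothetical:

module LiquidAttributes
  # Defines a reader per declared attribute, pulling its value from the parsed tag attributes
  def liquid_attributes(*names)
    names.each do |name|
      define_method(name) { @attributes[name] }
    end
  end
end

class FlickrPhotoStream < Liquid::Block
  extend LiquidAttributes
  liquid_attributes :feed, :count
end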

Maybe I’ll give that a go tomorrow.

Using, Not Just Trying Something (and Recruitment)

Jason Fried of 37signals just published the beginnings of a post he was writing about the difference between using and trying something.

It’s why agile lovers (such as myself) are so keen on the process. It’s not just that it’s a nicer way to work: everything is focused on feedback - from writing code test-first and using that feedback cycle to hone how you develop and design your code, to delivering early and often to customers to tune what you’re building.

It’s all very well just looking at screenshots, but it’s not until you actually use something that you really see it. It’s why I’m loving seeing lo-fi prototypes in how we work - stickies on acetates that let you walk through the UI. Ultra-simplistic but uber-effective. In Jason’s words:

You don’t notice the quirks and shortcuts when you try something. Those revelations only come from real use. Eye candy shines during trial, but fades fast during use. Cool wears off quick, usefulness never does.

The last couple of weeks I was happy to be back in the ThoughtWorks office in London, working on various little utilities (and the Mephisto plugin). I was also fortunate enough to help out with some of our UK graduate recruitment.

As with Jason’s point above, it’s only when you sit and work with someone that you get an idea of what they’re like. That goes for the interviewer, but also the interviewee - as the person being invited to interview, you get a real insight into the types of people you’d be working with.

After all, it’s all very well setting tests and asking difficult questions (which we also do), but actually working with someone is a very different (and revealing) thing. And, just as importantly, it’s very refreshing.

Trying Out MediaTemple

At present, I primarily use a Virtual Private Server from RimuHosting.com. They’re very affordable and have a great reputation for support, and I’ve been with them for well over a year with a brilliant experience so far. But a couple of things sprang up that interested me, and I couldn’t resist.

Firstly, MediaTemple announced their GridServer. Ignoring the academic debate over what does or doesn’t constitute grid computing, it’s a reasonably interesting offering - at least from a Rails point of view. Ben Rockwood (of Joyent/TextDrive fame) posted about their setup a while ago; to paraphrase, it’s essentially lots of nodes fronting onto NFS, load balanced.

And it’s pretty easy to use, with their own commands for creating ‘containers’ - which configure a Mongrel cluster and the Apache rewrite rules - as well as for starting and stopping containers and so on.

But, going back to what Ben mentioned: while multiple nodes backed by NFS is fine for static content and PHP, and even works quite well, it’s a little less suited to Rails apps. A Rails app has to be up and running in memory, inside a long-running container, and so is at least bound to a logical node. I have no idea how those containers actually map onto grid nodes; it would be interesting to see how they’re distributed across the ‘Grid’.

Now, take this a step further (specifically with regard to the software I’m using - Mephisto) and this kind of configuration becomes interesting, because an up-and-running Rails container isn’t always essential: most of the time the content stays mostly static (I know - I’m a bad poster) and Rails’ page caching generates HTML that is served directly by Apache (which sits in front of Mongrel, in MediaTemple’s case). Consequently, any node on the grid that receives a request can serve the content straight from disk.
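
For anyone who hasn’t seen it, Rails page caching is just a controller-level declaration. This is a generic illustration rather than Mephisto’s actual code - the controller and action names are made up:

class ArticlesController < ApplicationController
  # Writes the rendered HTML out to public/, so the front-end web server can
  # serve subsequent requests without touching the Rails processes at all.
  caches_page :index, :show
end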

Theoretically, therefore, assuming traffic is predominantly skewed towards reading largely static content via Mephisto, MediaTemple’s Grid is a good offering. Or have I misunderstood something?

Of course, for other kinds of usage it’s potentially not so great - it depends on how well the containers are spread and can take advantage of the available resources. And without knowing precisely how the grid routes requests to Rails containers, I’m wary. Coupled with the fact that MySQL server problems have been rife, as well as more serious problems with the whole grid disappearing, it’s perhaps a little too early to be sure. But updates are on the way next week, so we’ll see.

Even more interesting, I also read that TextDrive are updating their shared hosting sometime in the future to move along similar lines to their Accelerator product - a little closer to an isolated system, but with some shared resources (like database and email). It sounds very interesting; I can’t wait to take a look when it launches and see what deploying Rails apps on it looks like!

Flickr Plugin for Mephisto

I spent some time the other day putting together a plugin for Mephisto that would let me display a small selection of photos from my Flickr feed - you’ll see the results on the right side if you’re looking at this through my site.

Well, the first cut is done and seems to be working fine, with a good response inside the Mephisto Google Group.

To install, you’ll need to change to your Mephisto install’s directory and run

$ script/plugin install http://www.engross.org/svn/mephisto_plugins/mephisto_flickr_photo_stream/trunk/

then inside your template you’ll be able to use the following:

{% flickrphotostream feed: http://api.flickr.com/services/feeds/photos_public.gne?id=57966634@N00&format=rss_200 count: 6 %}
  {{pic.title}}
{% endflickrphotostream %}

I’m working on writing up some of the things I came across whilst I was working on it. I wasn’t happy with a few things (most importantly, the tests), and I’ve already started tinkering with the code again to improve it a little.

But, if you’re running Mephisto and want some Flickr goodness feel free to give it a try!