Meta-programming with instance_exec

Rails makes heavy use of a declarative style across its codebase: for example, the has_one and belongs_to declarations inside ActiveRecord (amongst others). These are just class methods defined on modules, letting Rails wire up relationships, but they read like fully-fledged statements within a mini-language:

class Student < ActiveRecord::Base
  has_many :tutorials
end

We can take advantage of the same in our own code, letting us write in a declarative style (hopefully revealing stronger intent) and reducing the amount of code we write (by using declarations to meta-program for us). I posted a little while ago that we’d used such an approach for marking attributes as immutable.

Anyway, back to the title of the post: #instance_exec. It’s a method defined in Ruby Facets. As its documentation says, it’s equivalent to instance_eval but also lets you pass parameters. Roll on the meta magic.

Let’s say we’re writing a system to calculate the monthly salary payment for an employee. We want to be able to say what the payment is rather than how it’s calculated.

class FullTimeEmployee
  include Employee

  bill_at {|hours| 50.pounds_sterling * hours}
end

In the snippet above, we’re making a declaration: defining the relationship between an hourly rate and the number of hours the employee works. We can implement bill_at in Employee as follows:

module Employee
  module ClassMethods
    def bill_at(&block)
      define_method(:calculate_bill) do |hours|
        instance_exec hours, &block
      end
  ...

Our assertion could be:

assert_equal 250.pounds_sterling, employee.calculate_bill(5.hours)
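For completeness, here’s a minimal runnable sketch of how the pieces could hang together. The self.included hook is the conventional wiring the snippet above elides, and plain integers stand in for the pounds_sterling and hours helpers (which the original doesn’t show):

module Employee
  def self.included(base)
    base.extend(ClassMethods)  # assumed wiring so bill_at appears as a class method
  end

  module ClassMethods
    def bill_at(&block)
      define_method(:calculate_bill) do |hours|
        instance_exec(hours, &block)  # run the declaration's block against this instance
      end
    end
  end
end

class FullTimeEmployee
  include Employee
  bill_at { |hours| 50 * hours }  # plain integers in place of 50.pounds_sterling
end

puts FullTimeEmployee.new.calculate_bill(5)  # => 250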

DTrace in Leopard with Ruby probes

I’d read a while back that Apple were going to include Sun’s DTrace tool in the newly released Leopard, underpinning Instruments (formerly known as Xray).

What I hadn’t noticed was that the build of Ruby 1.8.6 included in Leopard (patchlevel 36 with some extras) also includes Ruby probes from the lovely folks at Joyent. Apple have ported everything. So, out of the box, Ruby works with DTrace! Sweet!

From Sun’s page:

DTrace is a comprehensive dynamic tracing facility that is built into Solaris and can be used by administrators and developers to examine the behavior of both user programs and of the operating system itself. With DTrace you can explore your system to understand how it works, track down performance problems across many layers of software, or locate the cause of aberrant behavior.

Joyent have some sample scripts in their Subversion repository, and what look like some nice extensions for Rails (DTrace probes can be fired from within Ruby). Blimey!

For those running Leopard who just want to see something, try this out. It’s a script that traces how many times each method is called (and how long the calls take). Once downloaded, all you need to do is

$ chmod +x functime.d
$ sudo ./functime.d -p <pid>

replacing <pid> with the process id of your target Ruby process. Once you stop tracing (or your target process ends), you’ll see some nicely formatted text output, including something like the following (from my current project):

Hash                      key?                      25123      15    384174
Module                    each_object                   1  390824    390824
Module                    included_in_classes           1  391025    391025
Module                    reloadable_classes_with       1  391203    391203
Module                    reloadable_classes            1  391835    391835
ActionController::Routin  load_routes!                  1  394864    394864
ActionController::Routin  reload                        1  398364    398364
Class                     reset_application!            1  398947    398947
Class                     reset_after_dispatch          1  400474    400474
ActionController::Routin  connect                      39   13192    514506
Array                     include?                   3353     166    556605
ActionController::Routin  build                        47   13116    616459
ActionController::Routin  add_route                    47   13329    626488
Array                     each                        874     780    682423
Object                    load                          2  376432    752864
Module                    silence                       2  391586    783172
Kernel                    gem_original_require         28   43460   1216884
Array                     select                      316    4106   1297580
Array                     collect                     351    4124   1447524
Module                    local_constants              91   17120   1557986
Module                    new_constants_in             25   93885   2347139
Object                    require                      50   58495   2924783

The columns (from left to right) are: class, method, count (the number of times the method was called), and then the average and total microseconds. The D script was one of the Joyent examples. Ars Technica also have a very cool example showing the call stack from a system call.
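If you’d rather see how little D it takes, here’s a minimal sketch of my own (not the Joyent functime.d) that just counts calls. Leopard’s Ruby fires the ruby provider’s function-entry probe on every method call, with the class and method names in arg0 and arg1:

#!/usr/sbin/dtrace -s
/*
 * A sketch, not functime.d: count Ruby method calls by class and method.
 * Run it like the script above: sudo ./calls.d -p <pid>, then Ctrl-C.
 */
ruby$target:::function-entry
{
    @calls[copyinstr(arg0), copyinstr(arg1)] = count();
}

The aggregation prints automatically when you stop tracing.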

What’s more amazing is that this doesn’t require any recompilation, special flags, or special environments to run in. It’s available from the off!

If all that’s whetted your appetite (and I’m not sure how it couldn’t), head on over to the DTrace how-to for a little more explanation of how it all works.

Time to get stuck in and see whether anything emerges about my current project. Very cool stuff!

New Repository URL for Mephisto Flickr Plugin

I’ve recently had a few people email me about problems with the Mephisto Flickr plugin I wrote: Liquid (the templating engine used by Mephisto) had changed the interface for the initializer.

I thought I’d fixed it a few times, and indeed I had. Just in the wrong repository! Sorry. I guess I’ve still not quite managed to get the hang of aliased virtual hosts on TextDrive shared hosting.

So instead, you should do the following:

$ script/plugin install http://svn.oobaloo.co.uk/svn/mephisto_plugins/mephisto_flickr_photo_stream/trunk \
    && mv vendor/plugins/trunk vendor/plugins/mephisto_flickr_photo_stream

Note that it’s now reading from svn.oobaloo.co.uk. Sorry for the confusion. I’ll try and figure out how to get engross.org pointing to the right location in the meantime.

Thanks again to everyone who’d emailed in with the fix, and sorry again.

Immutable ActiveRecord Attributes

The test:

def test_cannot_change_country_name_once_constructed
  country = Country.new(:name => 'UK')
  assert_raise RuntimeError do
    country.name = 'USA'
  end
end

The class:

class Country < ActiveRecord::Base
  extend ImmutableModel

  immutable :name
  ...

The implementation:

module ImmutableModel
  def immutable(attr_name)
    define_method "#{attr_name}=" do |new_value|
      if read_attribute(attr_name).nil?
        write_attribute(attr_name, new_value)
      else
        raise RuntimeError, "You can't change #{attr_name} once it has been set"
      end
    end
  end
end
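Outside ActiveRecord, the same trick works against plain instance variables. Here’s a minimal stand-alone sketch of my own (not from the original), with read_attribute/write_attribute swapped for instance variables so you can run it without a database:

module ImmutableModel
  def immutable(attr_name)
    attr_reader attr_name

    # Only allow the attribute to be written while it is still nil
    define_method("#{attr_name}=") do |new_value|
      unless instance_variable_get("@#{attr_name}").nil?
        raise RuntimeError, "You can't change #{attr_name} once it has been set"
      end
      instance_variable_set("@#{attr_name}", new_value)
    end
  end
end

class Country
  extend ImmutableModel
  immutable :name
end

country = Country.new
country.name = 'UK'   # first assignment succeeds
country.name = 'USA'  # raises RuntimeError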

The joys of a wide-angle lens

On a recent holiday to Brussels I got to try out my new-ish Tokina 12-24mm f/4 lens. I’d managed to get a few pictures I really enjoyed before now, but I’m so glad I had it to hand in Brussels.

After a great evening out, walking back through the Grand Place to our hotel, I wanted to grab a night-time shot. In the past I’ve never quite managed to land one, even with a tripod to hand, so I wasn’t hoping for much. But leaning against some steps, hand-holding and firing a few shots in succession, I took something I was really happy with. At ISO 500, f/4 and only 1/10s.

I was stoked! To anyone sitting on the fence of going wide-angle: go get one!

Testing Rails' page caching in RSpec

In preparation for a Ruby project, I wanted to play around with Rails’ page caching a little over the weekend. Naturally, I wanted a way to ensure that my controller was writing files correctly so that Apache (or any other HTTP server) could then serve the static content directly (without hitting my Rails processes). Since I’m also digging RSpec at the moment, I wanted to find a way of expressing it nicely in my specifications. This is what I ended up with.

Here’s the relevant controller’s specification:

describe UsersController, "caching" do
  include CachingExampleHelper

  it "should cache new users page" do
    requesting { get :new }.should be_cached
  end

  it "should not cache the activation page" do
    requesting { get :activate, :activation_code => 'blah' }.should_not be_cached
  end
end

The key part is the requesting line. That’s my extension that encapsulates the request that should be cached, so that we can both make the request and check that it ended up with a file being written to the filesystem.

module CachingExampleHelper
  ActionController::Base.public_class_method :page_cache_path
  ActionController::Base.perform_caching = true

  def requesting(&request)
    url = request.call.request.path
    ActionController::Base.expire_page(url)
    request.call
  end

  module ResponseHelper
    def cached?
      File.exists? ActionController::Base.page_cache_path(request.path)
    end
  end

  ActionController::TestResponse.send(:include, ResponseHelper)
end

Note that we have to expose the page_cache_path method so we can calculate where the cached files will get written, and we also have to explicitly enable caching. When we call requesting with a block containing the request we want to make (such as get :index to invoke the index action), we capture the block as a Proc, invoke it, and read the request’s path from the generated TestResponse object. That lets us first expire the page (in case it was cached previously) and then make the request again. Finally, we check whether the cached file exists via the cached? method mixed in to the response.

Unfortunately, the code above makes two requests per assertion. But I’d rather have it read nicely for the time being. Any suggestions on how to tidy this up further would be well received!

I’m not sure I’d want to run these tests all the time; maybe something a little lighter to ensure that we tell Rails to caches_page for the relevant actions in our controllers (a sketch of one option follows below). But as a definite reassurance it seems ok.
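For what it’s worth, a lighter-weight option might be to set a message expectation on the cache_page call that the caches_page filter makes under the hood, rather than touching the filesystem. An untested sketch, assuming RSpec’s mocking and that perform_caching was enabled when the controller class was loaded:

describe UsersController, "caching declarations" do
  it "should tell Rails to cache the new page" do
    # expect the caches_page after_filter to fire, without writing any files
    controller.should_receive(:cache_page)
    get :new
  end
end

This only checks that caching was requested, not that a file actually appears on disk, which is exactly the trade-off.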

C# Anonymous Delegates, MVP, and testing with NUnitForms

I’m not sure why, but I get the feeling that anonymous delegates in C# 2.0 haven’t really had much press, and people aren’t really that aware of them.

Essentially, they let you declare a delegate inline, a la:

public delegate void MyDelegate();

private void DoIt(MyDelegate codeBlock)
{
    codeBlock();
}

public void AddInstrumentToPrice(string message)
{
    DoIt(delegate { Console.WriteLine(message); });
}

Closures!

On my current project we’ve found some great uses for them, in particular once we’d started refactoring some GUI code into the Model-View-Presenter pattern as described in Michael Feathers’ Humble Dialog paper (pdf). This tidied a lot of the code up nicely, keeping the GUI code lean and allowing us to focus a little more on improving the behaviour of the UI.

The Windows Forms UI needed to show a progress bar indicating how far it was through pricing all of the selected trades. But with our pricing occurring on the same thread, our UI was locking. So you push the processing out into a worker thread and then you can update both, right? Wrong: .NET will throw an exception should you attempt to update a control from a thread other than the one that created it, because Windows itself doesn’t like it when you do.

The Control class includes an Invoke method that executes a Delegate on the UI thread. So, to ensure our UI updates execute on the UI thread, we can write the following:

public void calculateButton_Click(object sender, MouseEventArgs e)
{
    // Invoke takes a System.Delegate, so the anonymous method needs a concrete
    // delegate type -- hence the MethodInvoker cast
    this.Invoke((MethodInvoker) delegate { resultsList.Items.Add("a result!"); });
}

Testing With NUnitForms

Once we had that written, we had an additional problem: we now had code executing on different threads, making it difficult to test through the GUI. Sure, we had a lot of tests that drove out the implementation of the Presenter, but it’s always nice to know that the GUI is behaving and that you’re driving something end to end.

Our solution: use the Strategy pattern to extract the behaviour around how code is executed in the GUI. We use an anonymous method/delegate to pass a code block to a class that either executes it on the same thread (for our tests) or hands it to ThreadPool.QueueUserWorkItem.

So our end-to-end NUnitForms test might look something like

[SetUp]
public void Initialisation()
{
    form = new SampleForm(new SameThreadWorker());
}

class SameThreadWorker : IWorker
{
    public void Do(WaitCallback block)
    {
        block(null);  // execute immediately on the calling (test) thread
    }
}

[Test]
public void ShouldShowResultInListAfterCalculating()
{
    ClickButton("calculateResults");
    Assert.IsTrue(ListContains("resultsList", "My Result"));
}

Which is handled inside the form by

public void calculateButton_Click(object sender, MouseEventArgs e)
{
    worker.Do(delegate { resultsList.Items.Add("a result!"); });
}

And our default worker for the GUI? Something a little like

class UserWorkItemWorker : IWorker
{
    public void Do(WaitCallback block)
    {
        ThreadPool.QueueUserWorkItem(block);
    }
}
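As an aside, the IWorker interface itself never appears above; judging by the two implementations it’s presumably just a one-method interface, something like this reconstruction of mine:

// Reconstructed from the two implementations above; not shown in the original.
using System.Threading;

public interface IWorker
{
    void Do(WaitCallback block);
}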

Probably not the best way of doing it, but certainly better than most alternatives I could think of, and all thanks to anonymous delegates.

Are there rules in Software?

I was at lunch today with a few other ThoughtWorks folk. At the end of the meal Chris, George, and I were talking when George mentioned he’d been pondering some stuff with Mocking and Stubbing. George was questioning a suggestion that Mocks should not be used as a way of testing the edges of a system.

I replied along the same lines as my Extract Client Interface post (perhaps a little less eloquently), mentioning that writing code with mocks encourages you to think about roles and interactions with collaborating objects first, rather than getting buried under the weight of implementing everything in the world. And, perhaps more importantly, if you depend on something else, you’ve discovered an interface for what the client really needs.

If you do this for the edge of your system, you’ll end up discovering an interface that will get implemented with a facade or adapter that lets your code talk to the external system. But how do you test that? Should you never write an interaction-based test for that?

On our way out of the restaurant we carried on talking about these kinds of rules in software development: that you should never mock things you don’t own, conditionals are bad, regions are evil. Although there’s value in those statements, what’s more important is that people think about them. It’s like the Agile Manifesto: nobody says never write documentation, just that working software is valued more.

And, although I neglected to mention it at the time, it occurs to me that a recent experience on my current team is actually quite applicable to what we were talking about.

At present, some of the code we’re working on integrates with a C++ library provided by another department. Fortunately, it exposes a couple of functions that let us dump its internal objects into an XML document. So, to test that we interact with the library in the correct way (across multiple function calls; state is maintained in a ‘cache’ behind everything), we assert on bits of the XML.

We’re essentially asserting on the internal state of an external library to ensure the correct interaction, and when we updated to a new build of the library, our build broke rather predictably. We spent an hour or two fixing up the tests when the schema of the XML changed. We’d depended on an external interface that we didn’t own, and paid the price for testing our integration this way when the external system changed.

But this same approach to testing (i.e. testing interaction rather than state) has also let us (to a degree) focus on implementing what’s important, and drive out what was previously considered a complex, black-magic development effort into something more understandable and controllable. It’s by no means perfect, but as a small step on the way to development nirvana, it’ll do.

We might have done something perhaps a little evil, but the techniques and tools we use to discover better ways of writing code have helped us in a situation the rule would otherwise have prevented. Sure, we’ve had problems because we’ve been depending on things we don’t own, but that’s a small price for the benefit we’ve gained: a clear step as we divide and conquer our way through.

Rules are important for providing insight into good ways of working. But it’s always important to think and act intelligently. Sometimes things people say are a little over the top, but they put questions in your head that challenge assumptions. It’s no good just looking for the next shiny pattern in a new book, or putting index cards on a wall. Those things may help, but they’re not an end, they’re a means. The key is intelligence and people (as has already been mentioned) continually adapting, learning, and improving.

There’s no such thing as a rule to rule them all. Well, except maybe that rule :)

New MacBook Pro

I’ve had a 15” PowerBook G4 for about 3 years now. It’s served me well, but it was proving just too slow to run Aperture and Photoshop, something I was increasingly using it for as I got more and more into (digital) photography.

I could no longer resist the MacBook Pros following Apple’s recent updates to 2.4GHz cores and machines capable of taking 4GB of RAM. So I popped into the Apple store and picked up a shiny new 17” MacBook Pro. I know I’ve mentioned that I’d never dream of carrying around such a behemoth, and that I was looking forward to seeing whether rumours of an ultra-compact MacBook Pro materialised, but I’m absolutely smitten.

Performance-wise, it absolutely leaves the old G4 for dead. Loading Aperture takes a split second (no kidding), compared to a good 20-30 seconds on the G4. Editing images used to be more of a slog, requiring a good deal of patience: make an adjustment, wait for the rendering, tweak it back, wait for the rendering. It’s almost instant now. Plus, with the 17” screen I can fit everything I need on screen, and that’s without going for the HD option (that was a little too extreme).

The machine is a little larger than the 15”, but not hugely, and it feels around the same weight as the old G4! The screen is large, sharp, and way brighter than the G4’s. Smitten, I say.

Finally, I also got Parallels running so I can do .NET work on it too (and on a large display). What really impressed me was that you can just tell it to go with an express install and it runs through an unattended install for you; no need to sit and wait for it all to happen, away you go.

To top it all off, I also bought an Apple AirPort Extreme, so the 320GB LaCie external drive I’d been using for the odd backup is now shared over the network and I no longer have to keep it attached. Backing up to the vault from Aperture works exactly as if it were directly attached. Sweet. It’s a little slower, but so much more convenient. And should I need more storage, I can just stick another drive onto a USB hub and away you go.

Just love it all when it works together!