C# Anonymous Delegates, MVP, and testing with NUnitForms

I’m not sure why, but I get the feeling that anonymous delegates in C# 2.0 haven’t had much press, and that people aren’t really aware of them.

Essentially, the feature lets you declare a delegate inline, a la:

public delegate void MyDelegate();

...

private void DoIt(MyDelegate codeBlock)
{
    codeBlock();
}

public void AddInstrumentToPrice(string message)
{
    DoIt(delegate { Console.WriteLine(message); });
}

Closures!
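
That one-word aside is worth unpacking: the anonymous method above captures the message parameter from the enclosing method, which is what makes it a closure. Here’s a minimal sketch of the capture behaviour (the names are just for illustration):

// The anonymous method closes over the local variable 'count', so both
// calls mutate the same captured variable rather than a copy.
int count = 0;
MyDelegate increment = delegate { count++; };
increment();
increment();
Console.WriteLine(count); // prints 2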

On my current project we’ve found some great use for them, in particular once we’d started refactoring some GUI code into the Model-View-Presenter pattern as described in Michael Feathers’ Humble Dialog paper (pdf). This tidied a lot of the code up nicely, keeping the GUI code focused and freeing us to concentrate on improving the behaviour of the UI.

The Windows Forms UI needed to show a progress bar indicating how far it was through pricing all of the selected trades. But, with the pricing running on the UI thread, our UI was locking up. So, you push the processing out into a worker thread and then update the UI from there, right? Wrong: .NET will throw an exception should you attempt to do this, because a Windows Forms control may only be updated from the thread that created it.
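
For illustration, here’s a minimal sketch of the kind of code that trips this up (the handler and control names are borrowed from the examples later in this post):

private void calculateButton_Click(object sender, EventArgs e)
{
    ThreadPool.QueueUserWorkItem(delegate
    {
        // On .NET 2.0 this throws an InvalidOperationException: the ListBox
        // was created on the UI thread and may only be touched from there.
        resultsList.Items.Add("a result!");
    });
}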

The Control class includes an Invoke method that executes a Delegate on the UI thread. So, to ensure our UI updates execute on the UI thread, we can write the following (note the cast, which gives the compiler a concrete delegate type to convert the anonymous method to):

private void calculateButton_Click(object sender, EventArgs e)
{
    // Invoke marshals the call onto the UI thread.
    this.Invoke((MethodInvoker)delegate { resultsList.Items.Add("a result!"); });
}

Testing With NUnitForms

Once we had that written, we hit an additional problem: we now had code executing on different threads, making it difficult to test through the GUI. Sure, we had a lot of tests that drove out the implementation of the Presenter, but it’s always nice to know that the GUI is behaving and that you’re driving something end to end.

Our solution: use the Strategy pattern to extract the behaviour around how code is executed in the GUI. We use an anonymous method to pass a code block to a class that either executes it on the same thread (for our tests), or hands it to ThreadPool.QueueUserWorkItem.
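
The snippets below don’t show the interface itself, but given the Do method they use, it would look something along these lines (a sketch; only the declaration is inferred):

using System.Threading;

public interface IWorker
{
    // Executes the block either synchronously (in tests) or on a
    // background thread (in the real GUI), depending on the implementation.
    void Do(WaitCallback block);
}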

So our end-to-end NUnitForms test might look something like

[SetUp]
public void Initialisation()
{
    form = new SampleForm(new SameThreadWorker());
}

class SameThreadWorker : IWorker
{
    public void Do(WaitCallback block)
    {
        block(null);
    }
}

[Test]
public void ShouldShowResultInListAfterCalculating()
{
    ClickButton("calculateResults");
    Assert.IsTrue(ListContains("resultsList", "My Result"));
}

Which is handled inside the form by

private void calculateButton_Click(object sender, EventArgs e)
{
    worker.Do(delegate { resultsList.Items.Add("a result!"); });
}

And our default worker for the GUI? Something a little like

class UserWorkItemWorker : IWorker
{
    public void Do(WaitCallback block)
    {
        ThreadPool.QueueUserWorkItem(block);
    }
}
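
Presumably (mirroring how the test fixture above wires things up) the production form is constructed with a UserWorkItemWorker rather than a SameThreadWorker, so the very same handler code ends up queued onto the thread pool.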

Probably not the best way of doing it, but certainly better than most I could think of, and all thanks to anonymous delegates.

Are there rules in Software?

I was at lunch today with a few other ThoughtWorks folk. At the end of the meal Chris, George, and I were talking when George mentioned he’d been pondering some stuff with Mocking and Stubbing. George was questioning a suggestion that Mocks should not be used as a way of testing the edges of a system.

I replied along the same lines as my Extract Client Interface post (perhaps a little less eloquently), where I mentioned that writing code with Mocks encourages you to think about roles and interactions with collaborating objects first, rather than getting buried under the weight of implementing everything in the world. And, perhaps more importantly, if you depend on something else, you’ve discovered an interface for what the client really needs.

If you do this for the edge of your system, you’ll end up discovering an interface that will get implemented with a facade or adapter that lets your code talk to the external system. But how do you test that? Should you never write an interaction-based test for that?

On our way out of the restaurant we carried on talking about these kinds of rules in software development: never mock things you don’t own, conditionals are bad, regions are evil. Although there’s value in the statements, what’s more important is that people think about them. It’s like the Agile Manifesto: nobody says never write documentation, just that working software is valued more.

And, although I neglected to mention it at the time, it occurs to me that a recent experience on my current team is actually quite applicable to what we were talking about.

At present some of the code we’re working on integrates with a C++ library provided by another department. Fortunately, it exposes a couple of functions that let us dump its internal objects into an XML document. So, to test that we interact with the library in the correct way (across multiple function calls - state is maintained in a ‘cache’ behind everything), we assert on bits of the XML.

We’re essentially asserting on the internal state of an external library to ensure the correct interaction, and when we updated to a new build of the library our build broke, rather predictably. When the schema of the XML changed, we had an hour or two of fixing up the tests. We’d depended on an external interface that we didn’t own, and paid the price for testing our integration this way when the external system changed.

But this same approach to testing (i.e. testing interaction rather than state) has also let us (to a degree) focus on implementing what’s important, and turn what was previously considered a complex, black-magic development effort into something more understandable and controllable. It’s by no means perfect, but as a small step on the way to development nirvana, it’ll do.

We might have done something perhaps a little evil, but the techniques and tools we use to discover better ways of writing code have helped us in a situation where the rule would otherwise have held us back. Sure, we’ve had problems because we’ve been depending on things we don’t own, but that’s a small price for the benefit we’ve gained: a clear step as we divide and conquer our way through.

Rules are important for providing insight into good ways of working. But it’s always important to think and act intelligently. Sometimes the things people say are a little over the top, but they put questions in your head that challenge assumptions. It’s no good just looking for the next shiny pattern in a new book, or putting index cards on a wall. Those things may help, but they’re not an end, they’re a means. The key is intelligence: people (as has already been mentioned) continually adapting, learning, and improving.

There’s no such thing as a rule to rule them all. Well, except maybe that rule :)

New MacBook Pro

I’ve had a 15” PowerBook G4 for about three years now. It’s served me well, but it was proving just too slow to run Aperture and Photoshop, something I was increasingly using it for as I got more and more into (digital) photography.

I could no longer resist the MacBook Pros following Apple’s recent update to 2.4GHz cores and machines capable of taking 4GB of RAM. So I popped into the Apple store and picked up a shiny new 17” MacBook Pro. I know I’ve mentioned that I’d never dream of carrying around such a behemoth, and that I was looking forward to seeing whether rumours of an ultra-compact MacBook Pro materialised, but I’m absolutely smitten.

Performance-wise, it absolutely leaves the old G4 for dead. Loading Aperture takes a split second (no kidding), compared to a good 20-30 seconds on the G4. Editing images used to be a slog, requiring a good deal of patience: make an adjustment, wait for the rendering, tweak it back, wait for the rendering. It’s almost instant now. Plus, with the 17” screen I can fit everything I need on screen, and that’s without going for the HD option (that was a little too extreme).

The machine is a little larger than the 15”, but not hugely, and it feels around the same weight as the old G4! The screen is large, sharp, and way brighter than the G4’s. Smitten, I say.

Finally, I also got Parallels running so I can do .NET work on it too (and on a large display). What really impressed me was how you can just tell it to go with an express install and it runs through an unattended install for you - no need to sit and wait for it all to happen, away you go.

To top it all off, I also bought an Apple AirPort Extreme, so the 320GB LaCie external drive I’d been using for the odd backup is now shared over the network and I no longer have to keep it attached. Backing up to the vault from Aperture works exactly as if it were directly attached. Sweet. It’s a little slower, but so much more convenient. And should I need more storage, I can just stick another drive onto a USB hub and away I go.

Just love it all when it works together!

Visual Studio Regions are Evil

I’m yet to see the point of them. For those who aren’t familiar with them, regions are a preprocessor directive that means inside your C# code you can write:

#region Properties
public String FirstName { get { return "Paul"; } }
public String LastName { get { return "Ingles"; } }
#endregion

Then, inside the Visual Studio editor, you can expand or collapse whole regions of code.

In principle it’s the same as turning away and not looking directly at the big smelly pile of stuff, covering it up in something that makes it look neater - or like not looking at your bank balance when you log in to your online account.

Screen displays are pretty large these days, certainly large enough for most reasonable pieces of code. So the fact that you need to put stuff in a region isn’t stopping-the-line, it’s a work-around. Rather than addressing the problem (you’ve got a big pile of code that could be better organised) by, say, refactoring to improve the design and thinking more about the roles and responsibilities of classes, instead of just dumping stuff places because that’s what’s being passed around, you organise your editing experience. Lovely.

It’s like Edit & Continue in the debugger: if you’re going to need to edit code as you debug, you’re spending too much time in the debugger.

Wii Rule

I don’t normally post stuff which isn’t development related, but after such a fun evening I couldn’t resist.

I popped into the Virgin Megastore on Oxford Street last Friday to purchase a copy of Singstar for a barbecue and party I was going to over the weekend.

Well, just inside the store were a few small boards advertising that they now had Wiis in stock. I headed upstairs to the games department to buy Singstar and, as I was paying, asked whether they still had any. They did, and a few minutes later I’d bought a Wii, an extra controller, and a copy of Wario’s Smooth Moves.

I’ve never had so much fun playing on a games console before. After a particularly stressful day I ‘unleashed the fury’ in a surprisingly competitive game of Bowling with my flatmate. It was a very cathartic experience, plus I won 3 games to 2 :). Following that, our other flatmate broke from his work and joined us to play Baseball. He howled with laughter the first time he moved the controller around and saw his Mii making the same movements on screen. Watching one person wind up to bowl, and the other follow shortly after with a swing for the ball, was particularly funny.

As for Wario Smooth Moves, it’s the weirdest game I’ve ever played, but also insanely addictive - pumping your hands up and down to pop the balloon, and moving your hands back and forth in a sawing motion to sweep the floor during a curling game, are just two examples of the fantastically funny mini-games.

So far I’m thrilled… I can’t wait for the second Nunchuck controller and Mario Strikers Charged Football to arrive so I can try it out over the Internet.

All in all, it’s probably the best games console I’ve purchased, and the one which everyone (including a skeptical flatmate) has been incredibly impressed with.

Short Feedback Loops

Short positive feedback loops in software development are important.

Looking at what happened in the past, we can suggest what could be done to improve the situation in the future. It’s fundamental to lean and agile methodologies. Indeed, the methodologies themselves encourage teams to adapt the practices and processes locally.

Test Driven Development (and its variants and corollaries) is all about this feedback: it helps you think about what you’re doing by showing you the consequences straight away. For instance, you can decide there and then that you don’t like the name of something (perhaps it turns out it doesn’t describe the intent). Feedback from using your code can improve how you design your code.

People strive for fast-running tests: they help keep developers upbeat. Slow builds sap patience, pull focus away from writing code, and introduce unnecessary context switching. Slow tests become a pain, and people find ways to get around running them. Feedback is discarded.

To my mind, this is exactly why splitting the writing of tests off to separate developers is costly - tests are a valuable learning tool, providing continuous feedback to you.

Writing a nice clean test that results in simple and expressive code feels good; bad test phrasing feels dirty. Take yourself away from the feedback and you lose the opportunity to gain those insights. It may be that you spend most of your time working with developer tests, but having acceptance tests up-front to direct your effort provides a great communication tool.

Keep yourself close to the feedback.

A Neat Little Rails Testing Pattern

Inside my tests I try to keep everything I need inside the test itself, rather than depending on too many fixtures. That way I ensure they’re as readable as possible and, more importantly, obvious.

But, take the following example:

def test_total_is_due_now_if_return_is_less_than_14_days
  booking = Booking.new(
    :customer => customers(:valerie),
    :collect_at => 1.day.from_now.to_date,
    :return_at => 13.days.from_now.to_date,
    :boxes => 2,
    :collection_address => addresses(:one))
  assert_equal Date.today.to_s, booking.final_payment_due_date.to_s
end

def test_payment_due_14_days_from_now
  booking = Booking.new(
    :customer => customers(:valerie),
    :collect_at => 1.day.from_now.to_date,
    :return_at => 20.days.from_now.to_date,
    :boxes => 2,
    :collection_address => addresses(:one))
  assert_equal 6.days.from_now.to_date.to_s, booking.final_payment_due_date.to_s
end

Well, everything’s in the test, but there’s a fair bit of duplication. What would be nicer is to take the prototype booking above and just override certain values.

Instead we might write something like

def test_total_is_due_now_if_return_is_less_than_14_days
  booking = new_booking_with(:return_at => 13.days.from_now.to_date)
  assert_equal Date.today.to_s, booking.final_payment_due_date.to_s
end

def test_payment_due_14_days_from_now
  booking = new_booking_with(:return_at => 20.days.from_now.to_date)
  assert_equal 6.days.from_now.to_date.to_s, booking.final_payment_due_date.to_s
end

private

def new_booking_with(args)
  Booking.new(args.reverse_merge({
    :customer => customers(:valerie),
    :collect_at => 1.day.from_now.to_date,
    :return_at => 15.days.from_now.to_date,
    :boxes => 2,
    :collection_address => addresses(:one)
  }))
end
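
For what it’s worth, reverse_merge comes from ActiveSupport: it merges the defaults underneath the supplied hash, so anything you pass to new_booking_with wins over the prototype’s values.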

Much nicer (to me anyway), particularly once you start adding more and more tests (and especially validation related ones).

Make a Big Impact in a Small Area

I’ve been re-reading Evans’ Domain Driven Design recently and, whilst reading one night last week on the train home, something stuck with me.

It is more useful to make a big impact on one area, making a part of the design really supple, than to spread your efforts thin.

Although I’ve been very lucky in working on things I enjoy, over the past week I’ve particularly enjoyed the work I’ve been a part of. We’ve been trying to make some improvements to a section of code that hadn’t had many tests around it. Initially, those tests were used to drive a couple of small refactorings to improve the readability of the class and better separate its responsibilities.

So, being good developers, some fellow team members had earlier sat down to add good tests around the current behaviour, to ensure that nothing broke whilst things were being moved around.

And, because my teammates are all excellent developers, they (and I) spent time ensuring that the tests we wrote were small, communicative, and focused. During this time we made sure that our tests were treated with the same respect as the rest of our code, keeping them clean and readable. When the test code got ugly, we refactored our tests (and the design under test) to reflect a cleaner design.

We’ve really felt the benefit of all this work. We started a card that involved adding much more behaviour to the same area of the system. The gateway was getting some brains.

We were able to take the nice, focused, behaviour-explaining tests they had started, and extend and build upon them easily for the behaviour we were going to add. And by adding some automated higher-level integration tests, we gained even more confidence that we were retaining the behaviour we wanted and adding the new behaviour we needed.

Thanks to the loving care shown by the previous developers, those of us following in their footsteps had a much easier time, and our new design is much more consistent, useful, and ultimately valuable.

Because we chose to make a big impact on a very small area of the system, we felt a large benefit on our productivity. Our tight focus helped us evolve a more supple and elegant design.

A Little Capistrano Recipe for Joyent Accelerator

I’ve started putting together some Capistrano goodness to help automate the process of getting a new application deployed onto a Joyent Accelerator using the default Apache 2.2/Mongrel stack.

The first cut is working (for me at least) and I’ve put up a SVN repository at http://svn.oobaloo.co.uk/svn/accelerator_recipe/ that people can grab the recipe from.


  • accelerator_tasks.rb contains the Accelerator-specific tasks. This includes one for creating the Apache 2.2 proxying configuration file and one for the Solaris Service Management Facility configuration file (which lets the Accelerator boot Mongrel when necessary).

  • apache_vhost.erb is the template for the Apache configuration file

  • smf_template.erb is the template for the SMF configuration file

To use, export the files from SVN to your config directory. So, inside your application’s home directory do

svn export http://svn.oobaloo.co.uk/svn/accelerator_recipe config

Ensure you’ve got a working mongrel_cluster.yml configuration file and, if you haven’t already, create the Capistrano deployment recipe:

cap --apply-to /my/rails/app

Then, add the following to the top of deploy.rb

require 'config/accelerator_tasks'

and add two new properties (used to configure Apache for the new domain name)

set :server_name, "www.myapp.com"
set :server_alias, "myapp.com"

See the Apache docs for more info about the ServerName and ServerAlias directives if you don’t know what they do.

Finally, change the restart task to use the new smf_restart task (which in turn calls the smf_start and smf_stop tasks):

task :restart do
  smf_restart
end

Once you’ve added everything to Subversion, you ought to be able to do cap setup to create the initial app directories, your SMF configuration file, and your Apache 2.2 virtual host configuration file. The SMF configuration file goes into your application’s shared directory (which Capistrano creates). The Apache 2.2 configuration file goes to /opt/csw/apache2/etc/virtualhosts/myapp.conf.

Finally, try cap deploy to do a new release and call restart, which will try to start your Mongrel service. I’ve also created a svcs task that will show you all the services installed into the SMF. If your Mongrel instances don’t fire up, check the result by running svcs -x, or look in /var/svc/log/network-mongrel-myapp-production:default.log to see whether there’s any info about why they didn’t start.

Disclaimer

This is pretty rough and ready, but it works for me with a stock Accelerator. Note that the recipe assumes you’re deploying your web and application servers to the same Accelerator. It shouldn’t be tough to fix it up to deploy to different Accelerators, but I haven’t done it yet. Also, I’m still not overly familiar with Capistrano, so if I’ve made some glaring mistakes in how things are done, please let me know (this is more of a first guess than anything; I’m hoping people will correct me).

I’ve posted something similar to the TextDrive forum; check out that thread in case people post follow-ups there too.

Most importantly, if people do use it and fix stuff up, please, please send the fixes back and I’ll get them committed into SVN for everyone.

Thanks!

DNS Hosting Recommendations Requested

Dear Lazyweb,

I am in the process of consolidating a lot of the various VPSes and hosting accounts I’ve accumulated over the years. I’ve had a great time with RimuHosting and would still absolutely recommend them to people, but, I ended up hosting most of my apps at my MediaTemple GridServer account, leaving my VPS at Rimu to just serve up SVN and some static stuff for friends.

One of the things I love about RimuHosting is they provide a great little DNS administration area, providing realtime-ish DNS updates. I can host as many domains as I like, and it’s nicely redundant with a couple of servers across their various datacenters.

I’m sticking with my $15 entry-level Joyent/TextDrive account for email (roll on the IronPort filtering!) and simple site hosting for friends. I’ve got a Small Accelerator for deploying other apps I’m working on, and, this blog.

I only have a handful of domains, but, I’d like to have a nice clean consolidated interface for managing it all, allowing easy updates and management.

EasyDNS is the obvious choice (to my mind). But at $20 per domain per year it’s a little more expensive than I’m prepared to go; it would work out at over $120 for the year!

So, what else is there? Anyone have any good recommendations?