Rails Security Update and Typo

So it looks like there’s a security problem with a recent-ish version of Rails (well, anything older than Edge as of a few weeks ago seems to be at risk). The hole has been described as:

This is not like “sure, I should be flossing my teeth”. This is “yes, I will wear my helmet as I try to go 100mph on a motorcycle through downtown in rush hour”. It’s not a suggestion, it’s a prescription.

Fortunately, I’m running an up-to-date version of Typo which quite happily (at least, so it currently appears) runs against the latest Edge Rails (revision 4745).

How to Update Typo Rails

As of a few versions ago, Typo used to use the Rails gem. However, during the Rails 1.1 release a few shared hosts automatically installed the system-wide gem. This seemed to break a few applications, as sites ran code that wasn’t compatible with some breaking changes in Rails 1.1.

The solution was to freeze Rails at version 1.0 in the `vendor/rails` directory. Going forward, Typo was brought up-to-date against 1.1, and the repository was also changed to include an svn:externals link to the Rails trunk. The result is that all that was needed to do the update was:

$ cd vendor
$ svn up
...
Updated external resource to revision 4745.
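For reference, the svn:externals property behind that update would look something like this — though the repository URL here is my assumption, and the real value can be checked with `svn propget svn:externals vendor`:

```
rails http://dev.rubyonrails.org/svn/rails/trunk
```

With that property set on `vendor`, a plain `svn up` pulls the latest Rails trunk into `vendor/rails` alongside the rest of the working copy.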

Ahhh, I can breathe easy! Back to the Typo Sidebar hackery (an article will be coming soon)...

UPDATE: I also took the opportunity to upgrade Typo itself, after which, I ran the usual `rake migrate`. Sorry, made a mistake in my original post.

Cost of Change


I was thinking about a discussion we had recently on our project at ThoughtWorks about how often our pairs rotate. We’re coming close to the last few iterations of our first release, and we’re reluctant to shift focus away from delivery. As a result, we’ve started switching pairs approximately once a day, less often than some would like.

The advantages of switching pairs more often are:

1. Spreads knowledge
2. Aids an emergent and coherent design
3. Ubiquitous language within the domain is honed quicker, and the model solidifies quicker (i.e. more people share the same world view sooner).
4. I find moving around far more invigorating, and it contributes to the idea of an energised workplace.
5. Shared knowledge and shared code ownership – empowering people to share changes, rather than code being ‘owned’ by individual developers.

At present, we feel like there’s too much context to our stories, making swapping pairs frequently ineffective due to the cost of change – the cost of bringing the new person in the pair up to speed with current progress.

Our Iteration Manager made the point that if a few things were different, that would reduce the cost of context switching, and ultimately make it easier to switch pairs.

1. Smaller scoped/more vertically focused stories would mean the amount of context pairs have to absorb as part of their work would be reduced.
2. More frequent check-ins would enable pairs to move more easily, avoiding the need for pairs to stick together through merge hell where context can be important.
3. Smaller more focused groups would mean that people wouldn’t have to make large context switches. They could stay focused within functional areas, but switch frequently within to spread knowledge.

To me, this has a lot of similarities to the way Toyota (and other lean manufacturing followers) have approached the problem. Instead of attempting to get the greatest efficiency from a single manufacturing process, they instead seek to minimise the cost of change to their process. So, as their teams find new ways to organise and improve, they can do so quickly and efficiently. As new products come along, factories can re-tool faster than competitors.

In the XP world, a high cost of change discourages pair rotation and, consequently, problems strike: code coherency suffers, the ubiquitous language fragments, code gets duplicated, and design grows inconsistent.

However, it goes beyond pairing. One of the most important things (to my mind) I’ve spotted with hindsight is how important refactoring (which is all about reducing the cost of change, right?) is to estimating and planning.


To me, the success of estimation within extreme programming (since that’s where the majority of my experience lies) relies hugely on the cost of change being as small as possible.

For instance, during the planning game developers estimate work based on high-level story requirements. Their estimates are based on past experience, gut feel, expert opinion and discussion. However, this is often done under the assumption that the existing code base is sufficient to allow a developer to work as they would want.

I’m sure everybody can think of examples where stories which touched certain classes, or areas of the system, would be blown because of complexities within those areas (read: God Class). Or where those areas were not well maintained and the code wasn’t well factored, so even getting to a position where you could add value would involve serious refactoring bordering on redesign.

To me, it is this which demonstrates the real ‘business benefit’ of refactoring (if it’s not been acknowledged as a primary principle of ‘professional’ software development) – it puts me in a better position to deal with tomorrow’s stories.

In short, without having well factored code, how can any individual developer feel like they can offer a decent estimate of the work without having had direct and recent involvement with the code? And even if they have, they may be in no better position anyway! In the past I’ve definitely had to introduce a risk multiplier to my estimates in the planning game when considering stories that touch ‘untouchable’ areas of the codebase.

Without refactoring, decent estimating becomes difficult (and more variable), making planning more difficult. And the cost of introducing new functionality, or of changing the design, becomes that much greater – going against one of the key value propositions of lean approaches.

So, think about how you can reduce the cost of change as you develop, so you can change what you do when you need to – which is what agile and lean are all about (to me, as a developer, at least): making my working life more enjoyable by empowering me to change what I can.

I’m sure there’s more, but those are just the ones that sprung to my mind this afternoon.

Rails, Mongrel, Lighty and Mint

Configuring Mongrel

Firstly, I created a `mongrel_cluster` configuration as per the project’s documentation.

$ cd /var/www/servers/www.oobaloo.co.uk/current
$ sudo mongrel_rails cluster::configure -e production \
    -p 8000 -N 3 -c /var/www/servers/www.oobaloo.co.uk/current -a \
    --user mongrel --group mongrel

That’ll create a `config/mongrel_cluster.yml` file containing the configuration: that mongrel should change directory to `/var/www/servers/www.oobaloo.co.uk/current` before running, and that it should create 3 nodes from port 8000 upwards.
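For the curious, the generated file ends up looking roughly like this – a sketch reconstructed from the flags above, so the exact field names may differ slightly between mongrel_cluster versions:

```yaml
# Sketch of config/mongrel_cluster.yml, based on the flags given above
cwd: /var/www/servers/www.oobaloo.co.uk/current
environment: production
port: "8000"
servers: 3
user: mongrel
group: mongrel
```

Each of the 3 servers gets a consecutive port starting at 8000, which is what the lighttpd proxy configuration later points at.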

So, to check it all worked, I then ran

$ sudo mongrel_rails cluster::start
$ telnet localhost 8000
Trying...
Connected to localhost.
Escape character is '^]'.

That showed me the individual servers were up and running, so I then created a mongrel cluster configuration directory, to place configuration files for all my mongrel clustered sites (so far, just this one).

$ sudo mkdir /etc/mongrel_cluster
$ sudo ln -s /var/www/servers/www.oobaloo.co.uk/current/config/mongrel_cluster.yml \
    /etc/mongrel_cluster/oobaloo.yml

Now I can configure mongrel cluster to launch via a System V init script, so it’ll start (and restart, etc.) along with lighty and my other services. Rather nicely, mongrel cluster includes such a script.

$ sudo cp /usr/local/lib/ruby/gems/1.8/gems/mongrel_cluster-0.2.0/resources/mongrel_cluster \
    /etc/init.d
$ sudo chmod +x /etc/init.d/mongrel_cluster

I also then need to link that in to the various runtime init directories for my distribution (RedHat Enterprise Linux 4 I believe), so I did the following

$ sudo ln -s /etc/init.d/mongrel_cluster /etc/rc0.d/S84mongrel_cluster
$ sudo ln -s /etc/init.d/mongrel_cluster /etc/rc3.d/S84mongrel_cluster
$ sudo ln -s /etc/init.d/mongrel_cluster /etc/rc6.d/S84mongrel_cluster

That ensures that mongrel kicks off before Lighttpd does, ready for it to handle proxied requests.

Configuring Lighttpd

Lighty took a little more playing with to get a successful configuration. I still use FastCGI for serving PHP requests (solely for using Mint), but want to proxy any other requests through to my underlying clustered Mongrel nodes.

To do this, I did a little more regular expression trickery as follows in my `lighttpd.conf`.

fastcgi.server = ( ".php" =>
  ( "mint" =>
    ( "socket" => "/tmp/oobaloo-lighttpd-php.socket",
      "bin-path" => "/usr/bin/php",
      "bin-environment" => (
        "PHP_FCGI_CHILDREN" => "3",
        "PHP_FCGI_MAX_REQUESTS" => "200" )
    )
  )
)

$HTTP["url"] !~ "^/mint.*$" {
  proxy.balance = "fair"
  proxy.server = ( "/" =>
    ( ( "host" => "", "port" => 8000 ),
      ( "host" => "", "port" => 8001 ),
      ( "host" => "", "port" => 8002 ) ) )
}

That ensures that, by default, `.php` requests will be serviced by the FastCGI server, and anything not matching the `^/mint.*$` regular expression (i.e. anything other than Mint) will be picked up by the proxy to the 3 clustered Mongrel nodes.

That should tie it all together, so all that’s left to do is

$ sudo /etc/init.d/mongrel_cluster start
$ sudo lighttpd start

And hey presto, the server’s up and all seems to be well. I’ve read somewhere that Lighty’s mod proxy isn’t too great right now, but that there’s some new stuff on the way, which appears to be for an upcoming 1.4.12 release. As soon as that’s out, it looks like I’ll have something else to update!

Until then, looks like I should also look at getting my Typo deploys up using Capistrano as per Geoff’s post, and then get my Capistrano configuration working with Mongrel.

Problems with Ruby Gems

I thought I’d finally take a look at Mongrel and see whether it was worth changing my configuration from Lighttpd/FCGI. So far it’s been pretty stable (with only the odd restart needed every few months or so), but Mongrel’s had such a good response I couldn’t resist.

However, it looks like Ruby Gems is failing to work on my VPS. Below is what I run and the output:

sudo gem install daemons
Password:
Bulk updating Gem source index for: http://gems.rubyforge.org
Killed

I was originally on an old version of Gems, so I downloaded the latest release (0.9.0), but same result.

If I run the same command on my PowerBook G4:

pablo:~/work/mephisto/trunk pingles$ sudo gem install daemons
Password:
Attempting local installation of 'daemons'
Local gem file not found: daemons*.gem
Attempting remote installation of 'daemons'
Updating Gem source index for: http://gems.rubyforge.org
Successfully installed daemons-0.4.4
Installing RDoc documentation for daemons-0.4.4...

D’oh. Not too sure what’s causing it, and a quick poke around Google and Google Groups didn’t yield any information about log files etc. Does anyone have any suggestions?

Rails, a DSL?

I just read my colleague and teammate’s excellent post on the subject (and Jeremy Voorhis’s thoughts on it too), and although I added a comment, I felt it was worthy of a little expansion.

The first time I considered Rails as a DSL was during the recent ThoughtWorks roadshow, where both our founder and president toured around the various ThoughtWorks locations to give an update on our progress as a business, and opportunities and visions for the future.

During the talk, Roy talked a bit about what he was personally excited about for the future. One of the things he mentioned was the rise in the buzz around DSLs, and Intentional Programming. He mentioned how another ThoughtWorker (sorry, I forget who it was he mentioned – perhaps someone else remembers?) had explained the concept to him and cited Rails as an example of both a DSL and framework. George raised his hand at this point and very confidently explained how he felt that it quite plainly wasn’t a DSL, and I think his post does well to further his reasoning for his position.

George’s post (and also Matt’s comment beneath – another brilliant ThoughtWorker and team-mate) raised the trouble of trying to tie down where good design ends and a DSL begins:

I find it rather hard to draw the line between a DSL and a library with meaningfully named methods, functions, macros, call them what you will

I think this is a fair point, and a valid one, but I’m not really sure it’s too important in the long run.

To me – at least from what I’ve read and seen examples of – a DSL is about making it easier to write code that can be expressed re-using existing domain language in a natural way, reducing the cost of translating between developers and domain experts.

And that’s the key (at least what strikes me as the key attractiveness in DSLs) – the ability to re-use existing domain language, and better (and more naturally) talk about domain concepts. Make it easier to take those discussions and turn them into code, easier to discuss that code with experts, and ultimately evolve a system that can handle domain complexity for future development.

Whether it’s implemented through fluent interfaces or other ‘good’ design isn’t, to my mind, so important (perhaps over-hyping is a concern); fundamentally it’s all about domain communication. Aiming to produce a DSL seems an irrelevant pursuit if it doesn’t better enable the handling of the domain, and ultimately make it easier to be more effective as a developer.

For instance, my personal opinion is that Rails’ ActiveRecord goes quite a lot of the way to providing a language that is natural for modeling data and its relationships. Such as:

class Order < ActiveRecord::Base
  has_many :items, :dependent => true
end

It’s still Ruby, and there’s no getting away from using classes and other constructs to code with. But it does provide language that is natural to the domain, making it easier to work with.
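To illustrate the point that it’s still just Ruby: here’s a minimal sketch of how a `has_many`-style class macro can read naturally while remaining ordinary code. This is emphatically not Rails’ implementation – `MiniAssociations` and everything in it are invented purely for illustration:

```ruby
# Illustrative only: a has_many-style class macro in plain Ruby.
# This is NOT how ActiveRecord works internally.
module MiniAssociations
  def has_many(name)
    # Define a reader that lazily initialises a backing array
    define_method(name) do
      ivar = "@#{name}"
      instance_variable_get(ivar) || instance_variable_set(ivar, [])
    end
  end
end

class Order
  extend MiniAssociations
  has_many :items    # reads like a declaration, but it's a method call
end

order = Order.new
order.items << "a line item"
puts order.items.length  # => 1
```

The declaration reads like domain language (“an order has many items”), yet it is just a method call at class-definition time – which is, I think, exactly where the DSL-or-just-good-design line gets blurry.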

Contrast that to trying to explain the model through foreign key relationships and the like. That’s not to say it’s not important to sometimes talk with that lower-level language, just that it’s easier to work in the higher level aspects with it abstracted away.

I think George hits it perfectly when he says the following

I can see how, say, a shopping cart web application could be defined as existing under a domain. So, the shopping-cart-domain is a sub-domain of the web-domain, I hear you say. So, Rails as a DSL can have many sub-DSLs.

To me, that’s where Rails does hit it in the internal DSL candidate stakes – in the use of small language-like niceties for some of its building-blocks, such as ActiveRecord.

From a development effectiveness point-of-view I can see DSLs (and similar things) being very useful. People way more intelligent than I am are very keen on them. For me, it’s about raising the importance of the domain language, and the increasing importance of communication in an iterative development world.

Whilst reading Domain Driven Design’s war stories I had way, way too many flashbacks to bad use of language, ignorance of domain concepts, and complicated ways of avoiding domain complexity on previous projects pre-TW not to feel that that’s the key to DSLs – re-use of domain language, encouraging the exploration of domain language, and stressing the importance of communication. It’s about being more effective with domain complexity.

Typo Trackback Spam

As some will have noticed, I am now aggregated by the ThoughtBlogs service… yay!

However, this (along with being linked to in other places) has meant that I’ve started receiving more spam TrackBacks than I can handle.

So, I started tapping away inside Typo’s administrative console to remove TrackBacks from every post, since none of them are valid.

Of course, this took far longer than was sensible so as a temporary fix, I removed them as follows (using Rails’ nifty `script/console`):

$ script/console
Loading development environment.
>> Trackback.find_all.length
=> 89
>> Trackback.find_all.each { |t| t.destroy }
...
>> Trackback.find_all.length
=> 0

Ah, much better.

Guess I’ll just have to keep an eye on things and clean as I go.

I’m also checking out the latest trunk revision of Typo which (according to Scott Laird) is nearing its 4.0 release. I look forward to using it, most importantly because a nasty memory leak has apparently been quashed.

MarsEdit Impressions

I’ve always been happy enough using the in-built editors in most blogging apps I’ve used. Typo is the one I’ve stuck with longest (and thus used most), particularly because I like its ability to give you a live preview of your post alongside your markup, making for a very nice editing interface.

However, part of my being a ThoughtWorker now involves me traveling and staying in hotels, and I’ve occasionally found myself without an Internet connection and unable to use the online web-based editor.

Of course, on a couple of these occasions I’ve started editing with Word (on my company issue Windows laptop), or Pages on my PowerBook. However, when doing that I tend to miss formatting errors, and there’s also a mental block – I always end up leaving the posts on my hard drive rather than posting them, as I tend to forget to revise and edit further, and just consign them to the ‘no longer relevant’ category. Mind you, perhaps that’s a good thing? :)

So, I plunked down the £14ish for MarsEdit (having read good things about it, and other people mentioning they use it). You’re currently looking at the first post from it, which, ironically I’m writing because in my hotel room I can’t pick up a Wi-Fi signal – but I can do that in the bar downstairs :)

So far, it seems pretty good value – simple, and designed for its purpose (which is why I also love NewsFire; indeed, I purchased it over NetNewsWire because of its simplicity of purpose). So, here are my thoughts so far:

Good Points

  • Simple purposeful design, it’s not mixing metaphors and trying to do too much. It shows me posts I’ve made, lets me view them, and also lets me see draft posts as I write them.

  • Live preview of markdown formatted posts is very helpful. It’s much snappier writing posts inside a local app than via the web interface.

  • The fact that I’m writing draft posts, rather than writing posts in documents, should (hopefully) mean I’m more inclined to post rather than just let stuff sit around on the hard disk.

  • Its preview is much easier to see, so I find myself using it more as a place to review and revise my post, far more so than the live preview in Typo.

Not So Good Points

  • I’m not so keen on the separate window for the preview. I’d prefer to see it perhaps as a separate drawer, or another pane on the right side of the editing window. It just feels that’s how it should be.

  • Why can I insert HTML tags or Custom tags, but there are no tags for the formatting I specify? Would be nicer to set the formatting on the post (rather than the preview), and then have a local markup language reference.

  • Let me save a draft to the server. In the same way I can write a draft using Mail and it saves it to my drafts folder (so I can come back to it in another client later), I’d like to be able to revise a draft without always having to use MarsEdit. I’m not sure whether this is a limitation of MarsEdit in particular, or whether blog editor clients just can’t do that kind of thing?

Things I’m currently not sure of

  • Is it possible to enter tags for my posts via this editor? I can select categories through the options drawer, but I’m not so sure how tags get in there. Maybe I should check out the Typo code to see.

  • It does include the ability to set an HTML template for the preview which looks pretty nice, so I can style my preview as I would the real post. Not sure how much I’ll use it (I like the clean layout as-is, just with regular headings, bullet points etc.), but maybe in the future.

  • Custom Tags lets me enter markup fragments that get inserted into my post. Although this makes things short-handish, it would be even nicer if these had short-cuts I could assign. It’s a bit of a pain having to use the mouse to navigate and take my focus out of my writing.


So far I’d say it’s proving to do everything I want it to do, and in a pretty nice way. Hopefully it’ll prove its worth and I’ll get back into posting again – I’m really enjoying doing some interesting stuff on my current ThoughtWorks project.

Domain Driven Design

Right away you can incorporate the language of the code into your discussions with experts. In natural language, it’s equivalent to going from saying “well, to store this we’ll add an entry that tells us that a category belongs to a user and the user can have many news categories to tell us what they want to be told about” to saying “when someone wants to get News from our various Categories, we create them a Subscription”.

Ok, maybe I’m exaggerating or overstating it a bit, but that’s the heart of the matter – language and communication. At the root of it is a focus on writing code for humans rather than machines. Coding in a way that can maximise the benefit you get from interacting with the domain experts.

Unfortunately, I hadn’t come across Evans’ excellent book Domain Driven Design at my last place, otherwise I think I would’ve made some different decisions and definitely framed my work better. However, on my current Java project with ThoughtWorks we have made efforts to focus people’s minds on the domain, to leverage the domain experts’ understanding as best as possible, and to synchronise our code with it.

Last week I was pairing with a client developer and we performed a little model refactoring, naming concepts and relationships consistently with the notes and spontaneous diagram drawing we’d done during some quick domain model/discussion meetings.

The refactoring went pretty well (despite it throwing up some interesting issues with the introduction of a `Money` value object, the resulting IBM `BigDecimal`, and trying to store the aggregate within our Hibernate mapping).

We originally tried to use our own `CompositeUserType`, but this turned out to be more work than it was worth, so we took a hit on DRYness and used Hibernate’s component mapping, which actually turned out pretty well. The result is that we have to define entries for our `Money` in each class that contains a `Money` value, but the upside is that the values are stored within the containing class, which is good.
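For anyone curious what the component approach looks like, a mapping along these lines is what I mean – though the class and column names here are made up for illustration, not taken from our project:

```xml
<!-- Illustrative only: a Hibernate component mapping that embeds a Money
     value object's columns directly in the owning entity's table.
     Class, property, and column names are assumptions. -->
<class name="Order" table="orders">
  <id name="id" column="id">
    <generator class="native"/>
  </id>
  <component name="total" class="Money">
    <property name="amount" column="total_amount"/>
    <property name="currency" column="total_currency"/>
  </component>
</class>
```

The `<component>` element is what gives the DRY trade-off: each owning class repeats the mapping, but the value object’s fields live in the owner’s own table rather than needing a separate one.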

Ultimately, I’m more mindful now of how meaning is conveyed through the code: how it’s possible to incorporate natural domain language into the naming of concepts, and to introduce new objects and classes where there’s value in the communicativeness. Of course, it seems rather common-sensical, but as always it’s easy to pass the obvious by (or at least I’ve certainly been vulnerable to that in the past).

It’s important that this kind of process is continual and evolutionary. Refactor as you go based on the knowledge you have. As the domain model changes, or your understanding deepens – change the code to follow suit. It’s amazing how much easier the resulting code is to talk about and build on.

Anything that makes it easy to write code in a manner such as this gets my vote. I’m looking forward to really learning Ruby and finding out how I can wrap it in such a way to write better code. After all, Rails provides some pretty nice language-looking bits for handling database mapping, web application handling etc. DSLs anyone? :)

Starting at ThoughtWorks and Domain Driven Design

So, where to begin really. Well, in a word it’s been awesome. I’ve never enjoyed joining a Company as much as ThoughtWorks, and have never enjoyed a project as much as the one I’m working on now.

One of the things I’ve really enjoyed has been the application of Domain Driven Design, and refinement of the ubiquitous language - the language of the domain (and consequently the language that’s used in both model and code). I can’t recommend Eric Evans’ Domain Driven Design book highly enough (despite its wordiness), both for high-level conceptual thought-provoking-ness and for implementation tips. Jeff Santini (our Iteration Manager) also remarked that it includes one of the best examples of refactoring in any book (its shipping example) - showing how the code comes to more clearly communicate domain understanding.

I’m over-trivialising the first part of the book, but a lot of its focus is on the use of language, and aiming to keep the language the same between the domain experts and developers – incorporating domain objects in the discussions between them. The aim being that over time a ubiquitous language emerges, and the language of the domain is continuously refactored into the code. Ultimately, developers working on code (legacy or otherwise) can learn about the domain as they go, and through discussions with customers can apply their changes and wishes more easily.

As the website says:

“the most significant complexity of many applications is not technical. It is in the domain itself, the activity or business of the user. When this domain complexity is not dealt with in the design, it won’t matter that the infrastructural technology is well-conceived. A successful design must systematically deal with this central aspect of the software.”

To this end, the book provides some suggestions on how to organise your domain objects, and encourages the use of the ubiquitous language in the naming of classes, and packages, so that everything becomes a valuable expression of the domain.

For instance, I can think of a project in my previous role where we never captured the domain language, introducing our own names, inventing objects, and then forcing them on customers. As a result we couldn’t converse without attempting to translate, which introduced inaccuracies and misplaced assumptions. The result was a messy, confusing and difficult-to-work-with codebase. If we’d dug a little deeper and collaborated a little more, perhaps we could have ended up with a better model and more communicative code.

I’ve yet to get two-thirds of the way through the book, so there’s tons of useful stuff still waiting (so I’m told), but even so far I’d say it’s been invaluable in helping me approach this kind of business application.

There’s also some great advice about structuring your code, which I’m going to try and use to help me re-organise some of my pet Rails code. I’ll try and keep some notes and post here with updates on how it goes, see if it looks better and whether others can suggest further refactorings. But I digress!

Overall, the move has been one I’ve been very happy with. There’s no doubt I was nervous about starting, but people couldn’t have made me feel more welcome, and working with the other guys and girls on the team has been thoroughly enjoyable!

Adding Lightbox JS support to Typo Theme

Firstly, you’ll need to edit the `default.rhtml` file inside the theme’s `layouts` folder to include the following lines inside the `head` tag:

<%= javascript_include_tag "typo" %>
<%= javascript_include_tag "lightbox" %>
<%= stylesheet_link_tag "/stylesheets/lightbox.css" %>
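Those helpers emit markup roughly like the following – the exact paths and attributes may vary with your Rails/Typo version, so treat this as a sketch:

```html
<!-- Approximate output of the include helpers above -->
<script src="/javascripts/typo.js" type="text/javascript"></script>
<script src="/javascripts/lightbox.js" type="text/javascript"></script>
<link href="/stylesheets/lightbox.css" media="screen" rel="stylesheet" type="text/css" />
```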

And that’s all there is to it. Then you can use the `` tag to include an image, such as:

Which will result in the following being displayed (from a Flickr picture I took earlier today that’s in my Pro account).

Click on the image and it will display the image using the Lightbox overlay. Neat.

Mind you, one of the nice things about links into Flickr is it lets you then navigate to other pictures in the same set etc. Maybe there’s a way I can achieve this by modifying Lightbox’s CSS/JavaScript, perhaps a project for tomorrow!