Of Is and As (Operators in C#)

It’s been a while since I made any .NET posts, but here’s one that’s been bubbling up for too long!

I have to confess to being a little bit ‘militant’ about certain things I see in code. Blindly catching `Exception` is one thing that gives me shivers; using the `as` operator everywhere is another.

The issue I take is one of code semantics and intent, and consequently readability. For example, take the following piece of C# code:

Person personPaul = ...;
Man paul = (Man)personPaul;

When I read the above, I take it to read that I fully expect to be able to treat `personPaul` as a `Man`, that `personPaul` categorically can be cast successfully.

However, the following is subtly different:

Man paul = personPaul as Man;

This statement uses the `as` operator, which attempts the specified cast but, if it fails, returns `null` rather than throwing an `InvalidCastException`. In the example above, the use of `as` implies a statement along the lines of “I’d like to treat `personPaul` as a `Man`, but if I can’t, I’m not too worried”. If we know that an object is of type `T`, I don’t see the need to use a cast that handles the situation when it isn’t.
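To make the difference concrete, here’s a minimal compilable sketch (the `Woman` class, the `Name` field and the `Demo` wrapper are hypothetical additions, purely for illustration):

class Person { public string Name; }
class Man : Person { }
class Woman : Person { }
class Demo {
    static void Main() {
        Person personPaul = new Woman();
        // (Man)personPaul would throw an InvalidCastException right here, at the cast site...
        Man paul = personPaul as Man;          // ...whereas 'as' silently hands back null,
        System.Console.WriteLine(paul.Name);   // and the failure only surfaces later, as a NullReferenceException.
    }
}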

Where the `as` operator is really intended for use is in avoiding the expensive CLR checks associated with casting in a code pattern such as this:

if (personPaul is Man) {
    Man paul = (Man)personPaul;
}

In this situation we’re actually performing two casting operations:

L_0000: ldarg.0
L_0001: isinst Man
L_0006: brfalse.s L_002f
L_0008: ldarg.0
L_0009: castclass Man

Notice the calls to `isinst` and `castclass`—in both cases the CLR will perform a check to determine whether the object in question can actually be treated as a `Man`. Since we definitely know by line `L_0009` that we’re looking at a `Man` (the check at `L_0001` has already sent us out of the routine if it failed), there’s no need for the CLR to perform an additional check.

In these situations, it’s much better to use the `as` operator as mentioned earlier, which ultimately results in something similar to:

L_0000: ldarg.0
L_0001: isinst Man
L_0006: stloc.0
L_0007: ldloc.0
L_0008: brfalse.s L_002c
L_000a: ldloc.0
L_000b: call void Test::DoSomething(Man)
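For reference, the C# that compiles down to IL along those lines is the familiar `as`-and-null-check pattern (`DoSomething` here is simply whatever the method goes on to do with the `Man`, as per the IL above):

Man paul = personPaul as Man;
if (paul != null) {
    DoSomething(paul);
}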

In this case we’re only doing the cast once, reducing the number of expensive CLR type safety checks. Admittedly, there are far more things that could cause your code to slow down but the key benefit for me is that it makes the code easier to read and any invalid operations will result in a far more meaningful `InvalidCastException`, rather than a `NullReferenceException`.

Incidentally, one other important note that I stumbled across is that a cast using the `as` operator will not invoke any user-defined conversions that would normally be applied when using the cast syntax.
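A quick sketch of what that means in practice; the `Celsius` and `Fahrenheit` types here are hypothetical, invented purely to carry a user-defined conversion operator. In fact, where a user-defined conversion is the only conversion available, the `as` version doesn’t quietly return `null`: it won’t compile at all (error CS0039).

class Celsius { public double Degrees; }
class Fahrenheit {
    public double Degrees;
    // A user-defined conversion: only the cast syntax will ever use this.
    public static explicit operator Fahrenheit(Celsius c) {
        return new Fahrenheit { Degrees = c.Degrees * 9 / 5 + 32 };
    }
}
class ConversionDemo {
    static void Main() {
        Celsius boiling = new Celsius { Degrees = 100 };
        Fahrenheit f = (Fahrenheit)boiling;        // fine: invokes the conversion operator
        // Fahrenheit g = boiling as Fahrenheit;   // compile error CS0039: 'as' ignores user-defined conversions
        System.Console.WriteLine(f.Degrees);       // 212
    }
}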

Access to my CodeProject code in Subversion

It’s been a while since I last posted, and it’s been even longer since I posted about anything .NET-related—I’ve been knee deep in freelancing work playing with Ruby on Rails. But, after an email from someone using some of my old CodeProject code I figured it was time to get my stuff in order!

A while ago I posted that I was in the process of moving some projects of mine into open source, and away from their current home on CodeProject.

I suppose it’s not really a move into open source as such, but a move into an environment that makes maintenance/updates to the code as easy as possible.

At present, all articles require me to submit both HTML and code updates to the editors, after which the changes are published and new zip archives are made available. This is a relatively long-winded process and makes quick changes rather painful.

So instead, all code will be hosted in a brand new Subversion repository that I’ll run myself. Instead of releasing zip files, users can grab the latest source by checking out from the repository.

All repositories follow the standard trunk/branches layout. For users that aren’t familiar, the trunk contains the development mainline—that is the stable code that you should be using. Branches will be used sparingly for larger breaking changes, until they are in a state to be merged. There’s no reason you can’t use the code from one of these too, but bear in mind it’s probably a little more rough and ready.
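For example, grabbing the code from a branch works exactly like grabbing the trunk, just with a different path (the branch name here is purely made up for illustration):

svn checkout http://url/project/branches/some-big-change project-some-big-change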

So, for those that aren’t familiar with using Subversion, here’s how you go about using it.

1. Download a client
2. Checkout the mainline branch for your projects, using the following:

svn checkout http://url/project/trunk project

That’ll put a copy of the trunk into the project directory (the last parameter). That ought to be all you need to do. However, if you find you make improvements/fixes it’d be nice if you contributed those.

To do so, you’ll need to run

svn diff > my_changes_patch.txt

That’ll generate a file containing enough information for another user to apply the changes outside of the usual repository. This is to ensure that the repository isn’t open to abuse from all, all changes should instead be emailed to me (please note you’ll need to update my email address there). Provided they look ok I’ll stick them into the trunk for everyone else to then update via running

svn update
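Incidentally, applying one of those patches at my end is just a case of running something like the following from the root of a working copy (using the filename from the example above; plain `patch -p0` handles the unified diffs that `svn diff` produces):

patch -p0 < my_changes_patch.txt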

So far I’ve added the following projects:

I will consider adding automated binary builds for the stable releases, but this isn’t a priority at the moment so I’m afraid it’s just the repositories.

Note: It’s taken me a little longer than expected to get things going, so I’ll be adding the remaining projects as soon as I can, until then, feel free to grab away! One of the things I’d like to add is some kind of unit and functional testing, the gauntlet has been set!

Loving Rails' functional testing

The one thing I’ve never quite managed to get the hang of in ASP.NET development has been writing functional tests. Those that can exercise a control within it’s GUI environment. I’ve used a couple of tools, to various depths and with varying success.

For instance, with NUnitAsp it’s possible to write the functional tests directly within Visual Studio. That, to me, shortens the feedback loop since I don’t need to go away and author them elsewhere—it fits in nicely with the “test > fail > code > pass” TDD process I use elsewhere. However, it’s not particularly quick to get up and running, and it depends on you having a webserver available (although I’ve seen some neat stuff to improve some of this). You’re also still at the whim of ASP.NET’s lifecycle and core framework constraints, which make testing outside of a real Page container and away from an HttpContext nigh on impossible (from what I’ve seen and tried, anyway).

In contrast, functional testing is supported directly in Rails - indeed, it’s actively encouraged. When you generate a controller class, Rails automagically creates a mytype_controller_test.rb stub that you can get to work in straight away.

However, the real magic (and joy) comes from actually writing the tests. For instance, take a look at this code from one of the tests I have for a login controller:

def test_login_with_valid_user
  post :login, :user => {:name => "paul", :password => "p4ssw0rd"}

  assert_response :success
  assert_not_nil session[:user_id]
end

This posts to the :login action on the controller, passing a constructed user in its parameters. It then checks that the response is a success (HTTP response code 200) and that a :user_id has been set in the session.

To support this, I do the following inside the controller’s login action:

@user = User.new(params[:user])
logged_in_user = @user.try_to_login

if logged_in_user
  session[:user_id] = logged_in_user.id
end

Now, that is good in itself - I’m testing that the user is stored in the session as a result of a successful login. However, I also want to test the GUI - that is that an error message is displayed following an invalid login. And this is where it gets really beautiful:

def test_login_with_invalid_user
  post :login, :user => {:name => "baduser", :password => "p4ssw0rd"}

  assert_equal "Invalid user/password", flash[:notice]
  assert_tag :tag => "div",
    :attributes => { :id => "flashNotice" },
    :child => { :content => "Invalid user/password" }
end

In this test, I want to make sure that when I submit the form there is a div with the id flashNotice that contains some inline content informing the user of the failure. The thing that’s really striking is just how neat and tidy it is.
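For that test to pass, the controller’s failure branch just needs to set the flash message. The original action shown earlier only covers the success path, so here’s a hypothetical sketch of how the full branch might look:

if logged_in_user
  session[:user_id] = logged_in_user.id
else
  flash[:notice] = "Invalid user/password"
end

The view then simply renders the flash inside the expected div, something like:

<div id="flashNotice"><%= flash[:notice] %></div>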

A while ago we were fortunate to have Charlie Poole working with us to help improve our agile ways. During that time (since we do a lot of web work) acceptance/customer testing was a big concern. Previously we’d used a customised version of the Exactor framework. Although this worked, it was essentially

ClickButton "Really_long_aspnet_id"
SetText "Another_long_id"
...

These “commands” mapped onto C# classes that looked something like this:

public class ClickButton : WebCommand {
  public override void Execute() {
    ...
  }
}

We ended up having various commands for looking for buttons by id, by label etc. This then repeated itself across different tags. The result was a lot of duplication—and something that Charlie helped us to improve. We made changes to end up with a kind of composite tag matcher that could let us chain together criteria to find controls, also alleviating problems with the long (and somewhat brittle) control IDs that ASP.NET generates. However, we still ended up having to write a hell of a lot of code to do essentially a simple job.

The end result was a pretty comprehensive framework for testing web GUIs, but it needed a lot of effort to write tests in. Crucially, it didn’t really lend itself to writing tests first, and maintenance became difficult over time. Ultimately we (shamefully) started to skimp on tests.

One crucial difference of note is that with Exactor we were after tests that non-developers could write, or that our customer team could write with customers, so they are for different purposes.

I’ve only been using Ruby and Rails for a couple of weeks now, but this alone is a major win for me (and one I’ve only just started to get to grips with). I can construct high-level functional tests that can test both under and through the GUI, testing that the controllers and views do their thing with data and/or presentation, all from within an elegant test fixture. That’s really neat!

Lighty config

After attempting to get URL rewriting working in lighttpd and failing miserably, I’m going to run lighttpd behind Apache 2’s mod_proxy instead. At present, requests for www.oobaloo.co.uk are being served through lighttpd. This should also make it easier to keep separate site instances running.

At first I tried using mod_proxy to redirect all requests to lighty, and then rewrite the URLs that included the blog directory to remove it. I tried the following in my lighttpd.conf configuration file:

url.rewrite-once = ("^/blog(.*)$" => "$1")

The intention was to rewrite any /blog/blah to /blah. Unfortunately, it just wasn’t working.

In the end I’ve settled on the following—separate lighttpd instances will bind on high ports, and Apache will then handle the proxying to the lighty backend.

I have the following entries in my Apache configuration for the virtual host:

ProxyPass /blog/ http://127.0.0.1:8001/
ProxyPass / http://127.0.0.1:8001/
ProxyPassReverse /blog/ http://127.0.0.1:8001/
ProxyPassReverse / http://127.0.0.1:8001/
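On the lighttpd side, nothing clever is needed beyond binding that instance to the matching high port (and, optionally, only on the loopback interface, since Apache is the only client). Something along these lines in lighttpd.conf should do it:

server.port = 8001
server.bind = "127.0.0.1"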

This gives the same effect as before: existing URLs work fine, the new site works as expected, and I don’t need to make any changes to my DNS zone. Awesome!

I'm on lighty!

I’m now getting to the point where I can release an early iteration of the Sureboss shopping cart into production. So it’s only natural I started to consider the production environment a little more.

I’ve done some reading, and although I’m currently running Apache 2.x and FCGI, I’ve read a couple of things that hint that it’s perhaps a little unstable:

Since I’m looking at having at least 2 applications in production, both of which are critical (one of which extremely critical—it’ll probably have to end up on it’s own VPS), I decided it’d be safe to just use lighttpd, especially since the only dynamic content is generated through Rails.

I followed James Duncan Davidson’s setting up lighty guide and was up and running in almost no time. The only slight tweak was to ensure that it loads automatically when the machine boots—just in case the host or VPS goes down. Fortunately I found just the startup script I needed, and then had to set up some symbolic links to it from inside the other init directories, roughly as sketched after the list below:

1. rc0.d
2. rc3.d
3. rc6.d
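The exact link names below are an assumption on my part (S-prefixed to start, K-prefixed to kill, per the usual SysV init convention), and I’m assuming the startup script itself ended up in /etc/init.d/lighttpd:

ln -s /etc/init.d/lighttpd /etc/rc3.d/S99lighttpd   # start lighttpd in runlevel 3
ln -s /etc/init.d/lighttpd /etc/rc0.d/K01lighttpd   # stop it cleanly on halt
ln -s /etc/init.d/lighttpd /etc/rc6.d/K01lighttpd   # stop it cleanly on reboot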

This ensures that the daemon shuts down and starts up correctly. I’m not enough of a *nix guru to know why I couldn’t use chkconfig on my RHEL4-based VPS to set it up but this seems to work fine!

At the moment the DNS record for this blog still resolves to the Apache 2 service, but I’ll organise changing it over as soon as possible. In the meantime, this blog running on lighttpd is currently available at http://www.oobaloo.co.uk:8080, should you prefer to use that.

All I need to do now is set up some virtual hosting on lighttpd for the other domains.
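In lighttpd that’s just a matter of wrapping per-host settings in an $HTTP["host"] conditional, something like the following (the document root here is obviously made up):

$HTTP["host"] == "www.oobaloo.co.uk" {
  server.document-root = "/var/www/oobaloo/public"
}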

Problems with typo and del.icio.us

Unfortunately I’ve had to remove the del.icio.us item from the sidebar. It appears that there’s some bad timeout values set in typo for retrieval of del.icio.us content – longer than the FastCGI process timeout.

This was making the server unstable and causing machine load to spike, which is no good. So until I can either fix it myself or upgrade to the latest trunk of typo, it’ll stay switched off. You can still access my del.icio.us links via the web or an RSS feed if you want, though :)