All posts by jonskeet

Broken windows and unit testing

There’s quite possibly only one person in the world reading this blog who doesn’t think
it’s got anything to do with Vista. The windows in the title have nothing to do with
Microsoft, and I’m making no assertions whatsoever about how much unit testing gets done there.

The one person who understands the title without reading the article is Stuart,
who lent me The Tipping Point
before callously leaving for ThoughtWorks,
a move which has significantly reduced my fun at work, with the slight compensation
that my fashionable stripy linen trousers don’t get mocked quite as much. The Tipping
Point is a marvellous book, particularly relevant for anyone interested in cultural
change and how to bring it about. I’m not going to go into too much detail about the
main premises of the book, but there are two examples which are fascinating in and
of themselves and show a possible path for anyone battling with introducing
agile development practices (and unit testing in particular) into an existing
environment and codebase.

The first example is of a very straightforward study: look at unused buildings, and
how the number of broken windows varies over time, depending on what is done with
them. It turns out that a building with no broken windows stays “pristine” for a
long time, but that when just a few windows have been broken, many more are likely
to be broken in a short space of time, as if the actions of the initial vandals
give permission to other people to break more windows.

The second example is of subway trains in New York, and how an appalling level
of graffiti on them in the 80s was vastly reduced in the 90s. Rather than trying
to tackle the whole problem in one go by throwing vast resources at the system,
or by making all the trains moderately clean, just a few trains were selected
to start with. Once they had been cleaned up, they were never allowed to run
if they had graffiti on them. Furthermore, the train operators noticed a pattern
in terms of how long it would take the “artists” in question to apply the graffiti,
and they waited until three nights’ work had been put in before cleaning the
paint off. Having transformed one set of trains, those trains were easier to keep
clean due to the “broken windows” effect above and the demotivating aspects of
the cleaning. It was then possible to move onto the next set, get them clean
and “stable”, then move on again.

I’m sure my readership (pretentious, eh?) is bright enough to see where this is
leading in terms of unit testing, but this would be a fairly pointless post if I
stopped there. Here are some guidelines I’ve found to be helpful in “test infecting” code,
encouraging good practice from those who might otherwise be sloppy (including myself)
and keeping code clean once it’s been straightened out in the first place. None of
them are original, but I believe the examples from The Tipping Point cast them in
a slightly different light.

Test what you work with

If you need to make a change in legacy code (i.e. code without tests), write
tests for the existing functionality first. You don’t need to test all of it,
but do your best to test any code near the points you’ll be changing. If you
can’t test what’s already there because it’s a
Big Ball of Mud
then refactor it very carefully until you can test it. Don’t start
adding the features you need until you’ve put the tests in for the refactored
functionality, however tempting it may be.

Anyone who later comes to work on the code should be aware that there are
unit tests around it, and they’re much more likely to add their own for whatever
they’re doing than they would be if they were having to put it under test
for the first time themselves.
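As a sketch of what that first step can look like (the class and the discount rule here are invented for illustration): before changing a legacy class, pin down what it does *now* with characterisation tests. In a real project these would be JUnit test methods rather than a main method, but the principle is the same.

```java
// Hypothetical legacy class we need to change - names and rules invented.
class PriceCalculator {
    // Existing behaviour: 10% discount on orders of 100 or more.
    static double discountFor(double orderTotal) {
        return orderTotal >= 100 ? orderTotal * 0.1 : 0;
    }
}

class PriceCalculatorTest {
    // Characterisation tests: they document the current behaviour before
    // we touch anything, so any accidental change shows up immediately.
    public static void main(String[] args) {
        assertEquals(0.0, PriceCalculator.discountFor(99.99));
        assertEquals(10.0, PriceCalculator.discountFor(100.0));
        assertEquals(15.0, PriceCalculator.discountFor(150.0));
        System.out.println("All characterisation tests passed");
    }

    private static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("Expected " + expected + " but was " + actual);
        }
    }
}
```

Only once these pass does the new feature (and its own tests) go in.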

Refactor aggressively

Once code is under test, even the nastiest messes can gradually get under
control, usually. If that weren’t the case, refactoring wouldn’t be much use,
as we tend to write terrible code when we first try. (At least, I do. I haven’t
seen much evidence of developers whose sense of design is so natural that
elegance flows from their fingers straight into the computer without at
least a certain amount of faffing. Even if they got it right for the current
situation, the solution isn’t likely to look nearly as elegant in a month’s
time when the requirements have changed.)

If people have to modify code which is hard to work with, they’ll tend to
add just enough code to do what they want, holding their nose while they do it.
That’s likely to just add to the problem in the long run. If you’ve refactored
to a sane design to start with, contributing new elegant code (after a couple
of attempts) is not too daunting a task.

Don’t tinker with no purpose

This almost goes against the point above, but not quite. If you don’t need
to work in an area, it’s not worth tinkering with it. Unless someone (preferably
you) will actually benefit from the refactoring, you’re only likely to provoke
negative feelings from colleagues if you start messing around. I had a situation
like this recently, where I could see a redundant class. It would have taken maybe
half an hour to remove it, and the change would have been very safe. However,
I wasn’t really using the class directly. Not performing the refactoring didn’t
hurt the testing or implementation of the classes I was actually changing, nor was it
likely to do so in the short term. I was quite ready to start tinkering anyway,
until a colleague pointed out the futility of it. Instead, I added a comment suggesting
that the class could go away, so that whoever really does end up in that area
next at least has something to think about right from the start. This is as much
about community as any technical merit – instead of giving the impression that
I had my own personal “not invented here” syndrome (and not enough “real work”
to do), the comment will hopefully provoke further thought into the design decisions
involved, which may affect not just that area of code but others that colleagues work
on. Good-will and respect from colleagues can be hard won and easily lost, especially
if you’re as arrogant as I can sometimes be.

Don’t value consistency too highly

The other day I was working on some code which was basically using the wrong naming
convention – the C# convention in Java code. No harm was being done, except everything
looked a bit odd in the Java context. Now, in order to refactor some other code towards
proper encapsulation, I needed to add a method in the class with the badly named methods.
Initially, I decided to be consistent with the rest of the class. I was roundly (and
deservedly) told off by the code reviewer (so much for the idea of me being her mentor –
learning is pretty much always a two-way street). As she pointed out, if I added another
unconventional name, there’d be even less motivation for anyone else to get things right in
the future. Instead of being a tiny part of the solution, I’d be adding to the problem.
Now, if anyone works in that class, I hope they’ll notice the inconsistency and be encouraged
to add any extra methods with the right convention. If they’re changing the use of an existing
method, perhaps they’ll rename it to the right convention. In this way, the problem can gradually
get smaller until someone can bite the bullet and make it all consistent with the correct
convention. In this case, the broken windows story is almost reversed – it’s as if I’ve
broken a window by going against the convention of the class, hoping that all the rest of
the windows will be broken over time too.
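As a tiny, invented illustration of the situation: the existing Java class used C#-style Pascal-cased method names, and the new method goes in with the proper Java convention rather than matching the local oddity.

```java
// Invented example: a Java class that grew up with C#-style method names.
class Account {
    private double balance;

    // Existing methods use the C# convention (Pascal case) - wrong for Java,
    // but left alone for now since we aren't changing their callers.
    double GetBalance() { return balance; }
    void SetBalance(double value) { balance = value; }

    // The new method uses the proper Java convention (camel case), nudging
    // the class towards consistency with the rest of the codebase rather
    // than with its own local style.
    void deposit(double amount) { balance += amount; }
}

class AccountDemo {
    public static void main(String[] args) {
        Account account = new Account();
        account.SetBalance(100);
        account.deposit(50);
        System.out.println(account.GetBalance());
    }
}
```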

This was a tough one for me, because I’ve always been of the view that consistency
of convention is usually more important than the merit of the convention. The key here is that
the class in question was inconsistent already – with the rest of the codebase. It was only
consistent in a very localised way. It took me longer to understand that than it should have
done – thanks Emma!


Predicting and modifying human behaviour is an important part of software engineering
which is often overlooked. It goes beyond the normal “office politics” of jockeying for
position – a lot of this is just as valid when working solo on personal code. Part of
it is a matter of making the right thing to do the easy thing to do, too. If
we can persuade people that it’s easier to write code test-first, they’ll tend to
do it. Other parts involve making people feel bad when they’re being sloppy – which
follows naturally from working hard to get a particular piece of code absolutely clean just
for one moment in time.

With the right consideration for how future developers may be affected by changes we make
today – not just in terms of functionality or even design, but in attitude, we can
help to build a brighter future for our codebases.

The 7 Deadly Sins of Software Development


Recently, Eric Gunnerson made a post
in his blog with the idea of “the seven deadly sins of
programmers”. Eric is posting his ideas for such a list one at a time, periodically. He invited others to write
their own lists, however, and it was such an intriguing idea that I couldn’t resist. The list is in descending
order of importance (I figure not everyone will make it to the bottom of this post) but the order is fairly
arbitrary anyway. Hopefully none of this will be a surprise to most of my readership, but it’s nice to write
as a sort of manifesto anyway. I’ve included a “personal guilt rating” out of 10 for each of these sins. I’m
very willing to change these in the light of feedback from those with experience of working with me :)

#1 – Overengineering (in complexity and/or performance)

Personal guilt rating: historically 8, currently 3

It’s amazing how much people care about performance. They care about the smallest things, even
if there’s not a chance that those things will have a significant impact in reality. A good example
of this is implementing a singleton. I’ve seen
the double-checked lock algorithm (as described on the page, except often broken) repeatedly brought
forward as the best way to go. There’s rarely any mention of the fact that you have to get it just right
(in terms of making the variable volatile or using explicit memory barriers) in order for it to be properly
thread-safe. Prior to the Java 5 memory model there was no way of making it work in Java at all, so anyone porting code later will end up with a bug
they’re very unlikely to be aware of. Yet people use it over simply locking every time on the grounds of
performance. Now, acquiring an uncontested lock is very, very cheap in .NET. Yes, it’ll be slightly more expensive
on multi-processor boxes, but it’s still mind-bogglingly quick. The time taken to lock, check a variable for nullity
and then unlock is unlikely to cause a significant issue in almost any real world application. There may be a very,
very few where it would become a problem – but developers of those applications can profile and change the code
when they’ve proved that it’s a problem. Until that time, using double-checked locking just adds complexity
for no tangible benefit.
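To make the comparison concrete, here’s the “simply locking every time” singleton the paragraph argues for, sketched in Java since that’s where double-checked locking was irretrievably broken (the class name is invented):

```java
// Simple, obviously correct lazy singleton: synchronise on every access.
// Slower than double-checked locking in theory; almost never measurably
// slower in practice, and impossible to get subtly wrong.
class Singleton {
    private static Singleton instance;

    private Singleton() {}

    static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}

class SingletonDemo {
    public static void main(String[] args) {
        // Every call returns the same instance.
        System.out.println(Singleton.getInstance() == Singleton.getInstance());
    }
}
```

If profiling ever shows the lock to be a genuine bottleneck, that’s the time to consider something cleverer.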

That’s just a single example. People are always asking performance questions on the newsgroups which just won’t
make a difference. For example, if you have a string reference in an Object expression, is it faster
to cast it to String or to call the toString method? Now, don’t get me wrong – I find
it very interesting to investigate this kind of thing, but it’s only as a matter of interest – not because I think
it should affect what code should be written. Whatever the result, the simplest code which achieves
the result in the most readable fashion is the best code until performance has proved to be an issue.
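For what it’s worth, the two forms in question look like this; when the reference really is a String, both hand back the very same object, so readability (and whether you want a failure if the assumption is wrong) should decide:

```java
class CastVsToString {
    public static void main(String[] args) {
        Object obj = "hello";

        // Option 1: cast - states the assumption that obj *is* a String,
        // and fails fast with a ClassCastException if it isn't.
        String viaCast = (String) obj;

        // Option 2: toString() - works for any object, so it hides the
        // assumption rather than asserting it.
        String viaToString = obj.toString();

        // For an actual String, String.toString() returns "this",
        // so both expressions yield the identical object.
        System.out.println(viaCast == viaToString);
    }
}
```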

I should stress that this doesn’t extend to being stupid about performance. If I’m going to concatenate an unknown
number of strings together, I’ll use StringBuilder
of course – that’s what it’s designed for, and I’ve seen that it can make a huge difference in real world situations.
That’s the key though – it’s evidence based optimisation. In this case it’s general past evidence, whereas for many other
situations I’d use application-specific evidence, applying an optimisation which may make the code harder to read only
when I’ve proved that the particular application in question is suffering to a significant extent.
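The concatenation case looks like this in Java (the same reasoning applies to .NET’s System.Text.StringBuilder):

```java
class JoinDemo {
    // Concatenating an unknown number of strings: StringBuilder appends
    // into one internal buffer instead of creating a new intermediate
    // String on every iteration, which is what naive += would do.
    static String join(String[] parts) {
        StringBuilder builder = new StringBuilder();
        for (String part : parts) {
            builder.append(part);
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(new String[] { "foo", "bar", "baz" }));
    }
}
```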

The other side of this issue is complexity in terms of what the solution can achieve. These days,
I mostly code for testability, knowing that if I can test that my code does what it wants to, chances are
it’ll be flexible enough to meet my needs without being overly complicated. I look for complexity all over the place,
particularly in terms of trying to anticipate a complicated requirement without knowing that the requirement will
actually be used. For instance, if I’m writing some kind of collection (not that that happens often), I won’t add sorting capabilities
until I know they’re needed. It just creates more work if you write unnecessary code. Of course, when writing libraries for
public consumption, things become much trickier – you basically can’t tell how a library will be used ahead of time,
so you may well end up adding features which are rarely used. The closer the communication between the code and its clients,
the better. (This is also relevant in terms of performance. One area I did spend time optimising was the
enhanced locking capabilities part of my miscellaneous
utility library. Locking is cheap, so any replacement for “standard” locking should also be cheap, otherwise it discourages
its own use.)

When I look at a design and see that it’s simple, I’m happy. When I look at implementation and see that it’s simple, I’m happy.
It’s not “clever” to write complicated code. Anyone can write complicated code – particularly if they don’t check that it works.
What takes time is writing simple code which is still powerful.

#2 – Not considering the code’s readership

Personal guilt rating: 4

This is actually closely linked to the first sin, but with a more human face. We know that code is generally in
maintenance for longer than it’s under initial development. We know that companies often (not always, thankfully) put their
“top developers” (however they decide to judge that) onto feature development rather than maintenance. These make it
essential that we consider “the next guy” when writing our code. This doesn’t necessarily mean reams of documentation –
indeed, too much documentation is as bad as too little documentation, as it can make the code harder to read (if it’s
inline code documentation) or be hard to find your way around (if it’s external documents). One project I was working on
decided to extract one document describing data formats from the main system architecture document, and we found that the
extracted document became absolutely crucial to both testers and coders. That document was kept accurate, and was short enough
to be easy to follow. The document from which it was extracted was rarely used.

Of course, the simpler the code, the less documentation is required. Likewise, well-written unit tests can often express the
correct behaviour (and expected use) of a class more succinctly than lots of documentation – as well as being kept accurate, because they’re actually run.

There are times when writing good documentation is very difficult indeed. Recently I wrote a class which did exactly the right
thing, and did it in an elegant manner. However, explaining what its purpose was even in person was difficult. Understanding
it from just the documentation would be almost impossible – unless the reader looked at what problem was being solved and worked
through what was required, which would lead fairly naturally to the same kind of solution. I’m not proud of this. I’m proud of the
class itself, but I don’t like finding myself stuck for words.

Sometimes, simple code (in terms of number of characters) is quite complicated. At one point on a project I had to find
whether only a single bit was set in a long. I’m no good at remembering the little tricks involved for this kind of thing,
but I’m aware they exist, and using them can be a lot more reliable than writing bit-twiddling loops to do things in a
long-winded fashion. In this case, we found the appropriate trick on the web, and included a link in the code. Without the link
(or at least a description in a comment) the code would have been effectively incomprehensible to anyone who didn’t recognise the trick.
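The post doesn’t say which trick was involved, but the classic one for “exactly one bit set” is the x & (x - 1) test, and it’s precisely the sort of code that needs its explanatory comment or link:

```java
class BitCheck {
    // Exactly one bit set <=> x is a power of two (including the sign bit).
    // x & (x - 1) clears the lowest set bit, so the result is zero only
    // when at most one bit was set; the x != 0 test rules out zero,
    // which has no bits set at all.
    static boolean hasSingleBitSet(long x) {
        return x != 0 && (x & (x - 1)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(hasSingleBitSet(64L));
        System.out.println(hasSingleBitSet(66L));
        System.out.println(hasSingleBitSet(0L));
    }
}
```

Without the comment, the method body is a one-liner that most readers would have to puzzle out from first principles.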

Code reviews definitely help readability – when they’re done properly. In some ways, they can be better than pair programming
for this, particularly if the original developer of the code doesn’t try to explain anything to the reviewer. At this point
the reviewer is able to take on the role of the first person who has to maintain the code – if anything isn’t clear just from
what’s available in terms of code and documentation (as opposed to human intervention) then that probably needs a bit of work.
Not verbally explaining what’s happening when you’re the author is incredibly difficult to do, and I’m very bad at it. It adds
time to the review, and when you’re under a lot of pressure (and who isn’t?) it can be very frustrating to watch someone
painstakingly understanding your code line by line. This is not to say that code reviews shouldn’t be the source of discussion
as well, of course – when I’m working with people I respect, I rarely come through a review without some changes to my code,
and the reverse is true too. After all, what are the chances that anyone gets something absolutely right to start with?

#3 – Assuming your code works

Personal guilt rating: historically 9, currently 3

Ever since I heard about unit testing I’ve seen the appeal, but I didn’t start to use
it regularly until 2005 when I joined Clearswift and met Stuart. Until then, I hadn’t heard about mock objects
which I find absolutely crucial in unit testing. I had read articles and seen examples of unit tests, but everything seemed to fall
apart when I tried to write my own – everything seemed to need something else. Of course, taken to extremes this is often a fault
of the code itself, where some classes require a complete system to be up and running before they can do anything. Unit testing
such a beast is difficult to say the least. In other situations you merely need to be able to specify a collaborator, which is where
mock objects come in.
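A hand-rolled sketch of the idea (all names invented; in practice a mocking library would build the fake for you): the class under test needs a collaborator, so the test supplies a recording fake instead of a complete running system.

```java
// The collaborator is expressed as an interface, so tests can substitute it.
interface MailSender {
    void send(String to, String body);
}

// Class under test: decides *whether* to send, delegating the actual
// sending to the collaborator.
class Alerter {
    private final MailSender sender;

    Alerter(MailSender sender) { this.sender = sender; }

    void check(int temperature) {
        if (temperature > 100) {
            sender.send("ops@example.com", "Overheating: " + temperature);
        }
    }
}

// A hand-rolled mock: records calls instead of really sending mail.
class RecordingMailSender implements MailSender {
    int sendCount = 0;
    public void send(String to, String body) { sendCount++; }
}

class AlerterTest {
    public static void main(String[] args) {
        RecordingMailSender mock = new RecordingMailSender();
        Alerter alerter = new Alerter(mock);
        alerter.check(50);   // below threshold - no mail
        alerter.check(120);  // above threshold - one mail
        System.out.println(mock.sendCount);
    }
}
```

No mail server, no configuration, no “everything seems to need something else”.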

So, since finding out about mock objects I’ve been more and more keen on unit testing. I know it’s not a silver bullet – I know that more
testing is required, both at an integration level between components, and at a system level, sometimes including manual tests. However,
once I started unit testing regularly, I got a sense of just how often code which I’d assumed would work simply wouldn’t. I’ve seen examples
of code which could never have possibly worked, with any input – so they can’t possibly have been used, or the results weren’t actually
important in the first place.

These days, I get frustrated when I either have to work with code which isn’t under test and can’t be easily put under
test due to the design (see point 7 on Eric’s list, excessive coupling) or when I’m writing code which is necessarily difficult to test.
Obviously I try to minimise the amount of the code which really can’t be tested – but sometimes it’s a significant amount. Urgh.

I currently don’t have many tests around my Miscellaneous Utility Library. I’ve
resolved to add tests for any new features I add or bugs I find though. Someone mailed me about a bug within the Utf32String
class. In the process of writing a unit test to demonstrate that bug I found at least two or three others – and that wasn’t even
trying to exercise the whole code, just enough to get to the original bug. I only mention this as a defence against the “I don’t need
unit tests – my code doesn’t have bugs in it” mentality. I would really like to think that I’m a pretty good coder – but everyone makes mistakes.
Unit tests won’t catch all of them, but they catch an awful lot. They also act as documentation and give you a much better base on which to
refactor code into the right shape, of course…

#4 – Using the wrong tool for the job

Personal guilt rating: 3

“When all you have is a hammer, everything looks like a nail.” That’s the typical quote used when tackling this topic. I hope this
sin is fairly self-explanatory. If all you know is Java and C#, you’ll find it a lot harder to solve problems which are best solved
with scripts, for instance. If all you know is C, you’ll find writing a complex web app a lot harder than you would with Java or
.NET. (I know it’s doable, and I used to do it – but it was really painful.) If you’re writing a device driver, you’ll find life
sucks pretty hard if you don’t know C or C++, I suspect.

Using the wrong tool for the job can be really damaging in terms of maintenance. While bad code can often be refactored into
good code over time (with a lot of effort) there are often significant implications in changing implementation language/technology.

This is a really good reason to make sure you keep yourself educated. You don’t need to necessarily keep up to date with all
the buzzword technologies – and indeed you’d find you did nothing else if you tried to keep up with everything – but there’s
always plenty to learn. Recently I’ve been looking at
Windows PowerShell
and Groovy. Next on my list is Squeak (Smalltalk).
I’ve been promising myself that I’d learn Smalltalk for years – at a recent Scrum
training course I met yet another Smalltalk evangelist, who had come from the Java side of things. It’s got to be worth trying…

#5 – Excessive code pride

Personal guilt rating: 2

I was recently pair programming and looking at some code Stuart had written a couple of weeks before. There were various bits
I wasn’t sure about, but thought were probably a bit smelly, and I asked my pairing partner to include a lot of comments
such as // TODO: Stuart to justify this code and
// TODO: Stuart to explain why this test is useful in the slightest. It’s worth bearing in mind at this point that
Stuart is significantly senior to me. With some people, comments like this would have been a career-limiting move. Stuart,
however, is a professional. He knows that code can usually be improved – and that however hard we try, we sometimes take
our eyes off the ball. Stuart had enough pride in his code to feel a need to fix it once the flaws had been pointed out,
but not enough pride to blind him to the flaws in the first place. This appropriate level of pride is vital when you’re working
with others, in my view. I don’t mind if people change my code – assuming they improve it. I expect that to happen
at a good code review, if I haven’t been pairing. A code review which is more of a rubber stamp than anything else is just a
waste of time.

This doesn’t mean I will always agree with others, of course. If I think my design/code is better than their suggested
(or even committed!) change I’m happy to put my case robustly (and with no deference to seniority) – but usually if someone’s
put in sufficient effort to understand and want to change my code in the first place, chances are they’ll think of something
I haven’t.

Write code you can be proud of – but don’t be proud to the point of stubbornness. Be prepared to take ownership of code you write
in terms of being responsible for your own problems – but don’t try to own it in terms of keeping other people out of it.

#6 – Failing to acknowledge weaknesses

Personal guilt rating: 2

ASP.NET, JSP, SQL, Perl, COBOL, Ruby, security, encryption, UML, VB… what do they all have in common? I wouldn’t claim
to “know” any of them, even though I use some on a regular basis. They are just some of my many technological weak spots.
Ask me a question about C# or Java in terms of the languages and I’ll be fairly confident about my chances of
knowing the answer (less so with generics). For lots of other stuff, I can get by. For the rest, I would need to immediately
turn to a book or a colleague who knows the subject. It’s important to know what you know and what you don’t.

The result of believing you know more than you actually do is usually writing code which just about works, at least in
your situation, but which is unidiomatic, probably non-performant, and quite possibly fails on anything other than the
data you’ve given it.

Ask for help when you need it, and don’t be afraid to admit to not being an expert on everything. No-one is an expert
at everything, even though Don Box does a pretty good impression of such a person…

#7 – Speaking with an accent

Personal guilt rating: 6 (but conscious and deliberate in some places)

Some of the worst Java code I’ve seen has come from C++ developers who then learned Java. This code typically
brings idioms of C/C++ such as tests like if (0==x) (which is safer than if (x==0) in C as missing
out an equals sign would just cause an accidental assignment rather than a compiler error). Similarly, Java code which
assumes that double and Double mean the same thing (as they do in C#) can end up behaving
contrary to expectations.
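The Double gotcha in question: in Java, == on two boxed Doubles compares references, not values, so code written with C# assumptions compiles happily and then misbehaves.

```java
class BoxedDoubleDemo {
    public static void main(String[] args) {
        Double first = 10.5;   // auto-boxed
        Double second = 10.5;  // auto-boxed into a *different* object

        // In C#, double and Double are the same type and == compares values.
        // In Java, == on Double compares object references...
        System.out.println(first == second);

        // ...so a value comparison needs equals() (or unboxing first).
        System.out.println(first.equals(second));
    }
}
```

The first line prints false, the second true; exactly the kind of thing a C# accent hides.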

This is related to sin #6, in terms of people’s natural reaction to their own ignorance: find something similar that
they’re not ignorant about, and hope/assume that the similarities are enough to carry them through. In a way, this can
make it a better idea to learn VB.NET if you know Java, and C# if you know VB. (There are other pros and cons, of course –
this is only one tiny aspect.)

One way of trying to make yourself think in the language you’re actually writing in rather than the similar language you’re
more familiar with is to use the naming conventions of the target language. For instance, .NET methods are conventionally
PascalCased whereas Java methods are conventionally camelCased. If you see Pascal casing in Java code, look for other C#
idioms. Likewise, if you see method_names_with_underscores in either language, look for C/C++ idioms. (The most obvious C
idiom is likely to be checking return codes instead of using exceptions.)

Naming conventions are the most obvious “tell” of an accent, but sometimes it may be worth going against them deliberately.
For instance, I like the .NET conventions of prefixing interfaces with I, and of using Pascal casing for constants instead of
JAVA_SHOUTY_CONSTANTS. It’s important when you favour this sort of “breakaway” behaviour that you consult with the rest of
the team. The default should always be the conventions of the language you’re working in, but if the whole team decides that
using parts of a different convention helps more than it reduces consistency with other libraries, that’s reasonable.

What isn’t so reasonable is breaking the coding idioms of a language – simply because other languages’ idioms
tend not to work. For example, RAII just doesn’t work in Java, and isn’t
automatic in C# either (you can “fake it” with a using statement, but I’m not sure that really counts as RAII).
Idiomatic code tends to be easier to read (particularly for those who are genuinely familiar with the language to start with)
and less semantically troublesome than “imported” styles from other languages.
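To illustrate: in the Java of this era, the idiomatic substitute for RAII (or C#’s using statement) is an explicit try/finally; relying on destructors or finalizers, as a C++ developer might, is exactly the kind of imported idiom that doesn’t work. A minimal sketch, with an invented method name:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

class CleanupDemo {
    // Idiomatic (pre-Java 7) resource handling: acquire, then guarantee
    // release in a finally block. C++-style "the destructor will do it"
    // has no reliable equivalent here.
    static int readFirstChar(Reader reader) throws IOException {
        try {
            return reader.read();
        } finally {
            reader.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println((char) readFirstChar(new StringReader("hi")));
    }
}
```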


So, that’s the lot. Ask me in a year and I may well have a different list, but this seems like a good starting point.
I’ve committed every sin in here at least once, so don’t feel too guilty if you have too. If you agree with them
and are still breaking them, that’s worth feeling a bit guilty about. If you disagree with them to start with,
that’s fair enough – add a comment to say why :)

PowerShell in Action

Every so often, I review books for publishers at various times before they hit the streets (anything from initial proposal to final review). The book I’ve been reviewing most recently is PowerShell in Action. (For those of you who didn’t see the news, PowerShell is the new name for Monad, the new object-oriented shell for Windows.)

Now, I’m excited about PowerShell as a product, but I’m even more excited about the book. I’m a pretty harsh reviewer, and I can only think of about three books which I’ve reviewed and been really positive about throughout most of the review. This is the best of them. I’ve not seen the whole book yet, but from what I’ve seen it’s going to be both readable and informative, which is frankly a rare combination in technical books. The author (Bruce Payette) is on the PowerShell team, so we get the information straight from the horse’s mouth (no disrespect meant) along with reasons for design decisions. Anyway, go and have a look at the home page for the book (linked above), read the first (unedited) chapter, and sign up for updates. It’s going to be fab.


Updated 7th August 2006 – It looks like closures aren’t meant to require K&R bracing after all. Hoorah! Examples changed appropriately.

One of my tasks at work is to investigate new languages and technologies and report back what
use we might make of them, where they fit in with what we’re doing, and generally what I think
of them. Obviously some of this will be specific to Clearswift,
but I’d like to make as much “insensitive” information available as possible. This post is my first
“report” as such, on Groovy.

What is Groovy?

From the Groovy home page:

Groovy is an agile dynamic language for the Java Platform with many features that are inspired by languages like Python, Ruby and Smalltalk, making them available to Java developers using a Java-like syntax.

That doesn’t help much if you don’t know Python,
Ruby or Smalltalk.
However, the key words (for me at least) in the above are Java and dynamic.
The Java bit is important to me because I know Java pretty well – both in terms of
the language and the standard library. It’s always nice not to have to learn yet
another way of doing the same things. (There are extra things to learn
in Groovy, but they are small in comparison with learning a platform from scratch.)
The dynamic bit is important because it’s what differentiates Groovy from Java in the first place.

Compared with, say, C and C++, Java is already pretty dynamic. It’s very easy to load classes
on the fly (it’s pretty easy to generate them, even) and reflection allows you to examine classes
at runtime. This allows for frameworks like Spring,
Hibernate and JUnit.
However, Groovy allows “dynamic typing” (an oft-contended phrase, but more later) and various
bits of what are effectively syntactic sugar to make the code terser. Most importantly from my
point of view, it offers closures – the equivalent of C# 2.0’s
anonymous methods.
(This removes the need for most inner classes in Java.) There are various other handy features too,
which generally make Groovy simpler to work with. Most of this post is effectively just a list of
features with examples and discussion.

Compiled, but scripty

Groovy is compiled to Java byte-code, but can be written as a script as well. Normally, the
whole script is compiled at start-up (as far as I can tell), although a lot of decisions are
left to run-time, so typos etc can sometimes only show up when a line is executed, even though
in a more “static” language they would have been caught at compile-time. Groovy scripts are
(commonly) executed using the groovy tool. There are also tools for running Groovy
as an interactive shell (groovysh) and a similar tool wrapped up in a GUI
(somewhat confusingly called groovyconsole). The groovyc tool is
provided to compile Groovy into bytecode to be used later rather than just run immediately.
The input to the compiler doesn’t have to be a fully-fledged class as such – it can just be a normal
Groovy script, in which case a class with an appropriate main method is created.

It’s customary at this point to have a “Hello World!” program. As you can use Groovy like a scripting
language, it’s particularly simple:

println "Hello World!"

Saving the above to a file (e.g. test.groovy) and invoking with groovy test.groovy
gives the expected result. Things to note:

  • No class declaration, import statements etc. It’s just a script.
  • println is used instead of System.out.println. I believe this is a
    call to the println method which has been “added” to java.lang.Object.
  • No brackets and no semi-colon. You can use them – you can make Groovy look very much like Java
    for the most part – but you don’t have to. I tend to use brackets but often omit semi-colons. You
    don’t even have to use brackets when there are multiple parameters.
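For contrast, the minimal Java equivalent needs all the ceremony the bullets above mention:

```java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
```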

As programs like the above are so convenient, I’m likely to use the features listed there in the
samples below. Other than that, I’ll attempt to only use one new feature at a time where possible,
so it’s obvious what I’m demonstrating.


Closures

In my limited experience with Groovy, closures form the single most useful feature of the language. They
allow you to specify some code (which may take parameters and return values) and then encapsulate that
code as an object – so you can pass it as a parameter to a method, for instance. The method could then
call the encapsulated code, and so forth. C# 2.0 provides this feature in the form of anonymous methods
(as delegate implementations) but in normal Java one would typically use an anonymous inner class, which
can end up being very ugly due to all the extra “gubbins” of specifying the superclass and then overriding
a particular method. Here’s possibly the simplest example of a closure:

Closure c = { println ("Hello closure!"); }
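For comparison, the nearest Java equivalent of that one-liner is an anonymous inner class, with all the extra “gubbins” mentioned above (using Runnable here as the example interface):

```java
public class ClosureDemo {
    public static void main(String[] args) {
        // Anonymous inner class: name the supertype, override a method,
        // all to wrap a single line of behaviour.
        Runnable c = new Runnable() {
            public void run() {
                System.out.println("Hello closure!");
            }
        };
        c.run();
    }
}
```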

Giving all the details of what closures can and can’t do would take pages and pages, so I’ll just mention
a few broad points. Local variables are captured as in C#’s anonymous methods (so are writable, unlike
local variables being used in anonymous inner classes in Java), and access to private members
of the enclosing class is also permitted. Closures taking a single parameter can use the implicit parameter
name of it:

Closure printDouble = { println (it*2) }


Closures taking more than one parameter can specify their names in a sort of “introductory section”:

Closure printProduct = { x, y -> println (x*y) }

printProduct (2, 3)
printProduct (4, 5)

Finally, a very common idiom in Groovy is to make the last parameter of a method a
closure. In this case, you can call the method specifying all the other parameters normally, and then specifying
the closure parameter as code which appears to be after the method call. This takes a little while
to get used to, but is really, really handy. Here’s an example:

// Declare the method we're going to call
void executeWithProduct (int x, int y, Closure c) {
    c (x*y)
}

// Call it with a closure that prints out the result
executeWithProduct (3, 4) {
    println (it)
}

Groovy uses closures extensively, so they will come up out of necessity in a lot of the following examples.

“Loose typing”

Groovy doesn’t require you to specify the types of variables very often. Lots of magic happens to convert things
at the right time. Indeed, method overloading appears to be performed at run-time rather than compile-time. The
exact nature of how loose the types are is currently a mystery to me, and the specification is somewhat inadequate
in this regard. However, it’s worth looking at a few examples:

Simple hello world using loose typing (the differences when you use def are beyond the scope of this introductory article):

a = "Hello"
def b = " World!"
println (a+b)

Dynamic method overloading:

void show (String x) {
    println ("string: " + x)
}

void show (int x) {
    println ("int: " + x)
}

void show (x) {
    println ("???: " + x)
}

y = "Hello"
show (y)
y = 2
show (y)
y = 2.5
show (y)


string: Hello
int: 2
???: 2.5
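For contrast, here's a sketch of how Java behaves with the same shape of code: overloads are resolved at compile time from the declared type, so a variable declared as Object always hits the Object overload:

```java
public class Dispatch {
    static String show(String x) { return "string: " + x; }
    static String show(int x)    { return "int: " + x; }
    static String show(Object x) { return "???: " + x; }

    public static void main(String[] args) {
        Object y = "Hello";
        // Even though y holds a String at runtime, the Object overload
        // is chosen: Java picks the overload from the compile-time type.
        System.out.println(show(y));   // ???: Hello
    }
}
```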

String interpolation

Groovy uses the GString class (I kid you not) for string interpolation. Double-quoted
strings are compiled into instances of String or GString depending on whether
they contain any apparent interpolations, and single-quoted strings are always normal strings. (If you
need a character literal, it looks like you need to cast.) Any Groovy expression can be part of
the interpolation, which is enclosed in ${...} (like Ant properties). The braces appear
to be optional for simple expressions (the definition of which I’m not prepared to guess).

x = 10
y = "Jon"

println ('x is $x') // No interpolation with single quotes
println ("x is $x") // Simple interpolation
println ("y is ${y.toUpperCase()}") // Method call


x is $x
x is 10
y is JON
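Java has no interpolation, so the equivalent is concatenation or a format string; a sketch for comparison:

```java
public class Interpolation {
    public static void main(String[] args) {
        int x = 10;
        String y = "Jon";
        System.out.println("x is " + x);                                 // concatenation
        System.out.println(String.format("y is %s", y.toUpperCase()));   // format string
    }
}
```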

Collections: syntactic sugar and extra methods

Groovy makes working with collections easier, by providing syntax for lists and maps
within the language itself, and by using closures to make life easier. List and
map initializers both go in square brackets, with maps using a colon between a name and
a value. Also, number ranges are available as start..end. Note that
a number of common Java packages are imported by default, which is why the following
code doesn’t have to specify java.util anywhere.

List list = [0, 1, 4, 9]
Map map = ["Hello" : "There", "a" : "b"]
List range = 0..3 // Equivalent to [0, 1, 2, 3]

Indexers are provided (just like in C#) so using the above, map["Hello"] would give "There"
and list[2] would give 4. The collections also have a number of
extra methods added to them, many of them
involving closures. For instance:

list = 1..7

// Execute the closure for each element
// Output: 2, 4, 6, 8, 10, 12, 14 (on separate lines)
list.each {
    println (it*2)
}

// Find the first element where the returned value is true
// Output: 6
println list.find {
    return it > 5
}

// Find all elements where the returned value is true
// Output: [6, 7]
println list.findAll {
    return it > 5
}

// Transform each element, creating a new list
// Output: [1, 4, 9, 16, 25, 36, 49]
println list.collect {
    return it*it
}

There are more – see the link above.
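For a sense of the saving, here's roughly what the findAll example costs in pre-closures Java (a sketch; the method name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class FindAllDemo {
    // Hand-rolled equivalent of Groovy's: list.findAll { it > 5 }
    static List<Integer> greaterThan(List<Integer> list, int threshold) {
        List<Integer> found = new ArrayList<Integer>();
        for (int i : list) {
            if (i > threshold) {
                found.add(i);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 1; i <= 7; i++) {
            list.add(i);
        }
        System.out.println(greaterThan(list, 5));   // [6, 7]
    }
}
```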


IO

Another aspect of the JDK to be given the closure treatment is IO. Groovy makes it
really easy to read each line of a file and execute some code on the line, for example.
Here’s a program which (assuming it’s in a file called test.groovy) prints
itself out with the line numbers:

int line = 1
new File ("test.groovy").eachLine {
    println "${line}: ${it}"
    line++
}
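The Java version of the same program shows what eachLine is saving you (a sketch using the standard BufferedReader idiom):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class LineNumberer {
    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader("test.groovy"));
        try {
            String text;
            int line = 1;
            while ((text = reader.readLine()) != null) {
                System.out.println(line + ": " + text);
                line++;
            }
        } finally {
            reader.close();   // easy to forget -- Groovy handles this for you
        }
    }
}
```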

Enhanced switch statements

In Groovy, switch statements can have cases which are collections (including ranges; the case matches
if the switch value is in the collection), types (the case matches if the value is an instance of the type),
regular expressions, and falls back to equality otherwise. In fact, you can add your own type of case testing
by implementing an isCase method, making switch/case very flexible indeed. I haven’t tested it, but
I doubt this is nearly as efficient as the normal Java switch/case – but Groovy is about simplicity of
expression more than ultra-efficiency.
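To see what the enhanced switch replaces, here's the sort of hand-written chain you'd need in Java (a sketch; the names are mine):

```java
public class CaseDemo {
    static String describe(Object value) {
        if (value instanceof String) {
            return "a string";
        }
        // A range case like Groovy's 0..9 becomes an explicit bounds check
        if (value instanceof Integer && ((Integer) value) >= 0 && ((Integer) value) <= 9) {
            return "a single digit";
        }
        return "something else";
    }

    public static void main(String[] args) {
        System.out.println(describe("hello"));   // a string
        System.out.println(describe(5));         // a single digit
        System.out.println(describe(3.5));       // something else
    }
}
```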

Categories – aka C# 3.0 extension methods

Groovy allows you to pretend that a class has a method you wish it had. It’s all pleasantly scoped so
you won’t do it accidentally. Here’s an example:

// Define the extra method we want
class IntegerCategory {
    static boolean isEven(Integer value) {
        return (value & 1) == 0
    }
}

// Use it - in a cleaner looking way than
// explicitly calling the static method.
use (IntegerCategory.class) {
    println 2.isEven()
    println 3.isEven()
}
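The Java alternative the text alludes to is an explicit static helper, called the long way round:

```java
public class IntegerUtil {
    static boolean isEven(int value) {
        return (value & 1) == 0;
    }

    public static void main(String[] args) {
        // The call-site noise a Groovy category removes:
        System.out.println(IntegerUtil.isEven(2));   // true
        System.out.println(IntegerUtil.isEven(3));   // false
    }
}
```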

Groovy Markup

There are many times when you need to build a hierarchical structure of some kind. Groovy introduces
the idea of “builders” which help. For instance, for XML, there’s the DOMBuilder class
(along with SAXBuilder and NodeBuilder, the latter of which allows
easy XPath-like navigation). Using DOM to build XML in Java is a complete nightmare, and while
dom4j and JDOM are definite
improvements, they still don’t make it quite as easy as this. Suppose you have a map of names to ages,
and you want to build an XML document representing that information. Here’s a sample script in Groovy to
demonstrate how easy it is (using MarkupBuilder, which writes the generated XML out for you).
Elements are added just by calling a method of the same name (Groovy responds to the method call as if the
method were available normally, even though obviously it doesn’t know in advance what your element names
will be), and attributes are specified using a map in the method call. Child elements are specified
within closures.

import groovy.xml.*;

Map nameAgeMap = ["Jon": 29, "Holly": 30, "Dave": 32]

builder = MarkupBuilder.newInstance()
builder.people {
    nameAgeMap.each { entry ->
        person ("name": entry.key, "age": entry.value)
    }
}

<people>
    <person name='Holly' age='30' />
    <person name='Dave' age='32' />
    <person name='Jon' age='29' />
</people>
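For comparison, building the same document with Java's raw DOM API takes something like this (a sketch; serializing it to text needs a Transformer on top of all of it):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomDemo {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> nameAgeMap = new LinkedHashMap<String, Integer>();
        nameAgeMap.put("Jon", 29);
        nameAgeMap.put("Holly", 30);
        nameAgeMap.put("Dave", 32);

        // Factory, then builder, then document -- just to get started
        Document doc = DocumentBuilderFactory.newInstance()
                                             .newDocumentBuilder()
                                             .newDocument();
        Element root = doc.createElement("people");
        doc.appendChild(root);
        for (Map.Entry<String, Integer> entry : nameAgeMap.entrySet()) {
            Element person = doc.createElement("person");
            person.setAttribute("name", entry.getKey());
            person.setAttribute("age", entry.getValue().toString());
            root.appendChild(person);
        }
        System.out.println(root.getChildNodes().getLength());   // 3
    }
}
```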

Ant integration

I’m a big fan of Ant, but
every so often it just doesn’t let me do everything I want easily. Sometimes I want to be able to execute
some code, but I don’t want to go through the hassle of having to make sure I’ve compiled something which
is only actually going to be used by the build procedure anyway. Groovy to the rescue! You can embed
Groovy code “in-line” or call out to a Groovy script. Note that Ant allows many scripting languages (anything
supported by BSF, for starters) to be used. Groovy may be
more approachable for developers who are familiar with Java but don’t know any scripting languages.
Groovy supports Ant directly in terms of providing access to the current project and properties, and the
AntBuilder class works in a similar way to the builders mentioned above, allowing Ant tasks
to be dynamically created and executed. Here’s a sample Ant file (which assumes that groovy-all-1.0-jsr-05.jar
is in the same directory):

<?xml version="1.0" ?>
<project name="groovy-test" default="test">

  <taskdef name="groovy"
           classname="org.codehaus.groovy.ant.Groovy"
           classpath="groovy-all-1.0-jsr-05.jar"/>

  <target name="test">
    <groovy>
      println "Running in Groovy"
      fs = ant.fileset (dir: ".", casesensitive: "no") {
          include (name: "*.groovy")
          include (name: "*.java")
          exclude (name: "Test.*;test.*")
      }
      fs.each {
          ant.echo (it)
      }
    </groovy>
  </target>
</project>

Buildfile: build.xml

   [groovy] Running in Groovy
     [echo] test2.groovy
   [groovy] statements executed successfully

Total time: 1 second

Other bits and bobs

There’s a lot more to Groovy than what’s presented above. It has operator overloading, syntax to
make regular expressions and multi-line strings easier, simple property definition and access, built-in
JUnit integration, an XPath-like expression language and much more besides. Read the home page for some of these – but be warned that some features are
pretty well hidden.

So, what’s wrong with it?

In general, I like Groovy. I’m not convinced that the productivity gains from it are worth the
downsides for major apps, but it’s really handy for getting something small working quickly. It could
be really great for prototyping. I may well eventually be convinced that “dynamic typing” isn’t
that dangerous really, and doesn’t have a detrimental impact on the usability of libraries, etc. Only
time will tell.

In the meantime, however, Groovy does suffer majorly from a lack of polish. There are plenty of bugs
to be found, and the documentation is terrible. (The members of the mailing list are more than happy to help, and a major documentation update is under way, however.) There are aspects of the syntax which seem to be
overkill, creating complexity without a huge benefit, and there are bits of normal Java which are
just “missing”. (Normal for loops aren’t available in the version I’m using, although
I believe they will be in the next available release. You can use a loop such as for (i in 0..9),
but not for (int i=0; i < 9; i++).) Things like this should really be fixed to make
as much of normal Java as possible available within Groovy.

I don’t mind the fact that Groovy isn’t finished – my worry is that it may never really be finished.
I really hope that I’m wrong, and that it will be all done and dusted (for v1) in the summer.
There’s no lack of activity – the community is very lively – but activity doesn’t necessarily indicate
actual progress towards a goal. Since originally posting this blog entry, I have been assured that
real progress is being made, so I’m keeping my fingers crossed.


  • Groovy home page
  • “Groovy JDK” – the extra methods added to various classes
  • Grails – Groovy/Spring/Hibernate-based web application development

Faith blog

I decided today that I could do with somewhere to dump random thoughts about my faith in the same way that I dump random thoughts about computing here. Clearly this isn’t the right place to do it, so I’ve set up a Faith Blog with Blogger. Some of you may be interested in it. Don’t let it bother you if not. This is the last you’ll hear about it unless there’s some topic which directly affects both areas.

MSDN Product Feedback Center – use it!

The Open Source community has known for ages that making it easy for users to file bugs and feature requests is a great way of making sure that not only are more bugs noticed, but that the bugs which actually annoy users take priority over those which never crop up in real life. For a little while now, MS has been doing the same – although the world can be forgiven for barely noticing. The MSDN Product Feedback Center isn’t exactly hidden in a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”, but it’s not far off. It’s on the MSDN lab which has a label at the top saying “MSDN Lab projects are experimental and may be removed without notice”. That’s not exactly encouraging when it comes to taking the time to file a bug report.

However, a few MS managers have now emphasised that this way of reporting bugs is really important to them. The entries end up in the internal bugs database which developers use – and importantly, if a bug is found and regarded as important by a customer during (say) beta test, that’s much more likely to be able to get through the red tape required for a late change than one which is found internally. If you’re an MVP, there’s an added bonus that your bugs are automatically regarded as “valid” and so they’re even more likely to be fixed.

I should point out that only bugs/requests with respect to certain products can be entered at the moment – but the list is likely to grow, I suspect. Anyway, the important thing is that it’s there, it’s pretty easy to use, and it makes a difference – so use it!

Vista and External Memory Devices

Update – read the first two comments. I'm leaving the rest of the article as it is in order to avoid revisionism. The solution is in the first two comments though.

According to the Windows Vista feature page, Vista is going to be able to use external memory devices (USB flash drives and the like to you and me) to act as extra memory to save having to go to the hard disk. I've heard this mentioned at a few places, and it's always asserted that EMDs are slower than memory but "much, much faster" than disks. This has just been stated as a fact that everyone would just go along with. I've been a bit skeptical myself, so I thought I'd write a couple of very simple benchmarks. I emphasise the fact that they're very simple because it could well be that I'm missing something very important.

Here are three classes. Writer just writes out however many blocks of 1MB data you ask it to, to whichever file you ask it to. Reader simply reads a whole file in 1MB chunks. RandomReader reads however many 1MB chunks you ask it to, seeking randomly within the file between each read.


using System;
using System.IO;

public class Writer
{
    static void Main(string[] args)
    {
        Random rng = new Random();
        byte[] buffer = new byte[1024*1024];
        DateTime start = DateTime.Now;
        using (FileStream stream = new FileStream (args[0], FileMode.Create))
        {
            for (int i=0; i < int.Parse(args[1]); i++)
            {
                stream.Write(buffer, 0, buffer.Length);
            }
        }
        DateTime end = DateTime.Now;
        Console.WriteLine (end-start);
    }
}


using System;
using System.IO;

public class Reader
{
    static void Main(string[] args)
    {
        byte[] buffer = new byte[1024*1024];
        DateTime start = DateTime.Now;
        int total=0;
        using (FileStream stream = new FileStream (args[0], FileMode.Open))
        {
            int read;
            while ( (read=stream.Read (buffer, 0, buffer.Length)) > 0)
            {
                total += read;
            }
        }
        DateTime end = DateTime.Now;
        Console.WriteLine (end-start);
        Console.WriteLine (total);
    }
}


using System;
using System.IO;

public class RandomReader
{
    static void Main(string[] args)
    {
        byte[] buffer = new byte[1024*1024];
        Random rng = new Random();
        DateTime start = DateTime.Now;
        int total=0;
        using (FileStream stream = new FileStream (args[0], FileMode.Open))
        {
            int length = (int) stream.Length;
            for (int i=0; i < int.Parse(args[1]); i++)
            {
                stream.Position = rng.Next(length-buffer.Length);
                total += stream.Read (buffer, 0, buffer.Length);
            }
        }
        DateTime end = DateTime.Now;
        Console.WriteLine (end-start);
        Console.WriteLine (total);
    }
}

I have five devices I can test: a 128MB Creative Muvo (USB), a 1GB PNY USB flash drive, a Viking 512MB SD card, my laptop hard disk (fairly standard 60GB Hitachi drive) and a LaCie 150GB USB hard disk. (All USB devices are USB 2.0.) The results are below. This is pretty rough and ready – I was more interested in the orders of magnitude than exact figures, hence the low precision given. All figures are in MB/s.

Drive            Write  Stream read  Random read
Internal HDD     17.8   24           22
External HDD     14     20           22
SD card          2.3    7            8.3
1GB USB stick    3.3    10           10
128MB USB stick  1.9    2.9          3.5

Where possible, I tried to reduce the effects of caching by mixing the tests up, so I never ran two tests on the same location in succession. Some of the random reads will almost certainly have overlapped each other within a test, which I assume is the reason for some of the tests showing faster seek+read than streaming reads.

So, what's wrong with this picture? Why does MS claim that flash memory is much faster than hard disks, when my flash drives appear to be much slower than my laptop and external drives? (Note that laptop disks aren't noted for their speed, and I don't have a particularly fancy one.) It doesn't appear to be the USB bus – the external hard disk is fine. The 1GB stick and the SD card are both pretty new, although admittedly cheap. I doubt that either of them are worse quality than the majority of flash drives in the hands of the general public now, and I don't expect the average speed to radically increase between now and the Vista launch, in terms of what people actually own.

I know my tests don't accurately mimic how data will be accessed by Vista – but how is it so far out? I don't believe MS would have invested what must have been a substantial amount of resource into this feature without conducting rather more accurate benchmarks than my crude ones. I'm sure I'm missing something big, but what is it? And if flash can genuinely work so much faster than hard disks, why do flash cards perform so badly in simple file copying etc?

Inheritance Tax


There aren’t many technical issues that my technical lead (Stuart) and I disagree on.
However, one of them is inheritance and making things virtual. Stuart tends to favour
making things virtual on the grounds that you never know when you might need to inherit from
a class and override something. My argument is that unless a class is explicitly designed
for inheritance in the first place, you can get into a big mess very quickly. Designing a
class for inheritance is not a simple matter, and in particular it ties your
implementation down significantly. Composition/aggregation usually works better in
my view. This is not to say that inheritance isn’t useful – like regular expressions,
inheritance of implementation is incredibly powerful and I certainly wouldn’t dream of
being without it. However, I find it’s best used sparingly. (Inheritance of interface is a
different matter – I happily use interfaces all the time, and they don’t suffer from the
same problems.) I suspect that much of my wariness is due to a bad experience I had with
java.util.Properties – so I’ll take that as a worked example.

Note: I’ll use the terms “derived type” and “subclass” (along with their related
equivalents) interchangeably. This post is aimed at both C# and Java developers, and I can’t
get the terminology right for both at the same time. I’ve tended to go with whatever sounds
most natural at the time.

For those of you who aren’t Java programmers, a bit of background about the class.
Properties represents a “string to string” map, with strongly typed methods
(getProperty and setProperty) along with methods to save and
load the map. So far, so good.

Something we can all agree on…

The very first problem with Properties itself is that it extends
Hashtable, which is an object to object map. Is a string to string map
actually an object to object map? This is actually a question which has come up a lot
recently with respect to generics. In both C# and Java, List<String>
is not viewed as a subtype of List<Object>, for instance. This can
be a pain, but is logical when it comes to writable lists – you can add any object
to a list of objects, but you can only add a string to a list of strings. Co-variance
of type parameters would work for a read-only list, but isn’t currently available in C#.
Contravariance would work for a write-only list (you could view a list of objects as a list
of strings if you’re only writing to it), although that situation is less common, not to
mention less intuitive. I believe the CLR itself supports non-variance, covariance and
contravariance, but it’s not available in C# yet. Arguably generics is a complicated
enough topic already, without bringing in further difficulties just yet – we’ll have to
live with the restrictions for the moment. (Java supports both types of variance to
some extent with the ? extends T and ? super T syntax. Java’s
generics are very different to those in .NET, however.)
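Java's wildcard syntax in a small sketch (the method names are mine): a producer can be read covariantly, and a consumer written contravariantly:

```java
import java.util.ArrayList;
import java.util.List;

public class Variance {
    // Covariant reading: any list of Objects-or-narrower can be read from
    static int count(List<? extends Object> source) {
        int n = 0;
        for (Object o : source) {
            n++;
        }
        return n;
    }

    // Contravariant writing: any list of Strings-or-wider can be written to
    static void addGreeting(List<? super String> sink) {
        sink.add("Hello");
    }

    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        addGreeting(strings);                 // List<String> used as a String consumer
        List<Object> objects = new ArrayList<Object>();
        addGreeting(objects);                 // List<Object> works too
        System.out.println(count(strings));   // 1
    }
}
```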

Anyway, java.util.Properties existed long before generics were a twinkle
in anyone’s eye. The typical “is-a” question which is usually taught for
determining whether or not to derive from another class wasn’t asked carefully enough in
this case. I believe it’s important to ask the question with Liskov’s Substitution Principle
in mind – is the specialization you’re going to make entirely compatible with
the more general contract? Can/should an instance of the derived type be used as if it were
just an instance of the base type?

The answer to the “can/should” question is “no” in the case of Properties, but
in two potentially different ways. If Properties overrides put (the
method in Hashtable used to add/change entries in the map) to prevent non-string
keys and values from being added, then it can’t be used as a general purpose Hashtable
– it’s breaking the general contract. If it doesn’t override put then a
Properties instance merely shouldn’t be used as a general purpose
Hashtable – in particular, you could get surprises if one piece of code added
a string key with a non-string value, treating it just as a Hashtable, and then
another piece of code used getProperty to try to retrieve the value of that key.

Furthermore, what happens if Hashtable changes? Suppose another method is added which
modifies the internal structure. It wouldn’t be unreasonable to create an add method
which adds a new key/value pair to the map only if the key isn’t already present. Now, if
Properties overrides put, it should really override add as
well – but the cost of checking for new methods which should potentially be overridden every time a
new version comes out is very high.

The fact that Properties derived from Hashtable
also means that its threading mechanisms are forever tied to those of Hashtable.
There’s no way of making it use a HashMap internally and managing the thread
safety within the class itself, as might be desirable. The public interface of
Properties shouldn’t be tied to the fact that it’s implemented using
Hashtable, but the fact that that implementation was achieved using
inheritance means it’s out in the open, and can’t be changed later (without abandoning
making use of the published inheritance).

So, hopefully we can all agree that in the case of java.util.Hashtable and
java.util.Properties at least, the choice to use inheritance instead of aggregation
was a mistake. So far, I believe Stuart would agree.
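A composition-based design (a hypothetical sketch, not the real JDK class) avoids all of this by keeping the map as an implementation detail:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical Properties-like class built by aggregation, not inheritance.
public class PropertyMap {
    // The map is private: callers can never treat this as a general-purpose map,
    // and the implementation could switch to another Map type without breaking anyone.
    private final Map<String, String> map = new HashMap<String, String>();

    public void setProperty(String key, String value) {
        map.put(key, value);   // only strings can ever get in
    }

    public String getProperty(String key) {
        return map.get(key);
    }

    public String getProperty(String key, String defaultValue) {
        String value = map.get(key);
        return (value == null ? defaultValue : value);
    }
}
```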

Attempting to specialize

Now for the tricky bit. I believe that if you’re going to allow a method to be overridden
(and methods are virtual by default in Java – fortunately not so in C#) then you need to document
not only what the current implementation does, but what it’s called from within the rest of
the class. A good example to demonstrate this comes from Properties again.

A long time ago, I wrote a subclass of Properties which had a sort of hierarchy.
If you had keys "X", "foo.bar" and
"foo.baz" you could ask an instance of this hierarchical properties type for a
submap (which would be another instance of the same type) for "foo". The returned
map would have keys "bar" and "baz". We used this kind of hierarchy
for configuration. If you’re thinking that XML would have been a better fit, you’re right.
(XML didn’t actually exist at the time, and I don’t know if there were any SGML libraries around
for Java. Either way, this was a reasonably simple way of organising configuration.)

Now the question of whether or not I should have been deriving from Properties
in the first place is an interesting one. I don’t think there’s any reason anyone couldn’t or
shouldn’t use an instance of the PeramonProperties (as it was unfortunately called)
class as a normal Properties object, and it certainly helped when it came to other
APIs which wanted to use a parameter of type Properties. As it happens, I believe
we did run into a versioning problem, in terms of wanting to override a method of
Properties which only appeared in Java version 1.2, but only when compiling against
1.2. It’s certainly not crystal clear to me now whether we did the right thing or not – there
were definite advantages, and it wasn’t as obviously wrong as the inheritance from Hashtable
to Properties, but it wasn’t plain sailing either.

I needed to override getProperty – but I wanted to do it in the simplest possible way.
There are two overloads for getProperty, one of which takes a default value and one
of which just assumes a default value of null. (The default is returned if the key isn’t
present in the map.) Now, consider three possible implementations of getProperty in
Properties (get is a method in Hashtable which returns
the associated value or null. I’m leaving aside the issue of what to do if a non-string
value has been put in the map.)

First version: non-defaulting method delegates to defaulting

public String getProperty (String key)
{
    return getProperty (key, null);
}

public String getProperty (String key, String defaultValue)
{
    String value = (String) get(key);
    return (value == null ? defaultValue : value);
}

Second version: defaulting method delegates to non-defaulting

public String getProperty (String key)
{
    return (String) get(key);
}

public String getProperty (String key, String defaultValue)
{
    String value = getProperty (key);
    return (value == null ? defaultValue : value);
}

Third version: just calling base methods

public String getProperty (String key)
{
    return (String) get(key);
}

public String getProperty (String key, String defaultValue)
{
    String value = (String) get(key);
    return (value == null ? defaultValue : value);
}

Now, when overriding getProperty myself, it matters a great deal what the implementation
is – because I’m likely to want to call one of the base overloads, and if that in turn calls
my overridden getProperty, we’ve just blown up the stack. An alternative is to override
get instead, but can I absolutely rely on Properties calling get?
What if in a future version of Java, Hashtable adds an overload for get which
takes a default value, and Properties gets updated to use that instead of the signature
of get that I’ve overridden?
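The blow-up is easy to demonstrate with a stripped-down sketch (hypothetical classes, mirroring the "second version" above):

```java
public class RecursionTrap {
    static class Base {
        String getProperty(String key) {
            return null;   // real lookup elided for the sketch
        }
        String getProperty(String key, String defaultValue) {
            String value = getProperty(key);   // virtual call -- may land in an override
            return (value == null ? defaultValue : value);
        }
    }

    static class Derived extends Base {
        @Override
        String getProperty(String key) {
            // The "simplest" override: delegate to the defaulting overload...
            // ...which calls straight back into this method. Boom.
            return getProperty(key, null);
        }
    }

    public static void main(String[] args) {
        try {
            new Derived().getProperty("anything");
        } catch (StackOverflowError e) {
            System.out.println("blew up the stack, as predicted");
        }
    }
}
```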

There’s a pattern in all of the worrying above – it involves needing to know the implementation
of the class in order to override anything sensibly. That should make two parties nervous – the
ones relying on the implementation, and the ones providing the implementation. The ones
relying on it first have to find out what the implementation currently is. This is hard enough
sometimes even when you’ve got the source – Properties is a pretty straightforward
class, but if you’ve got a deep inheritance hierarchy with a lot of interaction going on it can
be a pain to work out what eventually calls what. Try doing it without the source and you’re in
real trouble. The ones providing the implementation should be nervous because they’ve now effectively
exposed something which they may want to change later. In the example of Hashtable providing
get with an overload taking a default value, it wouldn’t be unreasonable for the
authors of Properties to want to make use of that – but because they can’t change the
implementation of the class without potentially breaking other classes which have overridden
get, they’re stuck with their current implementation.

Of course, that’s assuming that both parties involved are aware of the risks. If the author
of the base class doesn’t understand the perils of inheritance, they could easily change the
implementation to still fulfill the interface contract, but break existing subclasses. They
could have all the unit tests required to prove that the implementation was, in itself, correct –
but that wouldn’t help the poor subclass which was relying on a particular implementation.
If the author of the subclass doesn’t understand the potential problems – particularly if
the way they first overrode methods just happened to work, so they weren’t as aware as they
might be that they were relying on a specific implementation – then they may not
do quite as much checking as they should when a new version of the base class comes out.

Does this kill inheritance?

Having proclaimed doom and gloom so far, I’d like to emphasise that I’m not trying
to say that inheritance should never be used. There are many times when it’s fabulously
useful – although in most of those cases an interface would be just as useful from a client’s
point of view, possibly with a base class providing a “default implementation” for use where
appropriate without making life difficult for radically different implementations (such as
mocks :)

So, how can inheritance be used safely? Here are a few suggestions – they’re not absolute
rules, and if you’re careful I’m sure it’s possible to have a working system even if you
break all of them. I’d just be a bit nervous when trying to change things in that state…

  • Don’t make methods virtual unless you really need to. Unless you can think of a reason
    why someone would want to override the behaviour, don’t let them. The downside of this
    is that it makes it harder to provide mock objects deriving from your type – but interfaces
    are generally a better answer here.
  • If you have several methods doing a similar thing and you want to make them virtual,
    consider making one method virtual (possibly a protected method) and making all
    the others call the virtual method. That gives a single point of access for derived classes.
  • When you’ve decided to make a method virtual, document all other paths that will call
    that method. (For instance, in the case above, you would document that all the similar
    methods call the virtual one.) In some cases it may be reasonable to not document the
    details of when the method won’t be called (for instance, if a particular
    parameter value will always result in the same return value for one overload of a method,
    you may not need to call anything else). Likewise it may be reasonable to only document
    the callers on the virtual method itself, rather than on each method that calls it.
    However, both of these shortcuts can affect derived implementations. This documentation becomes
    part of the interface of your class – once you’ve stated that one method will call
    another (and implicitly that other methods won’t call the virtual method) any
    change to that is a breaking change in the same way that changing the acceptable parameters
    or the return value is. You should also consider documenting what the base implementation
    of the method does (and in particular what other methods it calls within the same class) –
    quite often, an override will want to call the base implementation, but it can be difficult
    to know how safe this is to do or at what point to call it unless you know what the
    implementation really does.
  • When overriding a method, be very careful which other methods in the base class you
    call – check the documentation to make sure you won’t be causing an infinitely
    recursive loop. If you’re deriving from one of your own types and the documentation
    isn’t explicit enough, now would be a very good time to improve it. You might also
    want to make a note in the base class that the method is overridden in the specific
    derived class, so that you can refer to the overriding method if you want to change the base class.
  • If you make any assumptions when overriding a method, consider writing unit tests to document
    those assumptions. For instance, if you assume that calling method X will result in a call to
    your overridden method Y, consider testing that path as well as the path where method Y is
    called directly. This will help to give you more confidence if the base type is upgraded to
    a newer version. (This shouldn’t be considered a replacement for careful checking when
    the base type is upgraded to a new version though – indeed, you may want to add extra tests
    due to an expanding API etc.)
  • Take great care when adding a new virtual method in Java, as any existing derived class which
    happens to have a method of the same name will automatically override it, usually
    with unintended consequences. If you’re using Java 1.5/5.0, you can use the @Override
    annotation to specify that you intend to override a method. Some IDEs (such as Eclipse) have
    options to make any override which doesn’t have the @Override annotation result
    in a compile-time error or warning. This gives a similar degree of safety to C#’s requirement
    to use the override modifier – although there’s still no way of providing a “new”
    method which has the same signature as a base type method but without overriding it.
  • If you upgrade the version of a type you’re using as a base type, check for any changes in
    the documentation, particularly any methods you’ve overridden. Look at any new methods which
    you’d expect to call your overridden method – and any you’d expect not to!
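The second and third suggestions above can be sketched together: several non-virtual entry points funnelling into a single protected virtual method, with the calling relationship documented as part of the contract. The class and method names here are invented for the example:

```java
// Base class: the public overloads are final and are documented to call
// formatValue exactly once each - that promise is part of the contract.
class Formatter {
    public final String format(int value) {
        return formatValue(Integer.toString(value));
    }

    public final String format(Object value) {
        return formatValue(String.valueOf(value));
    }

    // The single point of access for derived classes. Documented to call
    // no other methods on this class, so overrides can't recurse by accident.
    protected String formatValue(String text) {
        return "[" + text + "]";
    }
}

// A derived class can safely call the base implementation because the
// documentation says what it does (and that it calls nothing else).
class QuotingFormatter extends Formatter {
    @Override
    protected String formatValue(String text) {
        return "\"" + super.formatValue(text) + "\"";
    }
}
```

Because both `format` overloads are final, a subclass only has one method to reason about, and the "which methods call which" question has a short, documented answer.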

Many of these considerations have different effects depending on the consumer of the type.
If you’re writing a class library for use outside your development team or organisation,
life is harder than in a situation where you can easily find out all the uses of a particular
type or method. You’ll need to think harder about what might genuinely be useful to override
up-front rather than waiting until you have a need before making a method virtual (and then
checking all existing uses to ensure you won’t break anything). You may also want to give more
guidance – perhaps even a sample subclass – on how you envisage a method being overridden.


You should be very aware of the consequences of making a method virtual. C# (fortunately in my view)
makes methods non-virtual by default. In an interview
Anders Hejlsberg explained the reasons for that decision, some of which are along the same lines as
those described here. Java treats methods as virtual by default, using HotSpot to get round the performance
implications and largely ignoring the problems described here (with the @Override annotation
coming late in the day as a partial safety net). Like many powerful tools, inheritance of implementation
should be used with care.
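The accidental-override hazard mentioned above is easy to demonstrate. In this contrived example, `Derived` was written against an old version of `Base`; a later version of `Base` then added a method with the same signature (all names here are hypothetical):

```java
class Base {
    // Imagine this method was added in a *later* version of the library.
    protected String describe() {
        return "base";
    }
}

class Derived extends Base {
    // Written before Base had describe() - it now silently overrides it,
    // changing Base's behaviour without anyone intending that.
    // With @Override-enforcement enabled in the IDE, the *absence* of the
    // annotation here would be flagged, surfacing the clash at compile time.
    protected String describe() {
        return "derived";
    }
}
```

Any code in the new `Base` that calls `describe()` on a `Derived` instance will now get `"derived"` rather than `"base"`, which is exactly the kind of unintended consequence the bullet point warns about.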

Worst product names ever?

A while ago, I decided that EasyMock.NET wasn’t quite up to scratch, and I was going to try to write a replacement. I’m not doing much .NET development at the moment, so it’s not an issue any more (and when I go back to .NET I’ll look at Rhino Mocks which sounds promising). However, I did get as far as picking a name for the new project. I settled on PowerMock in the end, but went through a pun worthy of Simon Tatham first…

Firstly, it would have to start with “N”, wouldn’t it? NUnit, NCover etc… Well, what sounds a bit like mocking something, and starts with “N”? How about “NSult”, to be pronounced “insult”?

Next, suppose we wanted a new unit test system at the same time. Maybe something like TestNG but for .NET. Something to judge your code and decide whether it was good or not. Ooh, that sounds a bit like a court-room drama. How about “NJury”?

Of course, they work best when you put them together – adding NSult to NJury…

Sorry. I’ll go to bed now, promise.