
Elegant comparisons with the null coalescing operator

A while ago I commented on how I’d like a return? statement, which
only returned if the return value was non-null. The purpose of this was to remove
the irritation of implementing Equals and IComparable.CompareTo
on classes with several properties. For an example of the kind of thing I mean,
consider an Address class with properties Country,
State, City, ZipCode and HouseNumber.
(Apologies to readers who aren’t American – while I feel a traitor to my country for
using state instead of county and zip code instead of post code, I’m guessing there are
more readers from the US than from elsewhere.)

This Address class needs (for whatever reason) to be comparable to itself,
comparing the properties in the order described above, in normal string comparison order.
Let’s see how annoying that is without doing anything clever. (I haven’t included
any property implementations or constructors, but I’m sure you can all guess what they’d
look like. Similarly, I haven’t overridden object.Equals or object.GetHashCode,
but the implementations are trivial.)

using System;

public sealed class Address : IComparable<Address>
{
    string country;
    string state;
    string city;
    string zipCode;
    int houseNumber;
    
    public int CompareTo(Address other)
    {
        if (other==null)
        {
            return 1;
        }
        int ret = country.CompareTo(other.country);
        if (ret != 0)
        {
            return ret;
        }
        ret = state.CompareTo(other.state);
        if (ret != 0)
        {
            return ret;
        }
        ret = city.CompareTo(other.city);
        if (ret != 0)
        {
            return ret;
        }
        ret = zipCode.CompareTo(other.zipCode);
        if (ret != 0)
        {
            return ret;
        }
        return houseNumber.CompareTo(other.houseNumber);
    }
}

That’s ignoring the possibility of any of the properties being null. If
we want to include that possibility, it’s worth having a static helper method which
copes with nulls, along the lines of object.Equals(object, object).
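Such a helper might look something like this – a sketch only, and NullSafeCompare is just a name I’m inventing for this post:

```csharp
using System;

public static class Helper
{
    // Null-safe comparison in the spirit of object.Equals(object, object):
    // null is treated as less than any non-null value.
    public static int NullSafeCompare<T>(T first, T second)
        where T : class, IComparable<T>
    {
        if (first == null)
        {
            return second == null ? 0 : -1;
        }
        if (second == null)
        {
            return 1;
        }
        return first.CompareTo(second);
    }
}
```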

Now, if we don’t care about doing more comparisons than we really want to and
potentially creating an array each time, it wouldn’t be hard to implement a series
of overloaded methods along the lines of:

public static int ReturnFirstNonZeroElement(int first,
                                            int second,
                                            int third)
{
    return first != 0 ? first :
           second != 0 ? second :
           third;
}

(The array part would be when you implement ReturnFirstNonZeroElement(params int[] elements)
after you’d got enough overloads to get bored.)
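For completeness, the params overload itself is trivial – the cost mentioned above comes from the compiler creating a new array at each call site:

```csharp
public static class ComparisonHelper
{
    // Returns the first non-zero element, or 0 if there isn't one.
    public static int ReturnFirstNonZeroElement(params int[] elements)
    {
        foreach (int element in elements)
        {
            if (element != 0)
            {
                return element;
            }
        }
        return 0;
    }
}
```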

That still ends up being a lot of code though, and it’s doing unnecessary comparisons.
I’m not keen on micro-optimisation, of course, but it’s the inelegance of it that
bothers me. It feels like there must be a way of doing it nicely. With
C# 2.0 and the null coalescing operator, there is. (At this point I’m reminded that
the irritation actually came when writing Java, which of course doesn’t have anything
similar. Grr.) For those who are unaware of the null coalescing operator (and it’s one of the
least well publicised new features in C# 2.0) see
my brief coverage of it.
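The two-line version for those who just want a reminder: a ?? b evaluates to a if a is non-null, and to b otherwise.

```csharp
using System;

class CoalescingDemo
{
    static void Main()
    {
        int? maybe = null;
        // a ?? b evaluates to a if a is non-null, and to b otherwise
        Console.WriteLine(maybe ?? 10);  // prints 10
        maybe = 5;
        Console.WriteLine(maybe ?? 10);  // prints 5
    }
}
```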
Now consider the following helper method:

public static int? CompareFirstPass<T>(IComparable<T> first, T second) 
    where T : IComparable<T>
{
    if (first==null)
    {
        // If both are null they count as equal, so defer to the next comparison
        return (object)second==null ? (int?)null : -1;
    }
    // Assume CompareTo deals with second being null correctly
    int comparison = first.CompareTo(second);
    return comparison==0 ? (int?)null : comparison;
}

In short, this returns the result of the comparison if it’s non-zero, or null otherwise.
Now, with the null coalescing operator, this allows the Address class implementation
of CompareTo to be rewritten as:

public int CompareTo(Address other)
{        
    return other==null ? 1 :
           Helper.CompareFirstPass(country, other.country) ??
           Helper.CompareFirstPass(state, other.state) ??
           Helper.CompareFirstPass(city, other.city) ??
           Helper.CompareFirstPass(zipCode, other.zipCode) ??
           houseNumber.CompareTo(other.houseNumber);
}

It’s short, simple and efficient. Now, doesn’t that make you feel better? :)

Broken windows and unit testing

There’s quite possibly only one person in the world reading this blog who doesn’t think
it’s got anything to do with Vista. The windows in the title have nothing to do with
Microsoft, and I’m making no assertions whatsoever about how much unit testing gets done
there.

The one person who understands the title without reading the article is Stuart,
who lent me The Tipping Point
before callously leaving for ThoughtWorks,
a move which has significantly reduced my fun at work, with the slight compensation
that my fashionable stripy linen trousers don’t get mocked quite as much. The Tipping
Point is a marvellous book, particularly relevant for anyone interested in cultural
change and how to bring it about. I’m not going to go into too much detail about the
main premises of the book, but there are two examples which are fascinating in and
of themselves and show a possible path for anyone battling with introducing
agile development practices (and unit testing in particular) into an existing
environment and codebase.

The first example is of a very straightforward study: look at unused buildings, and
how the number of broken windows varies over time, depending on what is done with
them. It turns out that a building with no broken windows stays “pristine” for a
long time, but that when just a few windows have been broken, many more are likely
to be broken in a short space of time, as if the actions of the initial vandals
give permission to other people to break more windows.

The second example is of subway trains in New York, and how an appalling level
of graffiti on them in the 80s was vastly reduced in the 90s. Rather than trying
to tackle the whole problem in one go by throwing vast resources at the system,
or by making all the trains moderately clean, just a few trains were selected
to start with. Once they had been cleaned up, they were never allowed to run
if they had graffiti on them. Furthermore, the train operators noticed a pattern
in terms of how long it would take the “artists” in question to apply the graffiti,
and they waited until three nights’ work had been put in before cleaning the
paint off. Having transformed one set of trains, those trains were easier to keep
clean due to the “broken windows” effect above and the demotivating aspects of
the cleaning. It was then possible to move onto the next set, get them clean
and “stable”, then move on again.

I’m sure my readership (pretentious, eh?) is bright enough to see where this is
leading in terms of unit testing, but this would be a fairly pointless post if I
stopped there. Here are some guidelines I’ve found to be helpful in “test infecting” code,
encouraging good practice from those who might otherwise be sloppy (including myself)
and keeping code clean once it’s been straightened out in the first place. None of
them are original, but I believe the examples from The Tipping Point cast them in
a slightly different light.

Test what you work with

If you need to make a change in legacy code (i.e. code without tests), write
tests for the existing functionality first. You don’t need to test all of it,
but do your best to test any code near the points you’ll be changing. If you
can’t test what’s already there because it’s a
Big Ball of Mud
then refactor it very carefully until you can test it. Don’t start
adding the features you need until you’ve put the tests in for the refactored
functionality, however tempting it may be.

Anyone who later comes to work on the code should be aware that there are
unit tests around it, and they’re much more likely to add their own for whatever
they’re doing than they would be if they were having to put it under test
for the first time themselves.

Refactor aggressively

Once code is under test, even the nastiest messes can gradually get under
control, usually. If that weren’t the case, refactoring wouldn’t be much use,
as we tend to write terrible code when we first try. (At least, I do. I haven’t
seen much evidence of developers whose sense of design is so natural that
elegance flows from their fingers straight into the computer without at
least a certain amount of faffing. Even if they got it right for the current
situation, the solution isn’t likely to look nearly as elegant in a month’s
time when the requirements have changed.)

If people have to modify code which is hard to work with, they’ll tend to
add just enough code to do what they want, holding their nose while they do it.
That’s likely to just add to the problem in the long run. If you’ve refactored
to a sane design to start with, contributing new elegant code (after a couple
of attempts) is not too daunting a task.

Don’t tinker with no purpose

This almost goes against the point above, but not quite. If you don’t need
to work in an area, it’s not worth tinkering with it. Unless someone (preferably
you) will actually benefit from the refactoring, you’re only likely to provoke
negative feelings from colleagues if you start messing around. I had a situation
like this recently, where I could see a redundant class. It would have taken maybe
half an hour to remove it, and the change would have been very safe. However,
I wasn’t really using the class directly. Not performing the refactoring didn’t
hurt the testing or implementation of the classes I was actually changing, nor was it
likely to do so in the short term. I was quite ready to start tinkering anyway,
until a colleague pointed out the futility of it. Instead, I added a comment suggesting
that the class could go away, so that whoever really does end up in that area
next at least has something to think about right from the start. This is as much
about community as any technical merit – instead of giving the impression that
I had my own personal “not invented here” syndrome (and not enough “real work”
to do), the comment will hopefully provoke further thought into the design decisions
involved, which may affect not just that area of code but others that colleagues work
on. Good-will and respect from colleagues can be hard won and easily lost, especially
if you’re as arrogant as I can sometimes be.

Don’t value consistency too highly

The other day I was working on some code which was basically using the wrong naming
convention – the C# convention in Java code. No harm was being done, except everything
looked a bit odd in the Java context. Now, in order to refactor some other code towards
proper encapsulation, I needed to add a method in the class with the badly named methods.
Initially, I decided to be consistent with the rest of the class. I was roundly (and
deservedly) told off by the code reviewer (so much for the idea of me being her mentor –
learning is pretty much always a two-way street). As she pointed out, if I added another
unconventional name, there’d be even less motivation for anyone else to get things right in
the future. Instead of being a tiny part of the solution, I’d be adding to the problem.
Now, if anyone works in that class, I hope they’ll notice the inconsistency and be encouraged
to add any extra methods with the right convention. If they’re changing the use of an existing
method, perhaps they’ll rename it to the right convention. In this way, the problem can gradually
get smaller until someone can bite the bullet and make it all consistent with the correct
convention. In this case, the broken windows story is almost reversed – it’s as if I’ve
broken a window by going against the convention of the class, hoping that all the rest of
the windows will be broken over time too.

This was a tough one for me, because I’ve always been of the view that consistency
of convention is usually more important than the merit of the convention. The key here is that
the class in question was inconsistent already – with the rest of the codebase. It was only
consistent in a very localised way. It took me longer to understand that than it should have
done – thanks Emma!

Conclusion

Predicting and modifying human behaviour is an important part of software engineering
which is often overlooked. It goes beyond the normal “office politics” of jockeying for
position – a lot of this is just as valid when working solo on personal code. Part of
it is a matter of making the right thing to do the easy thing to do, too. If
we can persuade people that it’s easier to write code test-first, they’ll tend to
do it. Other parts involve making people feel bad when they’re being sloppy – which
follows naturally from working hard to get a particular piece of code absolutely clean just
for one moment in time.

With the right consideration for how future developers may be affected by changes we make
today – not just in terms of functionality or even design, but in attitude, we can
help to build a brighter future for our codebases.

The 7 Deadly Sins of Software Development


Introduction

Recently, Eric Gunnerson made a post
in his blog with the idea of “the seven deadly sins of
programmers”. Eric is posting his ideas for such a list one at a time, periodically. He invited others to write
their own lists, however, and it was such an intriguing idea that I couldn’t resist. The list is in descending
order of importance (I figure not everyone will make it to the bottom of this post) but the order is fairly
arbitrary anyway. Hopefully none of this will be a surprise to most of my readership, but it’s nice to write
as a sort of manifesto anyway. I’ve included a “personal guilt rating” out of 10 for each of these sins. I’m
very willing to change these in the light of feedback from those with experience of working with me :)

#1 – Overengineering (in complexity and/or performance)

Personal guilt rating: historically 8, currently 3

It’s amazing how much people care about performance. They care about the smallest things, even
if there’s not a chance that those things will have a significant impact in reality. A good example
of this is implementing a singleton. I’ve seen
the double-checked lock algorithm (as described on the page, except often broken) repeatedly brought
forward as the best way to go. There’s rarely any mention of the fact that you have to get it just right
(in terms of making the variable volatile or using explicit memory barriers) in order for it to be properly
thread-safe. There’s no way of making it work in Java, so anyone porting code later will end up with a bug
they’re very unlikely to be aware of. Yet people use it over simply locking every time on the grounds of
performance. Now, acquiring an uncontested lock is very, very cheap in .NET. Yes, it’ll be slightly more expensive
on multi-processor boxes, but it’s still mind-bogglingly quick. The time taken to lock, check a variable for nullity
and then unlock is unlikely to cause a significant issue in almost any real world application. There may be a very,
very few where it would become a problem – but developers of those applications can profile and change the code
when they’ve proved that it’s a problem. Until that time, using double-checked locking just adds complexity
for no tangible benefit.
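For comparison, the simple version I have in mind just locks every time – a sketch rather than the only way to do it (a static initializer is simpler still):

```csharp
public sealed class Singleton
{
    static Singleton instance;
    static readonly object padlock = new object();

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            // Take the lock on every access: cheap when uncontested,
            // and trivially thread-safe with no memory-model subtleties.
            lock (padlock)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}
```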

That’s just a single example. People are always asking performance questions on the newsgroups which just won’t
make a difference. For example, if you have a string reference in an Object expression, is it faster
to cast it to String or to call the toString method? Now, don’t get me wrong – I find
it very interesting to investigate this kind of thing, but it’s only as a matter of interest – not because I think
it should affect what code should be written. Whatever the result, the simplest code which achieves
the result in the most readable fashion is the best code until performance has proved to be an issue.

I should stress that this doesn’t extend to being stupid about performance. If I’m going to concatenate an unknown
number of strings together, I’ll use StringBuilder
of course – that’s what it’s designed for, and I’ve seen that it can make a huge difference in real world situations.
That’s the key though – it’s evidence based optimisation. In this case it’s general past evidence, whereas for many other
situations I’d use application-specific evidence, applying an optimisation which may make the code harder to read only
when I’ve proved that the particular application in question is suffering to a significant extent.
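To make the StringBuilder point concrete, here’s a sketch (ConcatAll is just a name for this post) of the kind of loop where it earns its keep:

```csharp
using System.Text;

public static class StringHelper
{
    // One buffer, appended to in place - rather than a new intermediate
    // string per concatenation, which is what repeated += would give.
    public static string ConcatAll(string[] parts)
    {
        StringBuilder builder = new StringBuilder();
        foreach (string part in parts)
        {
            builder.Append(part);
        }
        return builder.ToString();
    }
}
```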

The other side of this issue is complexity in terms of what the solution can achieve. These days,
I mostly code for testability, knowing that if I can test that my code does what it wants to, chances are
it’ll be flexible enough to meet my needs without being overly complicated. I look for complexity all over the place,
particularly in terms of trying to anticipate a complicated requirement without knowing that the requirement will
actually be used. For instance, if I’m writing some kind of collection (not that that happens often), I won’t add sorting capabilities
until I know they’re needed. It just creates more work if you write unnecessary code. Of course, when writing libraries for
public consumption, things become much trickier – you basically can’t tell how a library will be used ahead of time,
so you may well end up adding features which are rarely used. The closer the communication between the code and its clients,
the better. (This is also relevant in terms of performance. One area I did spend time optimising was the
enhanced locking capabilities part of my miscellaneous
utility library. Locking is cheap, so any replacement for “standard” locking should also be cheap, otherwise it discourages
its own use.)

When I look at a design and see that it’s simple, I’m happy. When I look at implementation and see that it’s simple, I’m happy.
It’s not “clever” to write complicated code. Anyone can write complicated code – particularly if they don’t check that it works.
What takes time is writing simple code which is still powerful.

#2 – Not considering the code’s readership

Personal guilt rating: 4

This is actually closely linked to the first sin, but with a more human face. We know that code is generally in
maintenance for longer than it’s under initial development. We know that companies often (not always, thankfully) put their
“top developers” (however they decide to judge that) onto feature development rather than maintenance. These make it
essential that we consider “the next guy” when writing our code. This doesn’t necessarily mean reams of documentation –
indeed, too much documentation is as bad as too little documentation, as it can make the code harder to read (if it’s
inline code documentation) or be hard to find your way around (if it’s external documents). One project I was working on
decided to extract one document describing data formats from the main system architecture document, and we found that the
extracted document became absolutely crucial to both testers and coders. That document was kept accurate, and was short enough
to be easy to follow. The document from which it was extracted was rarely used.

Of course, the simpler the code, the less documentation is required. Likewise, well-written unit tests can often express the
correct behaviour (and expected use) of a class more succinctly than lots of documentation – as well as being kept accurate
automatically.

There are times when writing good documentation is very difficult indeed. Recently I wrote a class which did exactly the right
thing, and did it in an elegant manner. However, explaining what its purpose was even in person was difficult. Understanding
it from just the documentation would be almost impossible – unless the reader looked at what problem was being solved and worked
through what was required, which would lead fairly naturally to the same kind of solution. I’m not proud of this. I’m proud of the
class itself, but I don’t like finding myself stuck for words.

Sometimes, simple code (in terms of number of characters) is quite complicated. At one point on a project I had to find
out whether exactly one bit was set in a long. I’m no good at remembering the little tricks involved for this kind of thing,
but I’m aware they exist, and using them can be a lot more reliable than writing bit-twiddling loops to do things in a
long-winded fashion. In this case, we found the appropriate trick on the web, and included a link in the code. Without the link
(or at least a description in a comment) the code would have been effectively incomprehensible to anyone who didn’t recognise the
trick.
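For what it’s worth, the trick in question was the classic “clear the lowest set bit” test – shown here as a sketch, with the comment doing the job the link did in the real code:

```csharp
public static class BitUtil
{
    // x & (x - 1) clears the lowest set bit, so the result is zero
    // exactly when x has at most one bit set; the x != 0 test then
    // rules out zero itself.
    public static bool IsSingleBitSet(long x)
    {
        return x != 0 && (x & (x - 1)) == 0;
    }
}
```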

Code reviews definitely help readability – when they’re done properly. In some ways, they can be better than pair programming
for this, particularly if the original developer of the code doesn’t try to explain anything to the reviewer. At this point
the reviewer is able to take on the role of the first person who has to maintain the code – if anything isn’t clear just from
what’s available in terms of code and documentation (as opposed to human intervention) then that probably needs a bit of work.
Not verbally explaining what’s happening when you’re the author is incredibly difficult to do, and I’m very bad at it. It adds
time to the review, and when you’re under a lot of pressure (and who isn’t?) it can be very frustrating to watch someone
painstakingly understanding your code line by line. This is not to say that code reviews shouldn’t be the source of discussion
as well, of course – when I’m working with people I respect, I rarely come through a review without some changes to my code,
and the reverse is true too. After all, what are the chances that anyone gets something absolutely right to start with?

#3 – Assuming your code works

Personal guilt rating: historically 9, currently 3

Ever since I heard about unit testing I’ve seen the appeal, but I didn’t start to use
it regularly until 2005 when I joined Clearswift and met Stuart. Until then, I hadn’t heard about mock objects
which I find absolutely crucial in unit testing. I had read articles and seen examples of unit tests, but everything seemed to fall
apart when I tried to write my own – everything seemed to need something else. Of course, taken to extremes this is often a fault
of the code itself, where some classes require a complete system to be up and running before they can do anything. Unit testing
such a beast is difficult to say the least. In other situations you merely need to be able to specify a collaborator, which is where
mock objects come in.
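To sketch what I mean by specifying a collaborator (all the names here are invented for this post): the class under test depends on an interface, and the test supplies a recording fake in place of the real system.

```csharp
public interface IMailSender
{
    void Send(string address, string body);
}

public class Notifier
{
    readonly IMailSender sender;

    // The collaborator is specified from outside, so a test can
    // substitute a fake without a real mail system being up.
    public Notifier(IMailSender sender)
    {
        this.sender = sender;
    }

    public void NotifyAll(string[] addresses, string body)
    {
        foreach (string address in addresses)
        {
            sender.Send(address, body);
        }
    }
}

// A hand-rolled stand-in; a mock object library generates this sort
// of thing for you, with expectations and verification built in.
public class FakeMailSender : IMailSender
{
    public int Calls;

    public void Send(string address, string body)
    {
        Calls++;
    }
}
```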

So, since finding out about mock objects I’ve been more and more keen on unit testing. I know it’s not a silver bullet – I know that more
testing is required, both at an integration level between components, and at a system level, sometimes including manual tests. However,
once I started unit testing regularly, I got a sense of just how often code which I’d assumed would work simply wouldn’t. I’ve seen examples
of code which could never have possibly worked, with any input – so they can’t possibly have been used, or the results weren’t actually
important in the first place.

These days, I get frustrated when I either have to work with code which isn’t under test and can’t be easily put under
test due to the design (see point 7 on Eric’s list, excessive coupling) or when I’m writing code which is necessarily difficult to test.
Obviously I try to minimise the amount of the code which really can’t be tested – but sometimes it’s a significant amount. Urgh.

I currently don’t have many tests around my Miscellaneous Utility Library. I’ve
resolved to add tests for any new features I add or bugs I find though. Someone mailed me about a bug within the Utf32String
class. In the process of writing a unit test to demonstrate that bug I found at least two or three others – and that wasn’t even
trying to exercise the whole code, just enough to get to the original bug. I only mention this as a defence against the “I don’t need
unit tests – my code doesn’t have bugs in it” mentality. I would really like to think that I’m a pretty good coder – but everyone makes mistakes.
Unit tests won’t catch all of them, but they catch an awful lot. They also act as documentation and give you a much better base on which to
refactor code into the right shape, of course…

#4 – Using the wrong tool for the job

Personal guilt rating: 3

“When all you have is a hammer, everything looks like a nail.” That’s the typical quote used when tackling this topic. I hope this
sin is fairly self-explanatory. If all you know is Java and C#, you’ll find it a lot harder to solve problems which are best solved
with scripts, for instance. If all you know is C, you’ll find writing a complex web app a lot harder than you would with Java or
.NET. (I know it’s doable, and I used to do it – but it was really painful.) If you’re writing a device driver, you’ll find life
sucks pretty hard if you don’t know C or C++, I suspect.

Using the wrong tool for the job can be really damaging in terms of maintenance. While bad code can often be refactored into
good code over time (with a lot of effort) there are often significant implications in changing implementation language/technology.

This is a really good reason to make sure you keep yourself educated. You don’t need to necessarily keep up to date with all
the buzzword technologies – and indeed you’d find you did nothing else if you tried to keep up with everything – but there’s
always plenty to learn. Recently I’ve been looking at
Windows PowerShell
and Groovy. Next on my list is Squeak (Smalltalk).
I’ve been promising myself that I’d learn Smalltalk for years – at a recent Scrum
training course I met yet another Smalltalk evangelist, who had come from the Java side of things. It’s got to be worth trying…

#5 – Excessive code pride

Personal guilt rating: 2

I was recently pair programming and looking at some code Stuart had written a couple of weeks before. There were various bits
I wasn’t sure about, but thought were probably a bit smelly, and I asked my pairing partner to include a lot of comments
such as // TODO: Stuart to justify this code and
// TODO: Stuart to explain why this test is useful in the slightest. It’s worth bearing in mind at this point that
Stuart is significantly senior to me. With some people, comments like this would have been a career-limiting move. Stuart,
however, is a professional. He knows that code can usually be improved – and that however hard we try, we sometimes take
our eyes off the ball. Stuart had enough pride in his code to feel a need to fix it once the flaws had been pointed out,
but not enough pride to blind him to the flaws in the first place. This appropriate level of pride is vital when you’re working
with others, in my view. I don’t mind if people change my code – assuming they improve it. I expect that to happen
at a good code review, if I haven’t been pairing. A code review which is more of a rubber stamp than anything else is just a
waste of time.

This doesn’t mean I will always agree with others, of course. If I think my design/code is better than their suggested
(or even committed!) change I’m happy to put my case robustly (and with no deference to seniority) – but usually if someone’s
put in sufficient effort to understand and want to change my code in the first place, chances are they’ll think of something
I haven’t.

Write code you can be proud of – but don’t be proud to the point of stubbornness. Be prepared to take ownership of code you write
in terms of being responsible for your own problems – but don’t try to own it in terms of keeping other people out of it.

#6 – Failing to acknowledge weaknesses

Personal guilt rating: 2

ASP.NET, JSP, SQL, Perl, COBOL, Ruby, security, encryption, UML, VB… what do they all have in common? I wouldn’t claim
to “know” any of them, even though I use some on a regular basis. They are just some of my many technological weak spots.
Ask me a question about C# or Java in terms of the languages and I’ll be fairly confident about my chances of
knowing the answer (less so with generics). For lots of other stuff, I can get by. For the rest, I would need to immediately
turn to a book or a colleague who knows the subject. It’s important to know what you know and what you don’t.

The result of believing you know more than you actually do is usually writing code which just about works, at least in
your situation, but which is unidiomatic, probably non-performant, and quite possibly fails on anything other than the
data you’ve given it.

Ask for help when you need it, and don’t be afraid to admit to not being an expert on everything. No-one is an expert
at everything, even though Don Box does a pretty good impression of such a person…

#7 – Speaking with an accent

Personal guilt rating: 6 (but conscious and deliberate in some places)

Some of the worst Java code I’ve seen has come from C++ developers who then learned Java. This code typically
brings idioms of C/C++ such as tests like if (0==x) (which is safer than if (x==0) in C, as missing
out an equals sign would just cause an accidental assignment rather than a compiler error). Similarly, Java code which
assumes that double and Double mean the same thing (as they do in C#) can end up behaving
contrary to expectations.

This is related to sin #6, in terms of people’s natural reaction to their own ignorance: find something similar that
they’re not ignorant about, and hope/assume that the similarities are enough to carry them through. In a way, this can
make it a better idea to learn VB.NET if you know Java, and C# if you know VB. (There are other pros and cons, of course –
this is only one tiny aspect.)

One way of trying to make yourself think in the language you’re actually writing in rather than the similar language you’re
more familiar with is to use the naming conventions of the target language. For instance, .NET methods are conventionally
PascalCased whereas Java methods are conventionally camelCased. If you see Pascal casing in Java code, look for other C#
idioms. Likewise, if you see method_names_with_underscores in either language, look for C/C++ idioms. (The most obvious C
idiom is likely to be checking return codes instead of using exceptions.)

Naming conventions are the most obvious “tell” of an accent, but sometimes it may be worth going against them deliberately.
For instance, I like the .NET conventions of prefixing interfaces with I, and of using Pascal casing for constants instead of
JAVA_SHOUTY_CONSTANTS. It’s important when you favour this sort of “breakaway” behaviour that you consult with the rest of
the team. The default should always be the conventions of the language you’re working in, but if the whole team decides that
using parts of a different convention helps more than it reduces consistency with other libraries, that’s reasonable.

What isn’t so reasonable is breaking the coding idioms of a language – simply because other languages’ idioms
tend not to work. For example, RAII just doesn’t work in Java, and isn’t
automatic in C# either (you can “fake it” with a using statement, but I’m not sure that really counts as RAII).
Idiomatic code tends to be easier to read (particularly for those who are genuinely familiar with the language to start with)
and less semantically troublesome than “imported” styles from other languages.
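For reference, the using statement I mean looks like this (the method and file name are just for illustration) – Dispose is called when the block exits, even via an exception, but only because the caller remembered to write the statement, which is why I’m not sure it counts as RAII:

```csharp
using System.IO;

public static class UsingDemo
{
    public static void WriteGreeting(string path)
    {
        // writer.Dispose() runs when the block exits, even via an
        // exception - deterministic cleanup, but only because the caller
        // remembered the using statement, unlike a C++ destructor.
        using (StreamWriter writer = new StreamWriter(path))
        {
            writer.WriteLine("hello");
        }
    }
}
```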

Conclusion

So, that’s the lot. Ask me in a year and I may well have a different list, but this seems like a good starting point.
I’ve committed every sin in here at least once, so don’t feel too guilty if you have too. If you agree with them
and are still breaking them, that’s worth feeling a bit guilty about. If you disagree with them to start with,
that’s fair enough – add a comment to say why :)

Faith blog

I decided today that I could do with somewhere to dump random thoughts about my faith in the same way that I dump random thoughts about computing here. Clearly this isn’t the right place to do it, so I’ve set up a Faith Blog with Blogger. Some of you may be interested in it. Don’t let it bother you if not. This is the last you’ll hear about it unless there’s some topic which directly affects both areas.

Vista and External Memory Devices

Update – read the first two comments. I'm leaving the rest of the article as it is in order to avoid revisionism. The solution is in the first two comments though.

According to the Windows Vista feature page, Vista is going to be able to use external memory devices (USB flash drives and the like to you and me) to act as extra memory to save having to go to the hard disk. I've heard this mentioned at a few places, and it's always asserted that EMDs are slower than memory but "much, much faster" than disks. This has just been stated as a fact that everyone would just go along with. I've been a bit skeptical myself, so I thought I'd write a couple of very simple benchmarks. I emphasise the fact that they're very simple because it could well be that I'm missing something very important.

Here are three classes. Writer just writes out however many blocks of 1MB data you ask it to, to whichever file you ask it to. Reader simply reads a whole file in 1MB chunks. RandomReader reads however many 1MB chunks you ask it to, seeking randomly within the file between each read.

Writer

using System;
using System.IO;

public class Writer
{
    static void Main(string[] args)
    {
        Random rng = new Random();
        
        byte[] buffer = new byte[1024*1024];
        
        DateTime start = DateTime.Now;
        using (FileStream stream = new FileStream (args[0], FileMode.Create))
        {
            int blocks = int.Parse(args[1]);
            for (int i=0; i < blocks; i++)
            {
                rng.NextBytes(buffer);
                Console.Write(".");
                stream.Write(buffer, 0, buffer.Length);
            }
        }
        DateTime end = DateTime.Now;
        Console.WriteLine();
        Console.WriteLine (end-start);
    }
}

Reader

using System;
using System.IO;

public class Reader
{
    static void Main(string[] args)
    {
        byte[] buffer = new byte[1024*1024];
        
        DateTime start = DateTime.Now;
        int total=0;
        using (FileStream stream = new FileStream (args[0], FileMode.Open))
        {
            int read;
            while ( (read=stream.Read (buffer, 0, buffer.Length)) > 0)
            {
                total += read;
                Console.Write(".");
            }
        }
        DateTime end = DateTime.Now;
        Console.WriteLine();
        Console.WriteLine (end-start);
        Console.WriteLine (total);
    }
}

RandomReader

using System;
using System.IO;

public class RandomReader
{
    static void Main(string[] args)
    {
        byte[] buffer = new byte[1024*1024];
        
        Random rng = new Random();
        DateTime start = DateTime.Now;
        int total=0;
        using (FileStream stream = new FileStream (args[0], FileMode.Open))
        {
            int length = (int) stream.Length;
            int blocks = int.Parse(args[1]);
            for (int i=0; i < blocks; i++)
            {
                stream.Position = rng.Next(length-buffer.Length);                
                total += stream.Read (buffer, 0, buffer.Length);
                Console.Write(".");
            }
        }
        DateTime end = DateTime.Now;
        Console.WriteLine();
        Console.WriteLine (end-start);
        Console.WriteLine (total);
    }
}
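One caveat with the timing in all three programs (my observation, not part of the original tests): DateTime.Now typically ticks at roughly 10-15ms resolution, which is fine for multi-second runs like these but unreliable for anything shorter. System.Diagnostics.Stopwatch uses the high-resolution performance counter where available and would be the safer choice:

```csharp
using System;
using System.Diagnostics;

public class StopwatchDemo
{
    public static void Main()
    {
        // StartNew creates the Stopwatch and starts it in one call.
        Stopwatch timer = Stopwatch.StartNew();

        long total = 0;
        for (int i = 0; i < 1000000; i++)
        {
            total += i;
        }

        timer.Stop();
        // Elapsed is a TimeSpan, directly comparable to the DateTime
        // subtraction used in the benchmarks above.
        Console.WriteLine("Elapsed: " + timer.Elapsed);
        Console.WriteLine("Total: " + total);
    }
}
```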

I have five devices I can test: a 128MB Creative Muvo (USB), a 1GB PNY USB flash drive, a Viking 512MB SD card, my laptop hard disk (fairly standard 60GB Hitachi drive) and a LaCie 150GB USB hard disk. (All USB devices are USB 2.0.) The results are below. This is pretty rough and ready – I was more interested in the orders of magnitude than exact figures, hence the low precision given. All figures are in MB/s.

Drive             Write   Stream read   Random read
Internal HDD      17.8    24            22
External HDD      14      20            22
SD card           2.3     7             8.3
1GB USB stick     3.3     10            10
128MB USB stick   1.9     2.9           3.5

Where possible, I tried to reduce the effects of caching by mixing the tests up, so I never ran two tests on the same location in succession. Some of the random reads will almost certainly have overlapped each other within a test, which I assume is the reason for some of the tests showing faster seek+read than streaming reads.

So, what's wrong with this picture? Why does MS claim that flash memory is much faster than hard disks, when my flash drives appear to be much slower than my laptop and external drives? (Note that laptop disks aren't noted for their speed, and I don't have a particularly fancy one.) It doesn't appear to be the USB bus – the external hard disk is fine. The 1GB stick and the SD card are both pretty new, although admittedly cheap. I doubt that either of them are worse quality than the majority of flash drives in the hands of the general public now, and I don't expect the average speed to radically increase between now and the Vista launch, in terms of what people actually own.

I know my tests don't accurately mimic how data will be accessed by Vista – but how is it so far out? I don't believe MS would have invested what must have been a substantial amount of resource into this feature without conducting rather more accurate benchmarks than my crude ones. I'm sure I'm missing something big, but what is it? And if flash can genuinely work so much faster than hard disks, why do flash cards perform so badly in simple file copying etc?

Worst product names ever?

A while ago, I decided that EasyMock.NET wasn’t quite up to scratch, and I was going to try to write a replacement. I’m not doing much .NET development at the moment, so it’s not an issue any more (and when I go back to .NET I’ll look at Rhino Mocks which sounds promising). However, I did get as far as picking a name for the new project. I settled on PowerMock in the end, but went through a pun worthy of Simon Tatham first…

Firstly, it would have to start with “N” wouldn’t it? NUnit, NCover etc… Well, what sounds a bit like mocking something and starts with “N”? How about “NSult”, to be pronounced “insult”?

Next, suppose we wanted a new unit test system at the same time. Maybe something like TestNG but for .NET. Something to judge your code and decide whether it was good or not. Ooh, that sounds a bit like a court-room drama. How about “NJury”?

Of course, they work best when you put them together – adding NSult to NJury…

Sorry. I’ll go to bed now, promise.