Category Archives: C#

OS Jam at Google London: C# 4 and the DLR

Last night I presented for the first time at the Google Open Source Jam at our offices in London. The room was packed, but only a very few attendees were C# developers. I know that C# isn’t the most popular language on the Open Source scene, but I was still surprised there weren’t more people using C# for their jobs and hacking on Ruby/Python/etc at night.

All the talks at OSJam are just 5 minutes long, with 2 minutes for questions. I’m really not used to this format, and felt extremely rushed… however, it was still a lot of fun. I used a somewhat different approach to my slides than the normal “bullet points in PowerPoint” – and as it was only short, I thought I might as well effectively repeat the presentation here in digital form. (Apologies if the images are an inconvenient size for you. I tried a few different ones, and this seemed about right. Comments welcome, as I may do a similar thing in the future.)

First slide

Introductory slide. Colleagues forced me to include the askjonskeet.com link.

Second slide

.NET isn’t Open Source. You can debug through a lot of the source code for the framework if you agree to a “reference licence”, but it’s not quite the same thing.

Third slide

.NET isn’t Open Source, but the DLR is. And IronRuby. And IronPython. Yay!

And of course Mono is Open Source: the DLR and Mono play nicely together, and the Mono team is hoping to implement the new C# 4.0 features for the 2.8 release in roughly the same timeframe as Microsoft.

Fourth slide

This is what .NET 4.0 will look like. The DLR will be included in it, despite being open source. IronRuby and IronPython aren’t included, but depend heavily on the DLR. (Currently available versions allow you to use a “standalone” DLR or the one in .NET 4.0b1.)

C# doesn’t really depend on the DLR except for its handling of dynamic. C# is a statically typed language, but C# 4.0 has a new static type called dynamic which you can do just about anything with. (This got a laugh, despite being a simple and mostly accurate summary of the dynamic typing support in C# 4.0.)
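For anyone who hasn’t seen it, here’s a minimal sketch of what that looks like – my own example, not one from the slides:

dynamic d = "hello";
Console.WriteLine(d.Length); // 5 - member lookup happens at execution time
d = new int[] { 1, 2, 3 };
Console.WriteLine(d.Length); // 3 - same code, completely different member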

Fifth slide

The fundamental point of the DLR is to handle call sites – decide what to do dynamically with little bits of code. Oh, and do it quickly. That’s what the caches are for. They’re really clever – particularly the L0 cache which compiles rules (about the context in which a particular decision is valid) into IL via dynamic methods. Awesome stuff.

I’m sure the DLR does many other snazzy things, but this feels like it’s the core part of it.

Sixth slide

At execution time, the relevant binder is used to work out what a call site should actually do. Unless, that is, the call has a target which implements the shadowy IDynamicMetaObjectProvider interface (winner of “biggest mouthful of a type name” prize, 2009) – in which case, the object is asked to handle the call. Who knows what it will do?
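As an illustration (mine, not the talk’s): the easiest way onto that path in .NET 4.0 is to derive from System.Dynamic.DynamicObject, which implements the interface for you. A minimal sketch:

using System;
using System.Dynamic;

class AnswerAnything : DynamicObject
{
    // Called for any method invocation on a dynamic reference to this object
    public override bool TryInvokeMember(
        InvokeMemberBinder binder, object[] args, out object result)
    {
        result = "You called " + binder.Name;
        return true;
    }
}

// dynamic obj = new AnswerAnything();
// Console.WriteLine(obj.Frobnicate()); // "You called Frobnicate"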

Seventh slide

Beautifully syntax-highlighted C# 4.0 source code showing the dynamic type in action. The method calls on lines 2 and 3 are both dynamic, even though in the latter case it’s just using a static method. Which overload will it pick? It all depends on the type of the actual value at execution time.
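The slide itself isn’t reproduced here, but the shape of the code was something like this (a reconstruction, not the actual listing):

class Test
{
    static void Report(int x) { Console.WriteLine("int overload"); }
    static void Report(string x) { Console.WriteLine("string overload"); }

    static void Main()
    {
        dynamic value = "hello";
        Console.WriteLine(value.Length); // dynamic member access
        Report(value); // overload chosen at execution time: "string overload"
    }
}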

If I’d had more time, I’d have demonstrated how the C# compiler preserves the static type information it knows at compile time for the execution time binder to use. This is very cool, but would take far too long to demonstrate in this talk – especially to a bunch of non-C# developers.

Eighth slide

There were a couple of questions, but I can’t remember them offhand. Someone asked me afterwards about how all this worked on non-.NET implementations (i.e. Mono, basically). I gather the DLR itself works, but I don’t know whether C# code compiled in the MS compiler will work at the moment – it embeds references to binder types in Microsoft.CSharp.dll, and I don’t know what the story is about that being supported on Mono.

This is definitely the format I want to use for future presentations. It’s fun to write, fun to present, and I’m sure the “non-professionalism” of it makes it a lot more interesting to watch. Although it’s slower to create text-like slides (such as the first and the last one) this way, the fact that I don’t need to find clip-art or draw boxes with painful user interfaces is a definite win – especially as I’m going to try to be much more image-biased from now on. (I don’t want people reading slides while I’m talking – they should be listening, otherwise it’s just pointless.)

Dynamic type inference and surprising possibilities

There have been mutterings about the fact that I haven’t been blogging much recently. I’ve been getting down to serious work on the second edition of C# in Depth, and it’s taking a lot of my time. However, I thought I’d share a ghastly little example I’ve just come up with.

I’ve been having an email discussion with Sam Ng, Chris Burrows and Eric Lippert about how dynamic typing works. Sam mentioned that even for dynamically bound calls, type inference can fail at compile time. This can only happen for type parameters where none of the dynamic values contribute to the type inference. For example, this fails to compile:

static void Execute<T>(T item, int value) where T : struct {}

dynamic guid = Guid.NewGuid();
Execute("test", guid);

Whatever the value of guid is at execution time, this can’t possibly manage to infer a valid type argument for T. The only type which can be inferred is string, and that’s not a value type. Fair enough… but what about this one?

static void Execute<T>(T first, T second, T third) where T : struct {}

dynamic guid = Guid.NewGuid();
Execute(10, 0, guid);
Execute(10, false, guid);
Execute("hello", "hello", guid);

I expected the first call to compile (but fail at execution time) and the second and third calls to fail at compile time. After all, T couldn’t be both an int and a bool, could it? And then I remembered implicit conversions… what if the value of guid isn’t actually a Guid, but some struct which has an implicit conversion from int, bool and string? In other words, what if the full code actually looked like this:

using System;

public struct Foo
{
    public static implicit operator Foo(int x)
    {
        return new Foo();
    }

    public static implicit operator Foo(bool x)
    {
        return new Foo();
    }

    public static implicit operator Foo(string x)
    {
        return new Foo();
    }
}

class Test
{
    static void Execute<T>(T first, T second, T third) where T : struct {}

    static void Main()
    {
        dynamic foo = new Foo();
        Execute(10, 0, foo);
        Execute(10, false, foo);
        Execute("hello", "hello", foo);
    }
}

Then T=Foo is a perfectly valid inference. So yes, it all compiles – and the C# binders even get it all right at execution time. So much for any intuition I might have about dynamic typing and inference…

No doubt I’ll have similar posts about new C# 4 features occasionally… but they’re more likely to be explanations of misunderstandings than deep insights into a correct view of the language. Those end up in the book instead :)

A different approach to inappropriate defaults

I’ve had a couple of bug reports about my Protocol Buffers port – both nicely detailed, and one including a patch to fix it. (It’s only due to my lack of timeliness in actually submitting the change that the second bug report occurred. Oops.)

The bug was in text formatting (although it also affected parsing). I was using the default ToString behaviour for numbers, which meant that floats and doubles were being formatted as "50,15" in Germany instead of "50.15". The unit tests caught this, but only if you ran them on a machine with an appropriate default culture.
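The fix is the usual one: specify the culture explicitly instead of relying on the thread’s default. For example:

using System.Globalization;

double value = 50.15;
string local = value.ToString(); // "50,15" under a de-DE default culture
string portable = value.ToString(CultureInfo.InvariantCulture); // always "50.15"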

Aaargh. I’ve been struggling with a similar problem in a library I can’t change, which uses the system default time zone for various calculations in Java. When you’re running server code, the default time zone is almost never the one you want to use, and it certainly isn’t in my case.

A similar problem is Java’s decision to use the system default encoding in all kinds of bizarre places – FileReader doesn’t even let you specify the encoding, which makes it almost entirely useless in my view.

So I’ve been wondering how we could fix this and problems like it. One option is to remove the defaults completely, so that you always had to pass in a CultureInfo/Locale, TimeZoneInfo/TimeZone or Encoding/Charset when calling any method which might be culturally sensitive.

Making life easier (in .NET)

It strikes me that .NET has a useful abstraction here: the assembly as the unit of deployment. (Java’s closest equivalent is probably a jar file, which gets messier.)

Within one assembly, I suspect in many cases you always want to make the same decision. For example, in protocol buffers I would like to use the invariant culture all the time. It would be nice if I could say that, and then get the right behaviour by default. Here are the options I’d like to be able to apply (for each of culture, time zone and character encoding – there may be others):

  • Use a culture-neutral default (the invariant culture, UTF-8, UTC)
  • Use a specific set of values (e.g. en-GB, Windows-1252, "Europe/London")
  • Use the system default values
  • Use whatever the calling assembly is using

Of course you should still have the option of specifying overrides on a per call basis, but I think this might be a way forward.
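Purely as a sketch of the idea – none of these attributes actually exist – assembly-level defaults might be declared something like this:

// Hypothetical attributes: nothing like this exists in the framework
[assembly: DefaultCulture(CultureOption.Invariant)]
[assembly: DefaultTimeZone("Europe/London")]
[assembly: DefaultTextEncoding("UTF-8")]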

Thoughts? I realise it’s almost certainly too late for this to actually be implemented now, but would it have been a good idea? Or is it just an alternative source of confusion?

Language proliferation

I’ve always been aware that .NET supports multiple languages (obviously) and that Microsoft has been experimenting with this to some extent. It’s only recently struck me just to what extent this is the case though.

Here’s a list – almost certainly incomplete – of .NET languages from Microsoft alone.

Some of these are research languages, more important for the ideas they’ve contributed to more mainstream languages later on than for anything else – but there’s still a lot of effort represented in the list.

In addition, there are third party languages targeting .NET, such as Boo, IronScheme and Scala. (Wikipedia lists loads of them.)

Now, think back to the time before .NET. Was Microsoft actively experimenting with languages back then? Plenty of people were trying things against the JVM, but Sun was pretty much absent from that party. .NET seems to be a "missing ingredient" that has allowed smart folk at Microsoft to let their imaginations loose in ways which they couldn’t previously. (Of course, not everyone in the language business at MS started there: Jim Hugunin was hired by Microsoft precisely because of his work on IronPython.)

I wonder how long this will continue.

Tower of Babel, or land of polyglots?

What does this mean for the average developer? Currently, if you’re writing a non-web application in .NET, you really only need to know a single language – and any of them will do. (Plus potentially SQL of course…) Compare this with web developers who have to be intimately familiar with HTML, CSS and JavaScript – and the differences between various implementations.

How long will it be before backend developers are expected to know a dynamic language, a static OO language and a functional language? Is the benefit of mixing several languages in a project worth the impedance mismatch and the increased skillset requirements? I’m not going to make any predictions on that front – I can certainly see the benefits of each of these approaches in certain situations. They’ve been designed to play well together, but there are bound to be limitations and oddities: times when you need to change how you write your F# so that it’s easily callable from C#, for example.

Whether or not you learn multiple languages to a professional level is one thing, but becoming familiar with them is a different matter. In the course of co-authoring Functional Programming for the Real World (where "co-author" is a bit of a stretch as a title – I’ve played more of an editorial role really, with the added bonus of picking on Tomas whenever I felt he was perhaps a little harsh towards C#) I’ve learned to appreciate many of F#’s qualities, but I don’t really know the language. If someone asked me to write a complete application in it (rather than just a toy experiment) I’d be reaching for books every other minute. I hope I’ll learn more over the course of time, but I doubt that I’ll ever be sufficiently experienced in it to put it on my CV. The same goes for IronPython, although I’m considerably more likely to need Python at work than I am F#. (Python is one of the three "approved" languages at Google, along with Java and C++.) None of this means that time spent in these languages is wasted: I’ll be able to apply a lot of what I’ve learned about F# to my C# coding, even if it will make me pine for things like pattern matching and asynchronous workflows periodically.

I think it’s pretty much a given that these days we all need to bring a wide range of technologies to bear in most jobs. While it used to be just about feasible in the .NET 1.1 days to have a pretty good grasp of all the major aspects (ASP.NET for sites and web services, ADO.NET, WinForms, Windows services, class libraries, interop) it’s just impossible these days. We learn something new when we need to – but usually against the background of a familiar language. How well would we cope if we had to learn whole new languages (to the level of being able to use them for production code) as often as we have to learn new libraries?

This worries me a little. I’m pleased to see that C# 4 is a much smaller change than the previous versions were. Admittedly I’d rather have had immutability support than dynamic, but that’s just me… and that’s the problem, too. While I worry about our ability to actually learn everything that’s becoming available, it’s all good stuff. Can there be "too much of a good thing"?

What I really don’t want to see is developers having to know multiple languages, and everyone knowing them poorly. I’m a big believer in having a thorough understanding of your language, so that even if everything else is new, you can rely on your understanding of that aspect of your code. It would be a shame if the pressure of knowing many languages turned many of us into cargo cult programmers. The utopia would be for us all to turn into language renaissance developers. I suspect the reality will be somewhere between the two.

Still, as long as I get to keep helping authors write about languages I know almost nothing about, I’m sure I’ll be happy…

RFC: C# in Depth 2nd edition, proposed changes and additions

As I’ve mentioned in passing before now, I’ve started working on the 2nd edition of C# in Depth, to roughly coincide with the release of C# 4 and Visual Studio 2010. So far I’ve just been thinking about what should be added and what should be changed or removed. That’s what I’d like to share today, and I’m really eager for feedback. In particular, I’d like to think of three audiences:

  • Developers (possibly hobbyists) who have read one introductory C# book (Head First C#, Microsoft Visual C# 2008 Step by Step or something similar) and are now looking to follow it up. I’m thinking of people with less than a year of full-time development in C#, and not a lot of experience in similar languages. Is C# in Depth too “hard” for this audience? If so, should I try to make it more accessible for them, or is the material inherently too advanced? What might attract such a person to the book in terms of the table of contents?
  • Reasonably experienced developers who know C# 2 and 3 already, but haven’t read C# in Depth. They may have heard of it before (I’ve been very pleased in terms of word-of-mouth for the first edition, so thanks to everyone who’s contributed to that!) but not had much real reason to buy another book when they already have something like C# 3.0 in a Nutshell. Now that C# 4 is around, they may be interested in buying something which will explain the new features, and they might as well get a different book rather than just a fresh edition of something they already own.
  • Existing readers of C# in Depth. If you already own the first edition, what might persuade you to buy the second edition? How much “new” stuff is required? Would you rather get “more bang for the buck” from a significantly thicker book, or does the relative slimness of C# in Depth hold real appeal? (I know I’m a fan of slim books, but I don’t know quite how important it is. The second edition certainly will be thicker than the first as I’m unlikely to remove much, but I’m sure there’s scope for varying just how much we add.)

Here’s a draft table of contents – only down to headings within a chapter, but with some notes. Changes and notes are highlighted in blue.

Part 1: Preparing for the journey

Chapter 1: The changing face of C# development
  • Evolution in action: examples of code change
    I won’t be able to include the C# 4 features by just evolving the existing code… at least, not the big ones. Options:
    • Stick with the existing example for 1 to 2 and 2 to 3, but change to a different problem for 4
    • Try to work out a different example to show all of 1 to 2, 2 to 3 and 3 to 4
    • Stick with the existing example, but take it in a different direction for 4, rather than just rewriting the existing code
  • A brief history of C# (and related technologies)
    Update this with what’s been happening since 2008 and possibly slim down what’s already there
  • The .NET platform
    Add the new versions, including .NET 3.5 SP1, and introduce the DLR in the terminology part.
  • Fully functional code in snippet form
    Nothing new to add about Snippy, but there may be another tool to talk about as well.
  • Summary
Chapter 2: Core foundations: building on C# 1
  • Delegates
  • Type system characteristics
  • Value types and reference types
  • Beyond C# 1: new features on a solid base
    Introduce interface/delegate variance and the dynamic type.
  • Summary

Part 2: Solving the issues of C# 1

Chapter 3: Parameterized typing with generics

  • Why generics are necessary
  • Simple generics for everyday use
  • Beyond the basics
    Add a tip and example about using generic static methods in a non-generic type to take advantage of type inference when creating instances of generic types.
  • Advanced generics
  • Generic collection classes
    It would probably make sense to include HashSet<T> here, and possibly any new collection classes in .NET 4.0. Version warnings would be given, of course!
  • Limitations of generics in C# and other languages
    Refer to part 4 in terms of variance, and possibly trim down the coverage of alternative approaches.
    Add an example of Marc Gravell’s work for generic operators, or at least refer to it. Possibly mention the idea of static interfaces. Both give food for thought, but aren’t really part of C#.
  • Summary
Chapter 4: Saying nothing with nullable types
  • What do you do when you just don’t have a value?
  • System.Nullable<T> and System.Nullable
  • C# 2’s syntactic sugar for nullable types
  • Novel uses of nullable types
    Possibly include Marc Gravell’s teaser about calling all the Object methods on a null value.
  • Summary
    Make sure the summary is typeset as 4.5 instead of as part of 4.4. Oops!
Chapter 5: Fast-tracked delegates
  • Saying goodbye to awkward delegate syntax
  • Method group conversions
  • Covariance and contravariance
    Mention generic delegate variance in C# 4, referring to part 4.
  • Inline delegate actions with anonymous methods
  • Capturing variables in anonymous methods
  • Summary
Chapter 6: Implementing iterators the easy way
  • C# 1: the pain of handwritten iterators
  • C# 2: simple iterators with yield statements
  • Iteration in the real world
    Renamed section, but keep first example – possibly make it shorter.
    Add iteration over lines in a file.
    Add generation iterator, referring to LINQ.
  • Pseudo-synchronous code with the Concurrency and Coordination Runtime
    Fix sample code and explanation, with full article on the web
  • Summary
Chapter 7: Concluding C# 2: the final features
  • Partial types
  • Static classes
  • Separate getter/setter property access
  • Namespace aliases
  • Pragma directives
  • Fixed-size buffers in unsafe code

Part 3: New title TBD

Chapter 8: Cutting fluff with a smart compiler
  • Automatically implemented properties
  • Implicit typing of local variables
  • Simplified initialization
  • Implicitly typed arrays
  • Anonymous types
  • Summary
Chapter 9: Lambda expressions and expression trees
  • Lambda expressions as delegates
  • Simple examples using List<T> and events
  • Expression trees
    Expand for new expression support in .NET 4.0 (clearly labeled as such)
  • Changes to type inference and overload resolution
    Check if/how this has changed in C# 4.0 spec
  • Summary
Chapter 10: Extension methods
  • Life before extension methods
  • Extension method syntax
  • Extension methods in .NET 3.5
  • Usage ideas and guidelines
    Update advice here and give some more examples.
  • Summary
Chapter 11: Query expressions and LINQ to Objects

Look at changing how the diagrams are formatted in this chapter.

  • Introducing LINQ
  • Simple beginnings: selecting elements
  • Filtering and ordering a sequence
  • Let clauses and transparent identifiers
  • Joins
  • Groupings and continuations
  • New section: extending LINQ to Objects
    • Advice for writing your own extension methods
    • Plug for MoreLINQ and other libraries
  • Summary
Chapter 12: LINQ beyond collections
  • LINQ to SQL
    Despite the rumours of its doom, I think this is the best place to start.
  • Translations using IQueryable and IQueryProvider
  • LINQ to DataSet
  • LINQ to XML
  • LINQ to Entities
    Again, not much depth – still a whirlwind tour.
  • Third-party LINQ (renamed from LINQ beyond .NET 3.5)
    • Update state of play with third-party providers
    • Example of PushLINQ – still using simple delegates, but against different interface
    • Remove Parallel Extensions – moved to part 4
  • Summary
Chapter 13: Removed (new chapter at end of part 4)

Part 4: C# 4 (full title TBD)

Chapter 13: Dynamic binding in a static language
  • Introduction to the DLR, IronPython, IronRuby
  • Calling dynamically – the dynamic keyword
  • Reacting dynamically – implementing IDynamicObject
  • Applications for dynamic code
  • Summary
Chapter 14: More minor tweaks
  • Named and optional arguments
  • Variance of interfaces and generic delegates
  • Simplifying COM interoperability
  • Summary
Chapter 15: Major new features of .NET 4.0

Even though these don’t affect the language directly, they change the “shape” of code, the way we think about problems. I’m hoping there will be more…

  • Bullet-proofing your code with Code Contracts
  • Simplifying concurrency with Parallel Extensions
  • F# (?)
  • Summary
Chapter 16: Whither now?
  • Educated guesses about C# 5
  • The Renaissance developer
  • Until we meet again
Appendix: LINQ standard query operators

So, how does that sound? What else needs changing from the first edition? Are there any particular sections which would benefit from a neat example? Have I missed anything important that you’d want to see in part 4?

Right now I’ve got a lot of flexibility – so please, I’d much rather hear ideas now than later.

Benchmarking IO: buffering vs streaming

I mentioned in my recent book review that I was concerned about a recommendation to load all of the data from an input file before processing all of it. This seems to me to be a bad idea in an age where Windows prefetch will anticipate what data you need next, etc – allowing you to process efficiently in a streaming fashion.

However, without any benchmarks I’m just guessing. I’d like to set up a benchmark to test this – it’s an interesting problem which I suspect has lots of nuances. This isn’t about trying to prove the book wrong – it’s about investigating a problem which sounds relatively simple, but could well not be. I wouldn’t be at all surprised to see that in some cases the streaming solution is faster, and in other cases the buffered solution is faster.

The Task

The situation presented is like this:

  • We have a bunch of input files, either locally or on the network (I’m probably just going to test locally for now)
  • Each file is less than 100MB
  • We need to encrypt each line of text in each input file, writing it to a corresponding output file

The method suggested in the book is for each thread to:

  1. Load a file into a List<string>
  2. Encrypt every line (replacing it in the list)
  3. Save to a new file

My alternative option is:

  1. Open a TextReader and a TextWriter for the input/output
  2. Repeatedly read a line, encrypt, write the encrypted line until we’ve exhausted the input file
  3. Close both the reader and the writer
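A minimal sketch of that streaming loop, assuming a hypothetical Encrypt method for the per-line work:

using (TextReader reader = File.OpenText(inputFile))
using (TextWriter writer = File.CreateText(outputFile))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        writer.WriteLine(Encrypt(line)); // hypothetical per-line transformation
    }
}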

These are the two implementations I want to test. I strongly suspect that the optimal solution would involve async IO, but doing an async version of ReadLine is a real pain for various reasons. I’m going to keep it simple – using plain threading, no TPL etc.

I haven’t written any code yet. This is where you come in – not to write the code for me, but to make sure I test in a useful way.

Environmental variations

My plan of attack is to first write a small program to generate the input files. These will just be random text files, and the program will have a few command line parameters:

  • Directory to put files under (one per test variation, basically)
  • Number of files to create
  • Number of lines per file
  • Number of characters per line

I’ll probably test a high and a low number for each of the last three parameters, possibly omitting a few variations for practical reasons.

In an ideal world I’d test on several different computers, locally and networked, but that just isn’t practical. In particular I’d be interested to see how much difference an SSD (low seek time) makes to this test. I’ll be using my normal laptop, which is a dual core Core Duo with two normal laptop disks. I may well try using different drives for reading and writing to see how much difference that makes.

Benchmarking

The benchmark program will also have a few command line parameters:

  • Directory to read files from
  • Directory to write files to
  • Number of threads to use (in some cases I suspect that more threads than cores will be useful, to avoid cores idling while data is read for a blocking thread)
  • Strategy to use (buffered or streaming)
  • Encryption work level

The first three parameters here are pretty self-explanatory, but the encryption work level isn’t. Basically I want to be able to vary the difficulty of the task, which will vary whether it ends up being CPU-bound or IO-bound (I expect). So, for a particular line I will:

  • Convert to binary (using Encoding.ASCII – I’ll generate just ASCII files)
  • Encrypt the binary data
  • Encrypt the encrypted binary data
  • Encrypt the encrypted encrypted […] etc until we’ve hit the number given by the encryption work level
  • Base64 encode the result – this will be the output line

So with an encryption work level of 1 I’ll just encrypt once. With a work level of 2 I’ll encrypt twice, etc. This is purely for the sake of giving the computer something to do. I’ll use AES unless anyone has a better suggestion. (Another option would be to just use an XOR or something else incredibly simple.) The key/IV will be fixed for all tests, just in case that has a bearing on anything.
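In code, the per-line work would look roughly like this – a sketch which assumes an aes instance (with the fixed key/IV) is in scope:

byte[] data = Encoding.ASCII.GetBytes(line);
for (int i = 0; i < workLevel; i++)
{
    // New transform each pass; TransformFinalBlock finishes a complete operation
    using (ICryptoTransform transform = aes.CreateEncryptor())
    {
        data = transform.TransformFinalBlock(data, 0, data.Length);
    }
}
string outputLine = Convert.ToBase64String(data);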

The benchmarking program is going to be as simple as I can possibly make it:

  • Start a stopwatch
  • Read the names of all the files in the directory
  • Create a list of files for each thread to encrypt
  • Create and start the threads
  • Use Thread.Join on all the threads
  • Stop the stopwatch and report the time taken

No rendezvous required at all, which certainly simplifies things. By creating each work list before starting the threads, I don’t need to worry about memory model issues. It should all just be fine.
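As a sketch – Partition and ProcessFiles being hypothetical helpers – the whole harness is just:

Stopwatch stopwatch = Stopwatch.StartNew();
string[] files = Directory.GetFiles(inputDirectory);
List<string>[] workLists = Partition(files, threadCount);

Thread[] threads = new Thread[threadCount];
for (int i = 0; i < threads.Length; i++)
{
    List<string> work = workLists[i]; // fully built before the thread starts
    threads[i] = new Thread(() => ProcessFiles(work));
    threads[i].Start();
}
foreach (Thread thread in threads)
{
    thread.Join();
}
stopwatch.Stop();
Console.WriteLine("Total time: {0}", stopwatch.Elapsed);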

In the absence of a better way of emptying all the file read caches (at the Windows and disk levels) I plan to reboot my computer between test runs (which makes it pretty expensive in terms of time spent – hence omitting some variations). I wasn’t planning on shutting services etc down: I really hope that Vista won’t do anything silly like trying to index the disk while I’ve got a heavy load going. Obviously I won’t run any other applications at the same time.

If anyone has any suggested changes, I’d be very glad to hear them. Have I missed anything? Should I run a test where the file sizes vary? Is there a better way of flushing all caches than rebooting?

I don’t know exactly when I’m going to find time to do all of this, but I’ll get there eventually :)

Breaking Liskov

Very recently, Barbara Liskov won the Turing award, which makes it a highly appropriate time to ponder when it’s reasonable to ignore her most famous piece of work, the Liskov Substitution (or Substitutability) Principle. This is not idle speculation: I’ve had a feature request for MiscUtil. The request makes sense, simplifies the code, and is good all round – but it breaks substitutability and documented APIs.

The substitutability principle is in some ways just common sense. It says (in paraphrase) that if your code works for some base type T, it should also work with any subtype S of T. If it doesn’t, S is breaking substitutability. This principle is at the heart of inheritance and polymorphism – I should be able to use a Stream without knowing the details of what its underlying storage is, for example.

Liskov’s formulation is:

Let q(x) be a property provable about objects x of type T. Then q(y) should be true for objects y of type S where S is a subtype of T.

So, that’s the rule. Sounds like a good idea, right?

Breaking BinaryReader’s contract

My case in point is EndianBinaryReader (and EndianBinaryWriter, but the arguments will all be the same – it’s better to focus on a single type). This is simply an equivalent to System.IO.BinaryReader, but it lets you specify the endianness to use when converting values.

Currently, EndianBinaryReader is a completely separate class to BinaryReader. They have no inheritance relationship. However, as it happens, BinaryReader isn’t sealed, and all of the appropriate methods are virtual. So, can we make EndianBinaryReader derive from BinaryReader and use it as a drop-in replacement? Well… that’s where the trouble starts.

There’s no difficulty technically in doing it. The implementation is fairly straightforward – indeed, it means we can drop a bunch of methods from EndianBinaryReader and let BinaryReader handle it instead. (This is particularly handy for text, which is fiddly to get right.) I currently have the code in another branch, and it works fine.
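The shape of the derived class looks something like this – a sketch rather than the actual branch code, with an EndianBitConverter doing the byte shuffling:

public class EndianBinaryReader : BinaryReader
{
    private readonly EndianBitConverter converter;

    public EndianBinaryReader(Stream stream, EndianBitConverter converter)
        : base(stream)
    {
        this.converter = converter;
    }

    public override int ReadInt32()
    {
        // BinaryReader.ReadInt32 is virtual, so this is a genuine drop-in override
        byte[] bytes = ReadBytes(4);
        return converter.ToInt32(bytes, 0);
    }

    // ... the other ReadXyz methods follow the same pattern
}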

And I would have gotten away with it if it weren’t for that pesky inheritance…

The problem is whether or not it’s the right thing to do. To start with, it breaks Liskov’s substitutability principle, if the “property” we consider is “the result of calling ReadInt32 when the next four bytes of the underlying stream are 00, 00, 00, 01” for example. Not having read Liskov’s paper for myself (I really should, some time) I’m not sure whether this is the intended kind of use or not. More on that later.

The second problem is that it contradicts the documentation for BinaryReader. For example, the docs for ReadInt32 state: “BinaryReader reads this data type in little-endian format.” That’s a tricky bit of documentation to understand precisely – it’s correct for BinaryReader itself, but does that mean it should be true for all subclasses too?

When I’ve written in various places about the problems of inheritance, and why if you design a class to be unsealed that means doing more design work, this is the kind of thing I’ve been talking about. How much detail does it make sense to specify here? How much leeway is there for classes overriding ReadInt32? Could a different implementation read a “compressed” Int32 instead of always reading four bytes, for example? Should the client care, if they make sure they’ve obtained an appropriate BinaryReader for their data source in the first place? This is basically the same as asking how strictly we should apply Liskov’s substitutability principle. If two types are the same in every property, surely we can’t distinguish between them at all.

I wonder whether most design questions of inheritance basically boil down to defining which properties should obey Liskov’s substitutability principle and which needn’t, for the type you’re designing. Of course, it’s not just black and white – there will always be exceptions and awkward points. Programming is often about nuance, even if we might wish that not to be the case.

Blow it, let’s do it anyway…

Coming back to BinaryReader, I think (unless I can be persuaded otherwise) that the benefits from going against the documentation (and strict substitutability) outweigh the downsides. In particular, BinaryReaders don’t tend to be passed around in my experience – the code which creates it is usually the code which uses it too, or it’s at least closely related. The risk of breaking code by passing it a BinaryReader using an unexpected endianness is therefore quite low, even though it’s theoretically possible.

So, am I miles off track? This is for a class library, after all – should I be more punctilious about playing by the rules? Or is pragmatism the more important principle here?

Book Review: C# 2008 and 2005 Threaded Programming: Beginner’s Guide

Note: The author of this book has requested that I remove their name from this blog post. I have done so in accordance with their wishes, editing comments as well.

Update (19th March 2009)

Debate around this review is getting heated. I stand by all the points I make about the text, but I’d like to clarify a few things:

  • If there are any ad hominem comments in the review against the author, please ignore them. I’m going to try to weed out any that I find, but if you spot one, please let me know and then ignore it. I feel very strongly that a review should be about the text of a book, not about its author. The text is what will inform the reader, not the author’s other work. I’m aware that the author has written many other books, and is generally well-regarded (as far as I can tell, anyway). That neither helps nor hinders the text. The same goes for me and the review, of course. Whether you know me or not, whether you’ve read anything else I’ve written or not, the review should stand on its own merits. This is not a popularity contest – it’s a discussion about a technical book.
  • The impression I’ve given in the review is almost entirely negative. This is because that’s the impression I received as a reader interested in accuracy and best practices. That does not mean that the book is entirely inaccurate – far from it. There are plenty of aspects where I have no particular issues with the accuracy. (The code style is more uniformly disagreeable to me, but that’s a subjective matter.) However, there is enough inaccuracy (and bad practice, in my view – somewhat subjective, but less so than the code style) to make that the dominant impression left with me, alongside my surprise that there’s no proper discussion of locking. As an analogy, imagine you go to a choral concert. Suppose the sopranos, altos and tenors are all perfectly in tune, but the basses are out of key the whole time. In some senses the concert would be 75% accurate – but the 25% inaccuracy would be enough to ruin it. So it is with technical books (not just this one) – it only takes a relatively small degree of inaccuracy to make the difference between a good book and a bad one. The bottom line is: even I don’t think everything or even most of what’s written in the book is wrong; there are enough problems to make me dislike it though.
  • I’ve made a few minor edits to the review just now, to address a few comments made so far. If some of the comments appear to be odd, that may be why!

Resources

  • Publisher’s page (Packt) – this is the cheapest way to buy the book as far as I can see
  • Sample code (49MB download! Mostly because it contains bin/obj directories for all solutions…)
  • Amazon or Barnes and Noble links if you don’t want to buy it directly
  • John Mueller’s review – a much more positive review than this one, which may prove an interesting counterbalance for readers. (Thanks to Erik for pointing out John’s review in the comments.)

Disclaimer

This book doesn’t really compete with C# in Depth, but obviously the very fact that it’s another book about C# at all means I’m probably not entirely unbiased. Arguably it also “competes” with my own (somewhat out of date now) threading article, although that’s not a monetary venture for me. I should also point out that my copy was sent to me for free, specifically for review, by Packt Publishing.

Audience and content

The book claims that “Whether you are a beginner to working with threads or an old hand who is looking for a reference, this book should be on your desk.” In practice, I don’t think it’s really suitable as a reference. The kind of information you really want as a reference is hard to find amidst the bulk of the book, which is on-going examples. For the rest of this review I’ll regard the intended audience as just beginners.

The first chapter (out of 12; at 388 pages, one of the nice things about this book is that it’s relatively slim) is introductory material about threads and processes, and why concurrency is important in the first place. After this one code-free chapter, the rest of the book is all example-based. The pattern goes something like this:

  • Give rough idea of what we’re building
  • Create first version of the code
  • Explain what it does and why it may not be ideal
  • Improve it
  • Explain how the improvements work
  • Move to next example or add new major feature

That sounds all very well, but I’ll get to my issues with it in a minute. Although the examples are constantly evolving, they essentially break down into these applications:

  • “Code cracking” (brute-forcing a 4 character code)
  • “Encrypting” SMS messages (not real encryption – no key – but a general CPU-intensive transformation)
  • Image processing to find and highlight “old stars” in NASA images
  • “Encrypting” several files
  • More image processing – adjusting the brightness of a large image and thumbnailing it

These are all Windows Forms applications, and are frankly pretty similar, all basically dealing with simple, embarrassingly parallel tasks. That’s not to say the author doesn’t get a fair set of different techniques and lessons out of them:

  • Keeping the UI thread free (and seeing what happens when you don’t)
  • Tips for debugging multi-threaded apps in Visual Studio
  • Showing the performance for individual processes using Task Manager and Windows Explorer
  • Using BackgroundWorker to update the UI
  • Queuing tasks in the system thread pool
  • Creating new threads explicitly
  • Using Control.Invoke/BeginInvoke to update the UI (although this comes very late in the book – chapter 10 out of 12)
  • Keeping tasks independent
  • Noting that sharing data between threads is difficult – but coming to the wrong conclusions (more later)
  • Using the Timer component (just the WinForms timer; not System.Threading.Timer, System.Web.UI.Timer or System.Timers.Timer) – although later on he uses a BackgroundWorker for a task much more suitable for a Timer.
  • A bit of OO design, although in a pretty botched way – the idea of having a general-purpose “parallel algorithm” class and a “parallel algorithm piece” class is reasonable, but it isn’t handled nearly as well as it might be
  • Fairly disastrous advice (IMO) about both I/O and the GC
  • Exception “handling” (where “swallowing exceptions and just reporting them with Debug.Print” counts as “handling” apparently)
  • Parallel Extensions from .NET 4.0, with both PLINQ and TPL

Unfortunately, this misses out some of the most important concepts in parallelism on .NET. The author frequently mentions locking, but only ever in a “we’re avoiding doing it” way. I find it absolutely incredible that a book on multi-threading in C# doesn’t even mention the “lock” keyword. Okay, it’s nice to be able to split tasks up completely independently where possible, but in the real world you sometimes have to use shared mutable state (or at least, it’s often the simplest approach).
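For contrast, this is the sort of thing the book never shows – the simplest possible protection of shared mutable state, using the lock keyword:

private readonly object padlock = new object();
private int hitCount;

public void RecordHit()
{
    lock (padlock) // only one thread can hold the monitor at a time
    {
        hitCount++;
    }
}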

When I first got the book, I looked up several entries in the index to see how they’d be handled. I was shocked to find that none of these have an index entry:

  • lock
  • volatile
  • memory model
  • Monitor
  • Wait or Pulse
  • BeginInvoke or Invoke
  • double-checked locking
  • mutable or immutable

The concept of accessing state from multiple threads is glossed over for the entirety of the book. Basically whenever multiple threads want to make their results available, they put them in different elements of an array or list. There’s an assumption that if you read from that array/list in a different thread, it’s all okay. Likewise there’s an assumption that it’s appropriate to read integer variables written to in one thread from another thread without any locking, volatility or use of the Interlocked class. I’ll come back to this topic when I tackle accuracy later on.
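Even the lightest-weight fix never appears. For a shared counter – completedLines being a hypothetical field written by workers and read elsewhere – something like this would do:

// Safely increment from a worker thread
Interlocked.Increment(ref completedLines);

// Safely read from another thread (full fence, so no stale value)
int snapshot = Interlocked.CompareExchange(ref completedLines, 0, 0);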

Style

This is a very informal book: something I have no problem with. English clearly isn’t the author’s first language, and although I don’t blame him for some of the clumsy wording in the book (e.g. “We will not leave behind the necessary pragmatism in order to improve performance within a reasonable developing time”) I do wish the book’s editorial team had done a better job in that respect. It’s tricky with technical books: non-technical editors have good reason to be wary of going too far, as small changes in wording can make a large difference semantically, but it does make a big difference to a book’s readability when the language is clear and idiomatic. (As a side note, I feel incredibly fortunate to have English as my native tongue. I’m not fluent in any foreign languages, and I’m often amazed at how well others manage.)

There are other elements of the style of the book which I have much more of a problem with. The first is the way that the examples are handled. A very large proportion of the book is just lists of instructions: “add some using directives: <code>; add these variables: <code>; add this procedure: <code>; add another procedure <code>; add an event handler <code>” with just a sentence or two of explanation for each one as you go. There’s much more explanation after all the code has been added, but the way that the code is given makes it very hard to see what’s going on. We almost never get to see a whole class in one listing – it’s always broken up into using directives, variables, individual methods etc. This may not be too bad if you’re following along with the book at every single point, but it makes it very hard to just read. As a friend has commented, this content might work a lot better as a screencast, rather than as a book.

One detailed gripe: nearly every time a property is introduced, the author uses the phrase “we want to create a compact and reliable class” as a justification. There’s no explanation, and quite often the properties are mutable for no good reason (when a genuinely reliable class in a multi-threaded setting would be immutable). After a while it made me want to grind my teeth every time I saw it.

The feeling is very much that of a Head First book, but one which doesn’t work. For all my misgivings about Head First C# (which I believe is very much better now that a large number of errors have been removed) the general style was very well handled. It’s not my preferred style to start with (particularly focusing on large GUIs instead of short, complete console apps) but I rarely felt particularly lost in the listings – there was usually enough context to hold onto. Here, I feel there’s very little context at all. If you accidentally miss out a step, you’ll have a really hard time working out which one it is or what’s wrong.

On top of this, there’s the bizarre storyline “explaining” all the listings. Apparently you (the reader) originally started out cracking a code, then got hired by some other crackers, then the FBI, then NASA. We are told of FBI agents getting us cappuccinos, the NASA CIO wanting you to use the Parallel Extensions CTP so that they can get free licences for Visual Studio 2010 and all kinds of other oddities. We are constantly bombarded with plaudits about our threading capabilities – by the last chapter we’re regularly being called “experts” and “threading gurus” despite the fact that we wouldn’t have a clue what was going on if someone presented us with some code using a “lock” statement. This is all patronising in the extreme – and again, Head First C# (and I suspect the rest of the Head First series) handles the “keep it informal but drive the topic forward” aspect a lot more successfully.

Finally, on the topic of style, I’d like to rant a bit about the coding style. It’s awful. Really awful. I realise that coding standards are to some extent a personal thing, but I object to code like this:

  • Pseudo-Hungarian (the type which uses “o” as a prefix for almost any object; not the type Peter likes) and the nature of every variable (local, parameter or instance variable) makes for horrendous variable names such as “prloOutputCharLabels”. It’s not even consistent – variables added by the designer only get a type designation prefix (lbl, but, pic) but no nature prefix. Aargh.
  • Methods are frequently camel-cased instead of Pascal-cased, e.g. “showFishes” and “checkCodeChar”. It’s possible that this is only true for private methods – a very quick flick through doesn’t reveal any public ones like this – but if so it’s inconsistently applied as there are certainly Pascal-cased private methods too. Some public properties combine both annoyances so far, with names such as “poThread” and “piBegin”.
  • Most (but not all) of the time the author declares all of a method’s variables at the top, even if they’re not used for a long time. This includes declarations of variables for use in loops. This took me right back to the 80s, writing ANSI C again. I believe that the ability to declare variables at the point of first use gives a significant improvement in readability. It’s easier to see where a variable will be used if its scope is limited, for example.
  • Using directives aren’t applied nearly thoroughly enough, leaving lots of explicit use of System.Diagnostics, System.Drawing, System.ComponentModel etc. Given the line length limitations in printed books, this is a real killer in terms of providing compact, readable code.
  • Speaking of line length limitations, it would be really useful to actually acknowledge them – if a comment is going to span two printed lines, starting just the first one with “//” and leaving the second indented but not really a comment isn’t a good idea.

So, we’ve got code broken up into chunks which breaks the flow of the code, and I don’t even like the style of the code. Still, I could live with that if it’s good quality code…

Accuracy and best practices

I’ve already indicated one of the significant problems I have with the book in terms of content: its complete absence of discussion about shared data and locking. Yes, this is a beginner’s book, and I wasn’t expecting the level of detail on the memory model which is present in Joe Duffy’s book (which I promise I’ll review soon) – but I’d certainly prefer to err on the side of safety. The book regularly just accesses data on one thread having written it on another, with no locking, volatility or use of Interlocked. This isn’t the sole bad practice, however, and it’s not limited to stylistic choices either. In the course of the book, we are told all of the items below (and more). Italics indicate what the book claims; regular type indicates my response. These aren’t verbatim quotes, but paraphrase:

  • Forcing garbage collection before starting a multi-threaded operation is a good idea. This is given as a sort of response to a screenshot of Process Explorer showing ugly memory usage. In fact, I can’t reproduce the kind of nasty graph that’s shown in the book, even with the code downloaded from the web site, but if I did see that there are definitely better ways of addressing it than forcing garbage collection. Disposing of Bitmaps appropriately would be a good start… as it is, each bitmap is going to hang around for at least one garbage collection cycle longer than it should, because we’ve got to wait for its finalizer to be executed. Making sure you dispose of objects appropriately is always a good idea – explicitly forcing the garbage collector is almost always a bad one. (Not absolutely always, but usually.)
  • WaitHandle.WaitAll has to run on an MTA thread – so let’s just change the [STAThread] line above the Main method to [MTAThread], with no warning that it’s a really bad idea to do that for Windows Forms. (Side-note: when trying to check that there really isn’t a warning, I had to spend a long time finding the section. The index doesn’t contain entries for MTAThread, STAThread, apartment, WaitHandle or WaitAll. In general the index could do with a lot of work. I’m painfully aware that indexing is a horrible task, but it’s important.)
  • Application.DoEvents() is a way of letting the UI process events. This is true – it’s also another really bad idea unless you absolutely have to use it. Re-entrancy is hard to debug – and not mentioned at all in the book, as far as I can tell.
  • Data streaming is wasteful, because two threads might both want to do I/O at the same time – it’s a better idea for each thread to load all the data it needs to and then start processing it. This is stated in a context where streaming is ideal – each thread just needs to process every line in a file. (Each thread is asked to process a different file.) There’s no dependency between the lines of the file. It’s an absolute gift – the buffering and pre-fetch techniques of Windows would guess we needed the next block of data before we ask for it, so the disk would be seeking while we’re encrypting, on each thread. At least, I strongly suspect it would – and I would profile the thing instead of just claiming that we’ve managed to avoid an I/O bottleneck by loading files in their entirety up-front. No mention is made of the fact that as soon as a bunch of big files are queued for encryption, you’ll have a bunch of threads all trying to load everything before they bother starting to do anything. Avoiding I/O contention is a tricky topic, and it deserves better than a couple of misleading paragraphs with no attempt at explaining what the benefits of streaming the data would be.
  • The thread pool is used to queue threads with work to do. If there are already lots of threads busy, the new threads will wait until the old ones have finished. Note the use of “threads” here – not tasks to run on a pool of existing threads, but threads. This would make the thread pool pointless – what is never explained in the book is that creating threads is a relatively expensive business, and you don’t want to do it repeatedly for short-lived tasks when you could instead create a pool of threads and reuse them to run several tasks. Once this purpose is clear, the notion of queuing threads becomes obviously wrong.
  • We can pass some state into the delegate used for work item queuing (or a ParameterizedThreadStart) and use that to give us some context. We need to cast that state to the relevant type before we can use it, because it’s just typed as System.Object. So far so good – except most of the time, the author ends up passing into the work item the same reference which would be available as just “this” within the method itself. So we have code such as:

    loPiece.poThread = new Thread(new ParameterizedThreadStart(loPiece.ThreadMethod));
    loPiece.poThread.Start(loPiece);

    The ThreadMethod method then duly casts its parameter to its own type and uses it. All of this is pointless, as the method doesn’t need any parameters – it can just use “this” inside the method.
  • It’s very important to initialize lists with the right capacity. Again, this isn’t too bad as far as it goes – except that this micro-optimisation goes awry when he reads the TextBox.Lines property twice: once to work out the appropriate capacity and once to fill the list with initial data. Unfortunately the TextBox.Lines property has to take the existing text in the TextBox, split it (creating a bunch of substrings) and get the result into an array. This in turn means doing all the normal shenanigans associated with creating buffers which are bigger than you need, filling them, copying to a new buffer etc – exactly what we’re trying to avoid! This “optimization” will usually cost time instead of saving it. It could be easily fixed by just fetching the array in one statement, then using the same array for both the count and the list population. In fact, if you just pass the array into the List<T> constructor, it will perform the optimization for you – it can detect that it’s an ICollection<T> and use the Count property directly. Writing the simplest code actually ends up being optimal.
  • The above bullet point isn’t going to dominate the performance of that example though – there’s a potentially far worse effect due to the way the resulting “encrypted” string is broken up each time: using string concatenation in a loop. I guess we’d better hope there are no really long lines. If an author is going to give optimisation “tips” they need to be a lot more rigorous than this. Using string concatenation in a loop is probably the single best-known performance no-no in .NET. I was really shocked to see this in a book which is supposedly about making your code perform better. Now, it could be that string concatenation was used deliberately to slow things down – but in that case, why not highlight it? Drawing attention to intended optimizations gives the impression that the rest of the code is either optimized or has at least been written reasonably. If “bad” code is to be used for a specific purpose, that should be called out so that the reader won’t go onto use the same kind of code in their own production apps (which really shouldn’t be deliberately slowed down).
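To make the List<T> point concrete, the simplest code really is the optimal code here (linesTextBox being the hypothetical control):

// Fetch the lines once; the List<T> constructor spots that string[]
// implements ICollection<T> and sizes itself with a single copy.
string[] lines = linesTextBox.Lines;
List<string> list = new List<string>(lines);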

These aren’t the only issues I have with the code. Unicode is abused by “encrypting” text with no discussion of whether the strings he produces are valid or not (as opposed to the normal practice of only encrypting data after first converting it into binary; the encrypted binary might then be converted to text using base64 if you need to transmit the encrypted data as text). We could easily end up with strings containing surrogate high or low code points without the corresponding half in the appropriate place. When analyzing a bitmap he uses GetPixel and SetPixel for each pixel, rather than calling LockBits once and then accessing the image data in a much faster manner. (The code given does scale, but it’s not as fast as it could be. Using LockBits it would still scale, but the “per thread” work would be faster.) There are other, similar issues lurking in the text, but I’m sure you get the gist of the problem.
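For reference, here’s a hedged sketch of the LockBits approach, assuming a 32bpp image for simplicity (it needs System.Drawing.Imaging and System.Runtime.InteropServices):

Rectangle rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
BitmapData data = bitmap.LockBits(rect, ImageLockMode.ReadWrite,
                                  PixelFormat.Format32bppArgb);
try
{
    int byteCount = data.Stride * data.Height;
    byte[] pixels = new byte[byteCount];
    Marshal.Copy(data.Scan0, pixels, 0, byteCount);
    // ... read/modify pixels here: 4 bytes per pixel, in B, G, R, A order ...
    Marshal.Copy(pixels, 0, data.Scan0, byteCount);
}
finally
{
    bitmap.UnlockBits(data);
}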

Conclusion

Believe it or not, there are things about this book that I actually like. It’s relatively thin, which has very tangible advantages when you’re carrying it around a lot. The sections explaining how to use Process Explorer and Task Manager to best effect are useful, and the ideas of the examples are good – even though they basically cover the same ground several times. Unfortunately the bad points outweigh the good far too heavily. To summarise them:

  • What I consider some of the absolute core elements of .NET multithreading (locking and monitors in particular) aren’t covered at all
  • Only the simple situation of an embarrassingly parallel algorithm is covered. In the real world developers will have to face real challenges where tasks don’t always split themselves up nicely into totally independent chunks. A reader who finishes this book, assured that they are now a threading guru, will face a nasty shock.
  • Server-side threading isn’t given much coverage at all, despite this being arguably the most likely environment for developers to encounter multithreading
  • The “story” element of the prose style is childish and patronising
  • The coding style, while a personal choice, makes me wince – and is particularly verbose for a book, where space is important
  • Many bad practices are encouraged, and there are plenty of important misunderstandings to trip up readers
  • The index has failed me (even when I’ve known that the subject is in the book) more times than it’s helped me

It’s a real pity. I was hoping this would be a book I could recommend to people as a precursor to reading Joe Duffy’s excellent Concurrent Programming on Windows. Instead, my current best advice is to read Joe Albahari’s threading tutorial. (I previously had a link to my own threading tutorial as well, but apparently this made people think I was fishing for more readers of that.) I’m sure there are good introductory threading books out there, but I’m afraid this isn’t one of them.

What’s in a name?

T.S. Eliot had the right idea when he wrote “The naming of cats”:

The Naming of Cats is a difficult matter,
It isn’t just one of your holiday games

When you notice a cat in profound meditation,
The reason, I tell you, is always the same:
His mind is engaged in a rapt contemplation
Of the thought, of the thought, of the thought of his name:
His ineffable effable
Effanineffable
Deep and inscrutable singular Name.

Okay, so developers may not contemplate their own names much, but I know I’ve certainly spent a significant amount of time recently trying to work out the right name for various types and methods.  It always feels like it’s just out of reach; tauntingly, tantalisingly close.

Recently I’ve been thinking a bit about what the goals might be in coming up with a good name. In particular, I seem to have been plagued with the naming problem more than usual in the last few weeks.

Operations on immutable types

A while ago I asked a question on Stack Overflow about naming a method which “adds” an item to an immutable collection. Of course, when I say “adds” I mean “returns a new collection whose contents are the old collection plus the new item.” There’s a really wide range of answers (currently 38 of them) which mostly seem to fall into four categories:

  • Use Add because it’s idiomatic for .NET collections. Developers should know that the type is immutable and act accordingly.
  • Use Cons because that’s the term functional programming has used for this exact operation for decades.
  • Use a new method name (Plus being my favourite at the moment) which will be obvious to non-functional developers, but without being so familiar that it suggests mutability.
  • Use a constructor taking the old collection and the new item.

Part of the reasoning for Add being okay is that I originally posted the question purely about “an immutable collection” – e.g. a type which would have a name like ImmutableList<T>. I then revealed my true intention (which I should have done from the start) – to use this in MiniBench, where the “collection” would actually be a TestSuite. Everything in MiniBench is immutable (it’s partly an exploration in functional programming, as it seems to fit very nicely) but I don’t want to have to name every single type as Immutable[Whatever]. There’s the argument that a developer should know at least a little bit about any API they’re using, and the immutability aspect is one of the first things they should know. However, MiniBench is arguably an extreme case, because it’s designed for sharing test code with people who’ve never seen it before.

I’m pretty sure I’m going to go with Plus in the end:

  • It’s close enough to Add to be familiar
  • It’s different enough to Add to suggest that it’s not quite the same thing as adding to a normal collection
  • It sounds like it returns something – a statement which just calls Plus without using the result sounds like it’s wrong (and indeed it would be)
  • It’s meaningful to everyone
  • I have a precedent in the Joda Time API

Another option is to overload the + operator, but I’m not really sure I’m ready to do that just yet. It would certainly make for brief code, but is that really the most important thing?
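To make it concrete, here’s a minimal sketch of the shape I have in mind – illustrative names only, not MiniBench’s actual code:

using System.Collections.Generic;

public sealed class Test
{
    // Placeholder for a real benchmark test.
}

public sealed class TestSuite
{
    private readonly List<Test> tests;

    public TestSuite() : this(new List<Test>()) {}

    private TestSuite(List<Test> tests)
    {
        this.tests = tests;
    }

    // Plus rather than Add: the receiver is untouched, so a statement which
    // ignores the return value is visibly wrong.
    public TestSuite Plus(Test test)
    {
        List<Test> copy = new List<Test>(tests) { test };
        return new TestSuite(copy);
    }
}

Usage then reads naturally as suite = suite.Plus(test); – and a bare suite.Plus(test); statement looks as wrong as it is.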

Let’s look at a situation with some of the same issues…

LINQ operators

Work on MoreLINQ has progressed faster than expected, mostly because the project now has four members, and they’ve been expending quite a bit of energy on it. (I must do a proper consistency review at some point – in particular it would be nice to have the docs refer to the same concepts in the same way each time. I digress…)

Most of the discussion in the project hasn’t been about functionality – it’s been about naming. In fact, LINQ is particularly odd in this respect. If I had to guess at how the time has been spent (at least for the operators I’ve implemented) I’d go for:

  • 15% designing the behaviour
  • 20% writing the tests
  • 10% implementation
  • 5% writing the documentation (just XML docs)
  • 50% figuring out the best name

It really is that brutal – and for a lot of the operators we still haven’t got the “right” name yet, in my view. There’s generally too much we want to convey in a word or two. As an example, we’ve got an operator similar to the oft-implemented ForEach one, but which yields the input sequence back out again. Basically it takes an action, and for each element it calls the action and then yields the element. The use case is something like logging. We’ve gone through several names, such as Pipe, Tee, Via… and just this morning I asked a colleague, who suggested Apply off the top of his head. It’s better than anything we’d previously thought of, but does it convey both the “apply an action” and “still yield the original sequence” aspects?
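Whatever we end up calling it, the implementation is tiny – something like this sketch (argument validation omitted, and the class name is invented):

using System;
using System.Collections.Generic;

public static class SequenceOperators
{
    // For each element: invoke the action (e.g. logging), then yield the
    // element unchanged, so the rest of the query sees the original sequence.
    public static IEnumerable<T> Apply<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (T element in source)
        {
            action(element);
            yield return element;
        }
    }
}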

The old advice of “each method should only do one thing” is all very well, and it clearly helps to make naming simpler, but with situations like this one there are just naturally more concepts which you want to get across in the name.

Let’s stay on the LINQ topic, but stray a bit further from the well-trodden path…

The heart of Push LINQ: IDataProducer

I’ve probably bored most of you with Push LINQ by now, and I’m not actively developing it at the moment, but there’s still one aspect which I’m deeply uncomfortable with: the core interface. IDataProducer represents a stream of data which can be observed. Basically clients subscribe to events, and their event handlers will be called when data is “produced” and when the stream ends.

I know IDataProducer is an awful name – but so far I haven’t found anything better. IObservable? Ick – overused, and not descriptive. IPushEnumerable? Sounds like the client can iterate over the data, which they can’t. The actual event names (DataProduced/EndOfData) are okay, but there must be something better than IDataProducer. (Various options have been suggested in the past – none of them have been so obviously “right” as to stick in my head…)
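For anyone who hasn’t seen Push LINQ, the interface is roughly this shape – a sketch based on the description above rather than the exact declaration:

using System;

public interface IDataProducer<T>
{
    // Raised once for each item of data “produced”.
    event Action<T> DataProduced;

    // Raised when there will be no more data.
    event Action EndOfData;
}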

This situation is slightly different to the previous ones, however, simply because it’s such a pivotal type. You would think that the more important the type, the more important the name would be – but in some ways the reverse is true. You see, Push LINQ isn’t a terribly “obvious” framework. I say that without shame – it’s great at what it does, but it takes a few mental leaps before you really grok it. You’re really going to have to read some documentation or examples before you write your own queries.

Given that constraint, it doesn’t matter too much what the interface is called – it’s going to be explained to you before you need it. It doesn’t need to be discoverable – whereas when you’re picking method names to pop up in Intellisense, you really want the developer to be able to guess its purpose even before they hover over it and check the documentation.

I haven’t given up on IDataProducer (and I hope to be moving Push LINQ into MoreLINQ, by the way – working out a better name is one of the blockers) but it doesn’t feel like quite as much of a problem.

Read-only or not read-only?

This final example came up at work, just yesterday – after I’d started writing this post. I wanted to refactor some code to emphasize which methods only use the read-only side of an interface. This was purely for the sake of readability – I wanted to make it easier to reason about which areas of the code modified an object and which didn’t. It’s a custom collection – the details don’t matter, but for the sake of discussion let’s call it House and pretend we’re modelling the various things which might be in a house. (This is Java, hence House rather than IHouse.)

I’m explicitly not doing this for safety – I don’t mind the fact that the reference could be cast to a mutable interface. The point is just to make it self-documenting that if a method only has a parameter in the non-mutating form, it’s not going to change the contents of the house.

So, we have two interfaces, like this:

public interface NameMePlease
{
    Color getDoorColor();
    int getWindowCount();

    // This already returned a read-only collection
    Set<Furniture> getFurniture();
}

public interface House extends NameMePlease
{
    void setDoorColor(Color doorColor);
    void setWindowCount(int windows);
    void addFurniture(Furniture item);
}

Obviously the challenge is to find a name for NameMePlease. One option is to use something like ImmutableHouse or ReadOnlyHouse – but the inheritance hierarchy makes liars of both of those names. How can it be a ReadOnlyHouse if there are methods in an implementation which change it? The interface should say what you can do with the type, rather than specifying what you can’t do – unless part of the contract of the interface is that the implementation will genuinely prohibit changes.

Thinking of this “positive” aspect led me to ReadableHouse, which is what I’ve gone with for the moment. It states what you can do with it – read information. Again, this is a concept which Joda Time uses.

Another option is to make it just House, and change the mutable interface to MutableHouse or something similar. In this particular situation the refactoring involved would have been enormous. Simple to automate, but causing a huge check-in for relatively little benefit. Almost all uses are actually mutating ones. The consensus within the Google Java mailing list seems to be that this would have been the preferred option, all things being equal. One interesting data point was that although Joda Time uses ReadableInstant etc, the current proposals for the new date/time API which will be included in Java 7, designed by the author of Joda Time, don’t use this convention. Presumably the author found it didn’t work quite as well as he’d hoped, although I don’t know of any specific problems.

Conclusion

You’ll probably be unsurprised to hear that I don’t have a recipe for coming up with good names. However, in thinking about naming I’ve at least worked out a few points to think about:

  • Context is important: how discoverable does this need to be? Is accuracy more important than brevity? Do you have any example uses (e.g. through tests) which can help to see whether the code feels right or not?
  • Think of your audience. How familiar will they be with the rest of the code you’re writing? Are they likely to have a background in other areas of computer science where you could steal terminology? Can you make your name consistent with other common frameworks they’re likely to use? The reverse is true too: are you reusing a familiar name for a different concept, which could confuse readers?
  • Work out the information the name is trying to convey. For types, this includes working out how it participates in inheritance. Is it trying to advertise capabilities or restrictions?
  • Is it possible to make correct code look correct, and buggy code look wrong? This is rarely feasible, but it’s one of the main attractions of “Plus” in the benchmark case. (I believe this is one of the main selling points of true Hungarian Notation for variable naming, by the way. I’m not generally a fan, but I like this aspect.)

I may expand this list over time…

I think it’s fitting to close with a quote from Phil Karlton:

There are only two hard things in Computer Science: cache invalidation and naming things.

Almost all of us have to handle naming things. Let’s hope most of us don’t have to mess with cache invalidation as well.

Benchmarking: designing an API with unusual goals

In a couple of recent posts I’ve written about a benchmarking framework and the results it produced when comparing for and foreach loops. I’m pleased with what I’ve done so far, but I don’t think I’ve gone far enough yet. In particular, while it’s good at testing multiple algorithms against a single input, it’s not good at trying several different inputs to demonstrate complexity versus input size. I wanted to rethink the design at three levels – what the framework would be capable of, how developers would use it, and then the fine-grained level of what the API would look like in terms of types, methods etc. These may all sound quite similar on the face of it, but this project is somewhat different to a lot of other coding I’ve done, mostly because I want to lower the barrier to entry as far as humanly possible.

Before any of this is meaningful, however, I really needed an idea of the fundamental goal. Why was I writing yet another benchmarking framework anyway? While I normally cringe at mission statements because they’re so badly formulated and used, I figured this time it would be helpful.

Minibench makes it easy for developers to write and share tests to investigate and measure code performance.

The words in bold (or for the semantically inclined, the strong words) are the real meat of it. It’s quite scary that even within a single sentence there are seven key points to address. Some are quite simple, others cause grief. Now let’s look at each of the areas of design in turn.

Each element of the design should either clearly contribute to the mission statement or help in a non-functional way (e.g. make the project feasible in a reasonable timeframe, avoid legal issues etc). I’m aware that with the length of this post, it sounds like I’m engaging in "big upfront design" but I’d like to think that it’s at least informed by my recent attempt, and that the design criteria here are statements of intent rather than implementation commitments. (Aargh, buzzword bingo… please persevere!)

What can it do?

As we’ve already said, it’s got to be able to measure code performance. That’s a pretty vague definition, however, so I’m going to restrict it a bit – the design is as much about saying what isn’t included as what is.

  • Each test will take the form of a single piece of code which is executed many times by the framework. It will have an input and an expected output. (Operations with no natural output can return a constant; I’m not going to make any special allowance for them.)
  • The framework should take the tedium out of testing. In particular I don’t want to have to run it several times to get a reasonable number of iterations. I suspect it won’t be feasible to get the framework to guess appropriate inputs, but that would be lovely if possible.
  • Only wall time is measured. There are loads of different metrics which could be applied: CPU execution time, memory usage, IO usage, lock contention – all kinds of things. Wall time (i.e. actual time elapsed, as measured by a clock on the wall) is by far the simplest to understand and capture, and it’s the one most frequently cited in newsgroup and forum questions in my experience.
  • The benchmark is uninstrumented. I’m not going to start rewriting your code dynamically. Frankly this is for reasons of laziness. A really professional benchmarking system might take your IL and wrap it in a timing loop within a single method, somehow enforcing that the result of each iteration is used. I don’t believe that’s worth my time and energy, as well as quite possibly being beyond my capabilities.
  • As a result of the previous bullet, the piece of code to be run lots of times needs to be non-trivial. The reality is that it’ll end up being called as a delegate. This is pretty quick, but if you’re just testing "is adding two doubles faster or slower than adding two floats" then you’ll need to put a bit more work in – for example, having a loop in your own code as well, as sketched just after this list.
  • As well as the use case of "which of these algorithms performs the best with this input?" I want to support "how does the performance vary as a function of the input?" This should support multiple algorithms at the same time as multiple inputs.
  • The output should be flexible but easy to describe in code. For single-input tests simple text output is fine (although the exact figures to produce can be interesting); for multiple inputs against multiple tests a graph would often be ideal. If I don’t have the energy to write a graphing output I should at least support writing to CSV or TSV so that a spreadsheet or graphing tool can do the heavy lifting.
  • The output should be useful – it should make it easy to compare the performance of different algorithms and/or inputs. It’s clear from the previous post here that just including the scaled score doesn’t give an obvious meaning. Some careful wording in the output, as well as labeled columns, may be required. This is emphatically not a dig at anyone confused by the last post – any confusion was my own fault.
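To illustrate the earlier point about non-trivial test bodies, this is the kind of shape I mean – invented names, not the framework’s API:

class TrivialOperationTest
{
    // The delegate-call overhead is amortised by looping inside the test body
    // itself; returning the sum stops the work being optimised away entirely.
    static double AddDoubles(int iterations)
    {
        double sum = 0;
        for (int i = 0; i < iterations; i++)
        {
            sum += 0.5; // the trivial operation actually under test
        }
        return sum;
    }
}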

Okay, that doesn’t sound too unreasonable. The next area is much harder, in my view.

How does a developer use it?

Possibly the most important word in the mission statement is share. The reason I started this project at all is that I was fed up with spending ages writing timing loops for benchmarks which I’d then post on newsgroups or Stack Overflow. That means there are two (overlapping) categories of user:

  • A developer writing a test. This needs to be easy, but that’s an aspect of design that I’m reasonably familiar with. I’m not saying I’m good at it, but at least I have some prior experience.
  • A developer reading a newsgroup/forum post, and wanting to run the benchmark for themselves. This distribution aspect is the hard bit – or at least the bit requiring imagination. I want the barrier to running the code to be really, really low. I suspect that there’ll be a "fear of the unknown" to start with which is hard to conquer, but if the framework becomes widely used I want the reader’s reaction to be: "Ah, there’s a MiniBench for this. I’m confident that I can download and run this code with almost no effort."

This second bullet is the one that my friend Douglas and I have been discussing over the weekend, in some ways playing a game of one-upmanship: "I can think of an idea which is even easier than yours." It’s a really fun game to play. Things we’ve thought about so far:

  • A web page which lets you upload a full program (without the framework) and spits out a URL which can be posted onto Stack Overflow etc. The user would then choose from the following formats:
    • Single .cs file containing the whole program – just compile and run. (This would also be shown on the download page.)
    • Test code only – for those who already have the framework
    • Batch file – just run it to extract/build/run the C# code.
    • NAnt project file containing the C# code embedded in it – just run NAnt
    • MSBuild project file – ditto but with msbuild.
    • Zipped project – open the project to load the test in one file and the framework code in other (possibly separate) .cs files
    • Zipped solution – open to load two projects: the test code in one and the framework in the other
  • A web page which lets you upload your results and browse the results of others

Nothing’s finalised here, but I like the general idea. I’ve managed (fairly easily) to write a "self-building" batch file, but I haven’t tried with NAnt/MSBuild yet. I can’t imagine it’s that hard – but then I’m not sure how much value there is either. What I do want to try to aim for is users running the tests properly, first time, without much effort. Again, looking back at the last post, I want to make it obvious to users if they’re running under a debugger, which is almost always the wrong thing to be doing. (I’m pretty sure there’s an API for this somewhere, and if there’s not I’m sure I can work out an evil way of detecting it anyway.)
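As it happens, the BCL does expose exactly this: System.Diagnostics.Debugger.IsAttached. The check itself is a one-liner – the warning text here is my own:

using System;
using System.Diagnostics;

class DebuggerCheck
{
    static void WarnIfDebugging()
    {
        // Debugger.IsAttached reports whether a managed debugger is attached.
        if (Debugger.IsAttached)
        {
            Console.WriteLine("Warning: running under a debugger. " +
                              "Results are likely to be misleading.");
        }
    }
}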

The main thing is the ease of downloading and running the benchmark. I can’t see how it could be much easier than "follow link; choose format and download; run batch file" – unless the link itself was to the batch file, of course. (That would make it harder to use for people who wanted to get the source in a more normal format, though.)

Going back to the point of view of the developer writing the test, I need to make sure it’s easy enough for me to use from home, work and on the train. That may mean a web page where I can just type in code, the input and expected output, and let it fill in the rest of the code for me. It may mean compiling a source file against a library from the command line. It may mean compiling a source file against the source code of the framework from the command line, with the framework code all in one file. It may mean building in Visual Studio. I’d like to make all of these cases as simple as possible – which is likely to make it simple for other developers as well. I’m not planning on optimising the experience when it comes to writing a benchmark on my mobile though – that might be a step too far!

What should the API look like?

When we get down to the nitty-gritty of types and methods, I think what I’ve got is a good starting point. There are still a few things to think about though:

  • We nearly have the functionality required for running a suite with different inputs already – the only problem is that we’re specifying the input (and expected output) in the constructor rather than as parameters to the RunTests method. I could change that… but then we lose the benefit of type inference when creating the suite. I haven’t resolved this to my satisfaction yet :(
  • The idea of having the suite automatically set up using attributed methods appeals, although we’d still need a Main method to create the suite and format the output. The suite creation can be simplified, but the chances of magically picking the most appropriate output are fairly slim. I suppose it could go for the "scale to best by number of iterations and show all columns" option by default… that still leaves the input and expected output, of course. I’m sure I’ll have something like this as an option, but I don’t know how far it will go.
  • The "configuration" side of it is expressed as a couple of constants at the moment. These control the minimum amount of time to run tests for before we believe we’ll be able to guess how many iterations we’ll need to get close to the target time, and the target time itself. These are currently set at 2 seconds and 30 seconds respectively – but when running tests just to check that you’ve got the right output format etc, that’s far too long. I suspect I should make a test suite have a configuration, and default to those constants but allow them to be specified on the command line as well, or explicitly in code.
  • Why do we need to set the expected output? In many cases you can be pretty confident that at least one of the test cases will be correct – so it’s probably simpler just to run each test once and check that the results are the same for all of them, and take that as the expected output. If you don’t have to specify the expected output, it becomes easier to specify a sequence of inputs to test.
  • Currently BenchmarkResult is nongeneric. This makes things simpler internally – but should a result know the input that it was derived from? Or should the ResultSuite (which is also nongeneric) know the input that has been applied to all its functions? The information will certainly need to be somewhere so that it can be output appropriately in the multiple input case.

My main points of design focus around three areas:

  • Is it easy to pick up? The more flexible it is, with lots of options, the more daunting it may seem.
  • Is it flexible enough to be useful in a variety of situations? I don’t know what users will want to benchmark – and if I don’t build the right tool, it will be worthless to them.
  • Is the resulting test code easy and brief enough to include in a forum post, with a link to the full program? Will readers understand it?

As you can see, these are aimed at three slightly different people: the first time test writer, the veteran test writer, and the first time test reader. Getting the balance between the three is tricky.

What’s next?

I haven’t started rewriting the framework yet, but will probably do so soon. This time I hope to do it in a rather more test-driven way, although of course the timing-specific elements will be tricky unless I start using a programmatic clock etc. I’d really like comments around this whole process:

  • Is this worth doing?
  • Am I asking the right questions?
  • Are my answers so far headed in the right direction?
  • What else haven’t I thought of?