All posts by jonskeet

Reasons for voting on questions and answers

I’ve recently been involved in a few discussions around voting on Stack Overflow, and I think my own "policy" around it may be different to that of others. I thought it would be worth sharing why I personally vote items up or down, and hear your thoughts too. This blog may not be the ideal venue for such a post, but until such time as we have a real "meta" site for Stack Overflow (such as Stack Overflow Overflow) I can’t think of anywhere better to write about it. Readers who are only interested in coding should move on; I promise not to include anything about code in the rest of this post.

I’m going to assume that anyone who’s read this far is at least somewhat familiar with the logistics of Stack Overflow – in particular, how one votes and the effects on reputation. I’ll use the word "post" here to mean either a question or an answer.

I’d like to stress that this is in no way meant to be seen as an "official" voting guide – just how I happen to think.

Why vote?

There are two "audiences" for a vote, in my view: the author of the post, and the community reading the post and looking at the vote tally. The author can tell what votes they’ve received using the reputation tab of their "recent activity" page, whereas readers can only see the overall tally. Obviously the author also gains or loses reputation. This means the effects on the author and on the audience are slightly different.

For the author, the immediate reward or punishment may sound like the most important aspect, but I’d argue that for many users (particularly those with high reputation) the reputation from a single vote isn’t as important as the effect on the vote tally and what it communicates about your post. There’s usually a positive feedback effect on Stack Overflow: if one answer has a couple of votes and another has none, then the higher voted one is likely to get read more and thus garner more votes. The opposite can happen: an answer with a negative score will sometimes receive "sympathy" votes from users who think, "This answer isn’t brilliant, but it’s not bad enough to deserve downvotes."

For me, the important point about a downvote is that it may indicate I’ve got something wrong in an answer. I may have missed the point of the question in the first place, or simply provided a technically incorrect or unhelpful answer. I want to know about that, so I can fix my answer or delete it if I can’t actually provide any more useful information than is contained in other answers.

For the reader, the information communicated by the score can be as simple as "this question is interesting/this answer is helpful" vs "this is a poor question/this answer is harmful". I would hope that non-regular visitors will quickly get the idea that the highest voted answers are likely to be the best ones, and that answers with a negative score really shouldn’t be trusted. It doesn’t always work that way, but it’s a reasonable rule of thumb.

So, how do I vote?

How I react to questions

I generally upvote a question if it’s been well written and I think the problem is sufficiently common that it’s going to help someone else searching for it. I’ll also upvote it if it’s particularly interesting, even if it’s not a very general problem. I suspect I should upvote questions more often – and I also suspect that’s true of many users.

I very rarely downvote a question, however. If the question is inappropriate, I’ll usually vote to close it. If it’s badly written but intelligible, I’ll edit it. If it isn’t precise enough (just lacks information) I’ll ask for more information in a comment. I don’t see much use in downvoting. Now there are various users who don’t support the idea of closing a question at all, of course – and if closing weren’t an option then I would downvote instead. However, I personally support question closing (when appropriate, of course – I’m not saying every closed question deserved it).

How I react to answers

I will generally upvote an answer if I feel it’s correct and helpful. If there are multiple posts which effectively answer a question, I will usually upvote the best one, but others which provide other bits of relevant information may get a vote too. Again, I probably don’t upvote as often as I should.

I downvote if I see an answer as actively unhelpful: this is usually if it’s technically inaccurate, or suggests something which I think is a really bad idea (such as string concatenation in a loop without a small, known limit). I don’t downvote an answer just for being "not as helpful as it could be" or "not as helpful as another answer". I believe that behaviour discourages people from contributing in the first place, and an extra answer has relatively little cost associated with it. If it contains no information which isn’t present in another answer then I’d prefer it to be deleted, but I’d leave that suggestion as a comment rather than a downvote.

Speaking of comments, I practically always leave a comment when I downvote. A downvote means I believe something is really wrong with the answer, and it should be fixed: leaving it as it is makes the world a worse place than if it didn’t exist. A downvote without a comment is fairly pointless – it doesn’t help the poster to fix the answer, because they don’t know what they did wrong in the first place. I find it intensely frustrating when someone downvotes one of my answers without giving any reason: apart from anything else, that downvote could be made on the basis of a mistaken belief, but without that belief being expressed I have no way of correcting it. I always take another look at an answer which has been downvoted, but I’m much more likely to edit it if I’m given a specific reason.

Note that the reputation loss due to a downvote is almost insignificant – particularly if it’s early in the day (i.e. before very active users hit the 200 cap) – the idea that I’ve written something unhelpful is far more disconcerting to me than the loss of a tiny amount of rep.

I can see why users aren’t prompted for a comment on a downvote – it would be very easy to just type garbage, and that would be worse than no comment at all. It would also require the comment to be anonymised in order to keep the vote itself anonymous. Even so, I’d ask courteous readers to add a comment when you downvote one of my answers: I promise not to "retaliate" with a spate of downvotes, but I’d really like to be able to fix the answer!

In terms of editing, I will often edit an answer for formatting reasons or to correct a small typo, but I edit answers (from other people) less often than I edit questions.

How about you?

Enough about me – how do you vote? If you downvote without comments, what effect are you trying to achieve? When would you downvote a question rather than editing it, voting to close or leaving a comment? How do you react to downvotes to your own posts?

A different approach to inappropriate defaults

I’ve had a couple of bug reports about my Protocol Buffers port – both nicely detailed, and one including a patch to fix it. (It’s only due to my lack of timeliness in actually submitting the change that the second bug report occurred. Oops.)

The bug was in text formatting (although it also affected parsing). I was using the default ToString behaviour for numbers, which meant that floats and doubles were being formatted as "50,15" in Germany instead of "50.15". The unit tests caught this, but only if you ran them on a machine with an appropriate default culture.
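
For the record, here’s a minimal sketch of the kind of fix involved – this isn’t the actual protobuf-csharp-port code, just the general idea of formatting and parsing with the invariant culture instead of the thread’s current culture:

    using System.Globalization;

    double value = 50.15;

    // Under a de-DE default culture this produces "50,15"...
    string cultureSensitive = value.ToString();
    // ...whereas this produces "50.15" regardless of the machine's culture.
    string invariant = value.ToString(CultureInfo.InvariantCulture);

    // Parsing needs the same treatment, of course.
    double parsed = double.Parse(invariant, CultureInfo.InvariantCulture);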

Aaargh. I’ve been struggling with a similar problem in a library I can’t change, which uses the system default time zone for various calculations in Java. When you’re running server code, the default time zone is almost never the one you want to use, and it certainly isn’t in my case.

A similar problem is Java’s decision to use the system default encoding in all kinds of bizarre places – FileReader doesn’t even let you specify the encoding, which makes it almost entirely useless in my view.

So I’ve been wondering how we could fix this and problems like it. One option is to remove the defaults completely, so that you always had to pass in a CultureInfo/Locale, TimeZoneInfo/TimeZone or Encoding/Charset whenever you called a method which might be culturally sensitive.

Making life easier (in .NET)

It strikes me that .NET has a useful abstraction here: the assembly as the unit of deployment. (Java’s closest equivalent is probably a jar file, which probably gets messier.)

Within one assembly, I suspect in many cases you always want to make the same decision. For example, in protocol buffers I would like to use the invariant culture all the time. It would be nice if I could say that, and then get the right behaviour by default. Here are the options I’d like to be able to apply (for each of culture, time zone and character encoding – there may be others):

  • Use a culture-neutral default (the invariant culture, UTF-8, UTC)
  • Use a specific set of values (e.g. en-GB, Windows-1252, "Europe/London")
  • Use the system default values
  • Use whatever the calling assembly is using

Of course you should still have the option of specifying overrides on a per call basis, but I think this might be a way forward.
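
Purely to illustrate the shape of the idea, here’s what such a declaration might look like. To be absolutely clear, none of this exists – the attribute, the enum and every name here are made up:

    using System;

    // Hypothetical types - nothing like this exists in the framework.
    public enum CulturalDefault
    {
        Neutral,   // invariant culture / UTF-8 / UTC
        System,    // whatever the machine defaults happen to be
        Caller     // whatever the calling assembly has declared
    }

    [AttributeUsage(AttributeTargets.Assembly)]
    public sealed class DefaultCultureAttribute : Attribute
    {
        public DefaultCultureAttribute(CulturalDefault kind) { }
        public DefaultCultureAttribute(string cultureName) { }   // e.g. "en-GB"
    }

    // The protocol buffers assembly could then simply declare:
    // [assembly: DefaultCulture(CulturalDefault.Neutral)]
    // ...with DefaultTimeZone and DefaultTextEncoding attributes following the same pattern.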

Thoughts? I realise it’s almost certainly too late for this to actually be implemented now, but would it have been a good idea? Or is it just an alternative source of confusion?

Language proliferation

I’ve always been aware that .NET supports multiple languages (obviously) and that Microsoft has been experimenting with this to some extent. It’s only recently struck me just to what extent this is the case though.

Here’s a list – almost certainly incomplete – of .NET languages from Microsoft alone.

Some of these are research languages whose main importance lies in the ideas they’ve contributed to more mainstream languages later on, rather than in anything else – but there’s still a lot of effort represented in the list.

In addition, there are third party languages targeting .NET, such as Boo, IronScheme and Scala. (Wikipedia lists loads of them.)

Now, think back to the time before .NET. Was Microsoft actively experimenting with languages back then? Plenty of people were trying things against the JVM, but Sun was pretty much absent from that party. .NET seems to be a "missing ingredient" that has allowed smart folk at Microsoft to let their imaginations loose in ways which they couldn’t previously. (Of course, not everyone in the language business at MS started there: Jim Hugunin was hired by Microsoft precisely because of his work on IronPython.)

I wonder how long this will continue.

Tower of Babel, or land of polyglots?

What does this mean for the average developer? Currently, if you’re writing a non-web application in .NET, you really only need to know a single language – and any of them will do. (Plus potentially SQL of course…) Compare this with web developers who have to be intimately familiar with HTML, CSS and JavaScript – and the differences between various implementations.

How long will it be before backend developers are expected to know a dynamic language, a static OO language and a functional language? Is the benefit of mixing several languages in a project worth the impedance mismatch and the increased skillset requirements? I’m not going to make any predictions on that front – I can certainly see the benefits of each of these approaches in certain situations. They’ve been designed to play well together, but there are bound to be limitations and oddities: times when you need to change how you write your F# so that it’s easily callable from C#, for example.

Whether or not you learn multiple languages to a professional level is one thing, but becoming familiar with them is a different matter. In the course of co-authoring Functional Programming for the Real World (where "co-author" is a bit of a stretch title – I’ve played more of an editorial role really, with the added bonus of picking on Tomas whenever I felt he was perhaps a little harsh towards C#) I’ve learned to appreciate many of F#’s qualities, but I don’t really know the language. If someone asked me to write a complete application in it (rather than just a toy experiment) I’d be reaching for books every other minute. I hope I’ll learn more over the course of time, but I doubt that I’ll ever be sufficiently experienced in it to put it on my CV. The same goes for IronPython, although I’m considerably more likely to need Python at work than I am F#. (Python is one of the three "approved" languages at Google, along with Java and C++.) None of this means that time spent in these languages is wasted: I’ll be able to apply a lot of what I’ve learned about F# to my C# coding, even if it will make me pine for things like pattern matching and asynchronous workflows periodically.

I think it’s pretty much a given that these days we all need to bring a wide range of technologies to bear in most jobs. While it used to be just about feasible in the .NET 1.1 days to have a pretty good grasp of all the major aspects (ASP.NET for sites and web services, ADO.NET, WinForms, Windows services, class libraries, interop) it’s just impossible these days. We learn something new when we need to – but usually against the background of a familiar language. How well would we cope if we had to learn whole new languages (to the level of being able to use them for production code) as often as we have to learn new libraries?

This worries me a little. I’m pleased to see that C# 4 is a much smaller change than the previous versions were. Admittedly I’d rather have had immutability support than dynamic, but that’s just me… and that’s the problem, too. While I worry about our ability to actually learn everything that’s becoming available, it’s all good stuff. Can there be "too much of a good thing"?

What I really don’t want to see is developers having to know multiple languages, and everyone knowing them poorly. I’m a big believer in having a thorough understanding of your language, so that even if everything else is new, you can rely on your understanding of that aspect of your code. It would be a shame if the pressure of knowing many languages turned many of us into cargo cult programmers. The utopia would be for us all to turn into language renaissance developers. I suspect the reality will be somewhere between the two.

Still, as long as I get to keep helping authors write about languages I know almost nothing about, I’m sure I’ll be happy…

RFC: C# in Depth 2nd edition, proposed changes and additions

As I’ve mentioned in passing before now, I’ve started working on the 2nd edition of C# in Depth, to roughly coincide with the release of C# 4 and Visual Studio 2010. So far I’ve just been thinking about what should be added and what should be changed or removed. That’s what I’d like to share today, and I’m really eager for feedback. In particular, I’d like to think of three audiences:

  • Developers (possibly hobbyists) who have read one introductory C# book (Head First C#, Microsoft Visual C# 2008 Step by Step or something similar) and are now looking to follow it up. I’m thinking of people with less than a year of full-time development in C#, and not a lot of experience in similar languages. Is C# in Depth too “hard” for this audience? If so, should I try to make it more accessible for them, or is the material inherently too advanced? What might attract such a person to the book in terms of the table of contents?
  • Reasonably experienced developers who know C# 2 and 3 already, but haven’t read C# in Depth. They may have heard of it before (I’ve been very pleased in terms of word-of-mouth for the first edition, so thanks to everyone who’s contributed to that!) but not had much real reason to buy another book when they already have something like C# 3.0 in a Nutshell. Now that C# 4 is around, they may be interested in buying something which will explain the new features, and they might as well get a different book rather than just a fresh edition of something they already own.
  • Existing readers of C# in Depth. If you already own the first edition, what might persuade you to buy the second edition? How much “new” stuff is required? Would you rather get “more bang for the buck” from a significantly thicker book, or does the relative slimness of C# in Depth hold real appeal? (I know I’m a fan of slim books, but I don’t know quite how important it is. The second edition certainly will be thicker than the first as I’m unlikely to remove much, but I’m sure there’s scope for varying just how much we add.)

Here’s a draft table of contents – only down to headings within a chapter, but with some notes. Changes and notes appear beneath the relevant headings.

Part 1: Preparing for the journey

Chapter 1: The changing face of C# development
  • Evolution in action: examples of code change
    I won’t be able to include the C# 4 features by just evolving the existing code… at least, not the big ones. Options:
    • Stick with the existing example for 1 to 2 and 2 to 3, but change to a different problem for 4
    • Try to work out a different example to show all of 1 to 2, 2 to 3 and 3 to 4
    • Stick with the existing example, but take it in a different direction for 4, rather than just rewriting the existing code
  • A brief history of C# (and related technologies)
    Update this with what’s been happening since 2008 and possibly slim down what’s already there
  • The .NET platform
    Add the new versions, including .NET 3.5 SP1, and introduce the DLR in the terminology part.
  • Fully functional code in snippet form
    Nothing new to add about Snippy, but there may be another tool to talk about as well.
  • Summary
Chapter 2: Core foundations: building on C# 1
  • Delegates
  • Type system characteristics
  • Value types and reference types
  • Beyond C# 1: new features on a solid base
    Introduce interface/delegate variance and the dynamic type.
  • Summary

Part 2: Solving the issues of C# 1

Chapter 3: Parameterized typing with generics

  • Why generics are necessary
  • Simple generics for everyday use
  • Beyond the basics
    Add a tip and example about using generic static methods in a non-generic type to take advantage of type inference when creating instances of generic types.
  • Advanced generics
  • Generic collection classes
    It would probably make sense to include HashSet<T> here, and possibly any new collection classes in .NET 4.0. Version warnings would be given, of course!
  • Limitations of generics in C# and other languages
    Refer to part 4 in terms of variance, and possibly trim down the coverage of alternative approaches.
    Add an example of Marc Gravell’s work for generic operators, or at least refer to it. Possibly mention the idea of static interfaces. Both give food for thought, but aren’t really part of C#.
  • Summary
Chapter 4: Saying nothing with nullable types
  • What do you do when you just don’t have a value?
  • System.Nullable<T> and System.Nullable
  • C# 2’s syntactic sugar for nullable types
  • Novel uses of nullable types
    Possibly include Marc Gravell’s teaser about calling all the Object methods on a null value.
  • Summary
    Make sure the summary is typeset as 4.5 instead of as part of 4.4. Oops!
Chapter 5: Fast-tracked delegates
  • Saying goodbye to awkward delegate syntax
  • Method group conversions
  • Covariance and contravariance
    Mention generic delegate variance in C# 4, referring to part 4.
  • Inline delegate actions with anonymous methods
  • Capturing variables in anonymous methods
  • Summary
Chapter 6: Implementing iterators the easy way
  • C# 1: the pain of handwritten iterators
  • C# 2: simple iterators with yield statements
  • Iteration in the real world
    Renamed section, but keep first example – possibly make it shorter.
    Add iteration over lines in a file.
    Add generation iterator, referring to LINQ.
  • Pseudo-synchronous code with the Concurrency and Coordination Runtime
    Fix sample code and explanation, with full article on the web
  • Summary
Chapter 7: Concluding C# 2: the final features
  • Partial types
  • Static classes
  • Separate getter/setter property access
  • Namespace aliases
  • Pragma directives
  • Fixed-size buffers in unsafe code

Part 3: New title TBD

Chapter 8: Cutting fluff with a smart compiler
  • Automatically implemented properties
  • Implicit typing of local variables
  • Simplified initialization
  • Implicitly typed arrays
  • Anonymous types
  • Summary
Chapter 9: Lambda expressions and expression trees
  • Lambda expressions as delegates
  • Simple examples using List<T> and events
  • Expression trees
    Expand for new expression support in .NET 4.0 (clearly labeled as such)
  • Changes to type inference and overload resolution
    Check if/how this has changed in C# 4.0 spec
  • Summary

 

Chapter 10: Extension methods
  • Life before extension methods
  • Extension method syntax
  • Extension methods in .NET 3.5
  • Usage ideas and guidelines
    Update advice here and give some more examples.
  • Summary
Chapter 11: Query expressions and LINQ to Objects

Look at changing how the diagrams are formatted in this chapter.

  • Introducing LINQ
  • Simple beginnings: selecting elements
  • Filtering and ordering a sequence
  • Let clauses and transparent identifiers
  • Joins
  • Groupings and continuations
  • New section: extending LINQ to Objects
    • Advice for writing your own extension methods
    • Plug for MoreLINQ and other libraries
  • Summary
Chapter 12: LINQ beyond collections
  • LINQ to SQL
    Despite the rumours of its doom, I think this is the best place to start.
  • Translations using IQueryable and IQueryProvider
  • LINQ to DataSet
  • LINQ to XML
  • LINQ to Entities
    Again, not much depth – still a whirlwind tour.
  • Third-party LINQ (renamed from LINQ beyond .NET 3.5)
    • Update state of play with third-party providers
    • Example of PushLINQ – still using simple delegates, but against different interface
    • Remove Parallel Extensions – moved to part 4
  • Summary
Chapter 13: Removed (new chapter at end of part 4)

Part 4: C# 4 (full title TBD)

Chapter 13: Dynamic binding in a static language
  • Introduction to the DLR, IronPython, IronRuby
  • Calling dynamically – the dynamic keyword
  • Reacting dynamically – implementing IDynamicObject
  • Applications for dynamic code
  • Summary
Chapter 14: More minor tweaks
  • Named and optional arguments
  • Variance of interfaces and generic delegates
  • Simplifying COM interoperability
  • Summary
Chapter 15: Major new features of .NET 4.0

Even though these don’t affect the language directly, they change the “shape” of code, the way we think about problems. I’m hoping there will be more…

  • Bullet-proofing your code with Code Contracts
  • Simplifying concurrency with Parallel Extensions
  • F# (?)
  • Summary
Chapter 16: Whither now?
  • Educated guesses about C# 5
  • The Renaissance developer
  • Until we meet again
Appendix: LINQ standard query operators

So, how does that sound? What else needs changing from the first edition? Are there any particular sections which would benefit from a neat example? Have I missed anything important that you’d want to see in part 4?

Right now I’ve got a lot of flexibility – so please, I’d much rather hear ideas now than later.

 

Go on, ask me anything

This afternoon, I found a comment which had been trapped in the spam bin for this blog. It was from Andrew Rimmer, in reply to my “micro-celebrity” post, pointing me at http://askjonskeet.com

The world has officially become extremely silly. The surprising thing is, it’s actually useful – at least for me. A number of times I’ve wanted to find my old answers to questions, so I can either just refer to them in a new answer or mark the new question as a dupe. You might have thought that a simple search such as

exceptions site:stackoverflow.com “jon skeet”

would suffice – but that finds what other people have said about exceptions in the same questions that I’ve been active in, and it also picks up any questions which happened to get one of the FinalBuilder adverts when the spider fetched them. The equivalent search on AskJonSkeet.com gets good results.

I’ve no idea how useful it will be for anyone else, but personally I love it. Ego? What ego?

Side note to self, puncturing ego slightly: don’t blog on the tube. It’s way too easy to miss your stop…

Buffering vs streaming benchmark: first results

My poor laptop’s had a busy weekend. It’s run 72 tests, rebooting between each test. Most of these tests have kept both the CPU and disk busy for a lot of the time. I expect to update this blog post with more numbers – and possibly more strategies – as time goes on, but I wanted to get the numbers out as quickly as possible. Graphs should be coming when I’ve got a decent network to experiment with Google graphing.

Source code and setup

While I don’t really expect anyone else to sacrifice their computer for the best part of 24 hours, it doesn’t take very much effort to get the tests running. This zip file contains the source code for three programs, a batch file to generate the test data, and a command file.

The CreateFiles executable creates input files based on its command line arguments. (This is run multiple times by the batch file.) EncryptFiles “encrypts” (I use the term loosely) all the files in a given directory, writing the results to another directory. The class design of EncryptFiles is a little bit funny, but it’s really tailored for these tests. It should be easy to write other strategies in the same way too.

ExecuteAndReboot is used to automate the testing. Basically it reads a command file (from a hard-coded directory; simplicity trumps flexibility here). If the file isn’t empty, it waits five minutes (so that after a reboot the computer has time to settle down after loading all its services) and runs the command specified in the first line of the file, appending the output to a results file. When the command has finished executing, ExecuteAndReboot rewrites the command file, removing the first line, and schedules a reboot. The idea is to drop ExecuteAndReboot into a Startup folder, set Windows to log in automatically, reboot once and then just let it go. If you want to use the computer, just wait until it’s in the “waiting five minutes” bit at the start of the cycle and kill ExecuteAndReboot. Next time you reboot it will start testing again.

That’s the general principle – if anyone does want to run the tests on their own machine and needs a bit of help setting it up, just let me know.

Test environment and workload

I expect the details of the execution environment to affect the outcome significantly. For the sake of the record then, my laptop specs are:

Dell Inspiron 1720 (unlikely to be relevant, but…)
Intel Core 2 Duo T7250 2.0GHz
Vista Home Premium 32 bit
3 GB main memory, 32K L1 cache, 2048K L2 cache
2 disks, both 120GB (Fujitsu MHY2120BH, 5400RPM, 8MB cache)

Even though there are two drives in the system, the tests only ever read from and wrote to a single disk.

I created four sets of input data; I’ve presented the results for each one separately below, and included the description of the files along with the results. Basically in each case there’s a set of files where each file is the same size, consisting of a fixed number of fixed-length lines.

The “encryption” that is performed in each case (on the encoded version of the line) is just to XOR each byte with a value. This is repeated a specified number of times, and the XOR value on each pass is the current index, i.e. it first XORs with 0, then 1, then 2 etc. I’ve used work levels of 0 (the loop never runs), 100 and 1000. These were picked somewhat arbitrarily, but they give interesting results. At 0 the process is clearly IO-bound. At 1000 it’s clearly CPU-bound. At 100 it’s not pegging the CPU, but there’s significant activity. I may try 200 as well at some point.
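
In code, the work per line boils down to something like this (reconstructed from the description above – the method name is mine):

    // The "encryption": XOR every byte once per pass, using the pass index as the value.
    // With a work level of 0 the outer loop never runs, so the task is pure IO.
    static void Encrypt(byte[] data, int workLevel)
    {
        for (int pass = 0; pass < workLevel; pass++)
        {
            for (int i = 0; i < data.Length; i++)
            {
                data[i] ^= (byte) pass;
            }
        }
    }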

For each scenario I tried using 1, 2 and 3 threads (and 4 for the big tests; I’ll go back and repeat earlier ones with 4 threads soon). The two strategies used are “streaming” (read a line, encrypt it, write the line, repeat) and “buffering” (read the whole file, encrypt it all, then write it all out). After a thread has started, it needs no shared data at all – it’s given the complete list of files to encrypt at the very start.
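
Roughly speaking – this is my reconstruction rather than the code in the zip file – the two strategies look like this, with EncryptLine standing in for the per-line transformation:

    // Assumes using System.IO; EncryptLine is the per-line work described above.

    // "Streaming": read a line, encrypt it, write it out, repeat.
    static void EncryptStreaming(string inputFile, string outputFile)
    {
        using (TextReader reader = File.OpenText(inputFile))
        using (TextWriter writer = File.CreateText(outputFile))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                writer.WriteLine(EncryptLine(line));
            }
        }
    }

    // "Buffering": read the whole file, encrypt everything, then write it all out.
    static void EncryptBuffering(string inputFile, string outputFile)
    {
        string[] lines = File.ReadAllLines(inputFile);
        for (int i = 0; i < lines.Length; i++)
        {
            lines[i] = EncryptLine(lines[i]);
        }
        File.WriteAllLines(outputFile, lines);
    }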

Results

For every table, the number of threads is shown by the leftmost column, and the work level is shown by the topmost row. The cells are of the form “x/y” where “x” is the time taken for the streaming strategy, and “y” is the time taken for the buffering strategy. All times are in seconds. For each column, I’ve highlighted the optimal test.

Small tests

There are 10000 test files. Each is 100 lines long, with 100 characters per line.

Threads/Work      0          100        1000
1                 47/90      94/79      163/158
2                 95/112     113/157    129/146
3                 179/128    130/144    152/160
4                 144/139    163/193    156/204

Medium tests (short lines)

There are 100 test files. Each is 500,000 lines long, with 20 characters per line.

Threads/Work      0          100        1000
1                 138/157    219/267    1125/1205
2                 196/238    259/263    604/691
3                 292/250    312/236    624/676
4                 302/291    321/281    602/666

Medium tests (long lines)

There are 100 test files. Each is 100 lines long, with 100,000 characters per line.

Threads/Work      0          100        1000
1                 203/213    249/286    1027/1107
2                 283/264    311/296    602/696
3                 319/284    314/278    608/618
4                 360/300    336/297    607/598

Big tests

There are 20 files. Each file is 100,000 lines long, with 1000 characters per line.

Threads/Work      0          100        1000
1                 190/246    349/441    2047/2165
2                 365/375    392/473    1097/1387
3                 456/379    484/399    1161/1296
4                 517/442    521/431    1113/1214

Conclusions

I’m loath to draw too many conclusions so far. After all, we’ve only got data for a single test environment. It would be interesting to see what would happen on a quad core machine. However, I think a few things can be said with reasonable confidence:

  • Sometimes the streaming strategy wins, sometimes the buffering strategy wins. The difference can be quite significant either way. For a given input and workload, the streaming solution almost always wins on my machine.
  • In an ideal scenario, the bottlenecked resource should be busy for as much of the time as possible. For example, in a high-CPU task it makes little sense to have idle cores when you’ve already loaded data. Likewise in a disk-bound task we should always be fetching (or writing) data even if all our cores are momentarily busy. Personally I think this is the job of the operating system’s pre-fetch system. We could do it ourselves, but it’s nicer if we don’t have to.
  • If we’re disk-bound, it makes sense to read (or write) one file at a time, to avoid seeking. This is obvious from the results – for work levels of 0 and 100, a single-thread solution is consistently best (and streaming usually works better than buffering). Based on this assumption, I suspect a cleaner solution would be to have a single thread reading largish chunks (but not the whole file) and feeding the other threads – or to use async IO. However, this all gets very complicated. One nice aspect of both of the current solutions is that they’re really simple to implement safely. Reading line-by-line in an async manner is a pain. I’d quite like to write an async “execute for every line” helper at some time, but not just yet.
  • If we’re CPU bound, the buffering solution’s sweet spot is 3 threads (remember we’re using 2 cores) and the streaming solution works better with just 2 threads – introducing more threads will make both the IO and context switching less efficient.
  • The streaming strategy always uses less application memory than the buffering strategy. When running the buffering strategy against the “big” data set with 4 threads, the memory varied between 800MB and 1.8GB (as measured by the “Total bytes in all heaps” performance counter), vs around 165KB with the streaming strategy. (Obviously the process itself took more than 165KB, but that was the memory used in “normal” managed heap memory.) Using more memory to improve performance is a valid technique, but I’d say it’s only worth it when the improvement is very significant. Additionally, the streaming technique instantly scales to huge data files. While they’re not in the current requirements, it doesn’t hurt to get that ability for free.

Next steps…

I’ve included quite a few “I’m going to try to […]” points in this post. Just for reference, my plans are:

  • Turn off my virus checker! If this makes a significant difference, I’ll rerun all the current tests
  • Try to visualise the results with Google graphs – although I think it may get cluttered
  • Rerun big/buffering/1000 test – odd results
  • Rerun tests with a work level of 200 and keep an eye on CPU usage – I’d like to spot where we tip between CPU and IO being the bottleneck
  • Try to optimize the Stream/StreamReader being used – we should be able to skip the FileStream buffer completely, and specifying FileOptions.SequentialScan should give more hints to Windows about how we’re intending to read the data. I’ll try a few different buffer sizes, probably 4K, 8K and 64K, just to see what happens.
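
For that last point, the sort of change I have in mind looks something like this – the buffer sizes are guesses to experiment with rather than recommendations:

    // Hint to Windows that we'll read sequentially, keep the FileStream's own buffer
    // tiny (effectively skipping it), and let the StreamReader buffer in larger chunks.
    using (var stream = new FileStream(inputFile, FileMode.Open, FileAccess.Read,
                                       FileShare.Read, 1, FileOptions.SequentialScan))
    using (var reader = new StreamReader(stream, Encoding.ASCII, false, 64 * 1024))
    {
        // ... read lines as before
    }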

This is a really interesting little example problem, because it sounds so simple (and it is simple in terms of the implementation) but it has a lot of environment variables. Of course it would be nicer if we didn’t have to reboot the machine between test runs in order to get a fair result, but never mind…

Benchmarking IO: buffering vs streaming

I mentioned in my recent book review that I was concerned about a recommendation to load all of the data from an input file before processing all of it. This seems to me to be a bad idea in an age where Windows prefetch will anticipate what data you need next, etc – allowing you to process efficiently in a streaming fashion.

However, without any benchmarks I’m just guessing. I’d like to set up a benchmark to test this – it’s an interesting problem which I suspect has lots of nuances. This isn’t about trying to prove the book wrong – it’s about investigating a problem which sounds relatively simple, but could well not be. I wouldn’t be at all surprised to see that in some cases the streaming solution is faster, and in other cases the buffered solution is faster.

The Task

The situation presented is like this:

  • We have a bunch of input files, either locally or on the network (I’m probably just going to test locally for now)
  • Each file is less than 100MB
  • We need to encrypt each line of text in each input file, writing it to a corresponding output file

The method suggested in the book is for each thread to:

  1. Load a file into a List<string>
  2. Encrypt every line (replacing it in the list)
  3. Save to a new file

My alternative option is:

  1. Open a TextReader and a TextWriter for the input/output
  2. Repeatedly read a line, encrypt, write the encrypted line until we’ve exhausted the input file
  3. Close both the reader and the writer

These are the two implementations I want to test. I strongly suspect that the optimal solution would involve async IO, but doing an async version of ReadLine is a real pain for various reasons. I’m going to keep it simple – using plain threading, no TPL etc.

I haven’t written any code yet. This is where you come in – not to write the code for me, but to make sure I test in a useful way.

Environmental variations

My plan of attack is to first write a small program to generate the input files. These will just be random text files, and the program will have a few command line parameters:

  • Directory to put files under (one per test variation, basically)
  • Number of files to create
  • Number of lines per file
  • Number of characters per line

I’ll probably test a high and a low number for each of the last three parameters, possibly omitting a few variations for practical reasons.

In an ideal world I’d test on several different computers, locally and networked, but that just isn’t practical. In particular I’d be interested to see how much difference an SSD (low seek time) makes to this test. I’ll be using my normal laptop, which is a dual core Core Duo with two normal laptop disks. I may well try using different drives for reading and writing to see how much difference that makes.

Benchmarking

The benchmark program will also have a few command line parameters:

  • Directory to read files from
  • Directory to write files to
  • Number of threads to use (in some cases I suspect that more threads than cores will be useful, to avoid cores idling while data is read for a blocking thread)
  • Strategy to use (buffered or streaming)
  • Encryption work level

The first three parameters here are pretty self-explanatory, but the encryption work level isn’t. Basically I want to be able to vary the difficulty of the task, which will vary whether it ends up being CPU-bound or IO-bound (I expect). So, for a particular line I will:

  • Convert to binary (using Encoding.ASCII – I’ll generate just ASCII files)
  • Encrypt the binary data
  • Encrypt the encrypted binary data
  • Encrypt the encrypted encrypted […] etc until we’ve hit the number given by the encryption work level
  • Base64 encode the result – this will be the output line

So with an encryption work level of 1 I’ll just encrypt once. With a work level of 2 I’ll encrypt twice, etc. This is purely for the sake of giving the computer something to do. I’ll use AES unless anyone has a better suggestion. (Another option would be to just use an XOR or something else incredibly simple.) The key/IV will be fixed for all tests, just in case that has a bearing on anything.
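
As a sketch of what each line would go through – my own illustration of the scheme above rather than final code, with key/IV creation and the choice of AES implementation glossed over:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // The encryptor would be created once (per thread) from the fixed key/IV,
    // e.g. using new AesManaged().CreateEncryptor(key, iv).
    static string EncryptLine(string line, int workLevel, ICryptoTransform encryptor)
    {
        byte[] data = Encoding.ASCII.GetBytes(line);
        for (int i = 0; i < workLevel; i++)
        {
            data = encryptor.TransformFinalBlock(data, 0, data.Length);
        }
        return Convert.ToBase64String(data);
    }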

The benchmarking program is going to be as simple as I can possibly make it:

  • Start a stopwatch
  • Read the names of all the files in the directory
  • Create a list of files for each thread to encrypt
  • Create and start the threads
  • Use Thread.Join on all the threads
  • Stop the stopwatch and report the time taken

No rendezvous required at all, which certainly simplifies things. By creating the work lists before starting the threads, I don’t need to worry about memory model issues. It should all just be fine.
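
The skeleton I have in mind is something along these lines – names are mine, and EncryptFiles stands in for whichever strategy is being tested:

    // Assumes the usual usings: System, System.Collections.Generic, System.Diagnostics,
    // System.IO and System.Threading. Argument parsing and error handling are omitted.
    Stopwatch stopwatch = Stopwatch.StartNew();
    string[] files = Directory.GetFiles(inputDirectory);

    // Build one work list per thread up front, so nothing is shared once the threads start.
    List<string>[] workLists = new List<string>[threadCount];
    for (int i = 0; i < workLists.Length; i++)
    {
        workLists[i] = new List<string>();
    }
    for (int i = 0; i < files.Length; i++)
    {
        workLists[i % threadCount].Add(files[i]);
    }

    Thread[] threads = new Thread[threadCount];
    for (int i = 0; i < threads.Length; i++)
    {
        List<string> work = workLists[i];   // local copy, to avoid capturing the loop variable
        threads[i] = new Thread(() => EncryptFiles(work, outputDirectory));
        threads[i].Start();
    }
    foreach (Thread thread in threads)
    {
        thread.Join();
    }

    stopwatch.Stop();
    Console.WriteLine("Total time: {0}ms", stopwatch.ElapsedMilliseconds);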

In the absence of a better way of emptying all the file read caches (at the Windows and disk levels) I plan to reboot my computer between test runs (which makes it pretty expensive in terms of time spent – hence omitting some variations). I wasn’t planning on shutting services etc down: I really hope that Vista won’t do anything silly like trying to index the disk while I’ve got a heavy load going. Obviously I won’t run any other applications at the same time.

If anyone has any suggested changes, I’d be very glad to hear them. Have I missed anything? Should I run a test where the file sizes vary? Is there a better way of flushing all caches than rebooting?

I don’t know exactly when I’m going to find time to do all of this, but I’ll get there eventually :)

Breaking Liskov

Very recently, Barbara Liskov won the Turing award, which makes it a highly appropriate time to ponder when it’s reasonable to ignore her most famous piece of work, the Liskov Substitution (or Substitutability) Principle. This is not idle speculation: I’ve had a feature request for MiscUtil. The request makes sense, simplifies the code, and is good all round – but it breaks substitutability and documented APIs.

The substitutability principle is in some ways just common sense. It says (in paraphrase) that if your code works for some base type T, it should also work with S, a subtype of T. If it doesn’t, S is breaking substitutability. This principle is at the heart of inheritance and polymorphism – I should be able to use a Stream without knowing the details of what its underlying storage is, for example.

Liskov’s formulation is:

Let q(x) be a property provable about objects x of type T. Then q(y) should be true for objects y of type S where S is a subtype of T.

So, that’s the rule. Sounds like a good idea, right?

Breaking BinaryReader’s contract

My case in point is EndianBinaryReader (and EndianBinaryWriter, but the arguments will all be the same – it’s better to focus on a single type). This is simply an equivalent to System.IO.BinaryReader, but it lets you specify the endianness to use when converting values.

Currently, EndianBinaryReader is a completely separate class to BinaryReader. They have no inheritance relationship. However, as it happens, BinaryReader isn’t sealed, and all of the appropriate methods are virtual. So, can we make EndianBinaryReader derive from BinaryReader and use it as a drop-in replacement? Well… that’s where the trouble starts.

There’s no difficulty technically in doing it. The implementation is fairly straightforward – indeed, it means we can drop a bunch of methods from EndianBinaryReader and let BinaryReader handle it instead. (This is particularly handy for text, which is fiddly to get right.) I currently have the code in another branch, and it works fine.
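
To give a flavour – this is a simplified sketch rather than the actual MiscUtil code, which goes via an EndianBitConverter – the ReadInt32 override for big-endian data might look something like this:

    using System;
    using System.IO;

    public class EndianBinaryReader : BinaryReader
    {
        private readonly bool dataIsBigEndian;

        public EndianBinaryReader(Stream input, bool dataIsBigEndian)
            : base(input)
        {
            this.dataIsBigEndian = dataIsBigEndian;
        }

        public override int ReadInt32()
        {
            byte[] bytes = ReadBytes(4);
            if (bytes.Length != 4)
            {
                throw new EndOfStreamException();
            }
            // Reverse when the data's endianness doesn't match the CPU's
            // (BitConverter uses the CPU's endianness, little-endian on x86/x64).
            if (dataIsBigEndian == BitConverter.IsLittleEndian)
            {
                Array.Reverse(bytes);
            }
            return BitConverter.ToInt32(bytes, 0);
        }

        // The other ReadXyz methods would be overridden in the same way.
    }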

And I would have gotten away with it if it weren’t for that pesky inheritance…

The problem is whether or not it’s the right thing to do. To start with, it breaks Liskov’s substitutability principle, if the “property” we consider is “the result of calling ReadInt32 when the next four bytes of the underlying stream are 00, 00, 00, 01” for example. Not having read Liskov’s paper for myself (I really should, some time) I’m not sure whether this is the intended kind of use or not. More on that later.

The second problem is that it contradicts the documentation for BinaryReader. For example, the docs for ReadInt32 state: “BinaryReader reads this data type in little-endian format.” That’s a tricky bit of documentation to understand precisely – it’s correct for BinaryReader itself, but does that mean it should be true for all subclasses too?

When I’ve written in various places about the problems of inheritance, and about why designing a class to be unsealed means doing more design work, this is the kind of thing I’ve been talking about. How much detail does it make sense to specify here? How much leeway is there for classes overriding ReadInt32? Could a different implementation read a “compressed” Int32 instead of always reading four bytes, for example? Should the client care, if they make sure they’ve obtained an appropriate BinaryReader for their data source in the first place? This is basically the same as asking how strictly we should apply Liskov’s substitutability principle. If two types are the same in every property, surely we can’t distinguish between them at all.

I wonder whether most design questions of inheritance basically boil down to defining which properties should obey Liskov’s substitutability principle and which needn’t, for the type you’re designing. Of course, it’s not just black and white – there will always be exceptions and awkward points. Programming is often about nuance, even if we might wish that not to be the case.

Blow it, let’s do it anyway…

Coming back to BinaryReader, I think (unless I can be persuaded otherwise) that the benefits from going against the documentation (and strict substitutability) outweigh the downsides. In particular, BinaryReaders don’t tend to be passed around in my experience – the code which creates it is usually the code which uses it too, or it’s at least closely related. The risk of breaking code by passing it a BinaryReader using an unexpected endianness is therefore quite low, even though it’s theoretically possible.

So, am I miles off track? This is for a class library, after all – should I be more punctilious about playing by the rules? Or is pragmatism the more important principle here?

Book Review: C# 2008 and 2005 Threaded Programming: Beginner’s Guide

Note: The author of this book has requested that I remove their name from this blog post. I have done so in accordance with their wishes, editing comments as well.

Update (19th March 2009)

Debate around this review is getting heated. I stand by all the points I make about the text, but I’d like to clarify a few things:

  • If there are any ad hominem comments in the review against the author, please ignore them. I’m going to try to weed out any that I find, but if you spot one, please let me know and then ignore it. I feel very strongly that a review should be about the text of a book, not about its author. The text is what will inform the reader, not the author’s other work. I’m aware that the author has written many other books, and is generally well-regarded (as far as I can tell, anyway). That neither helps nor hinders the text. The same goes for me and the review, of course. Whether you know me or not, whether you’ve read anything else I’ve written or not, the review should stand on its own merits. This is not a popularity contest – it’s a discussion about a technical book.
  • The impression I’ve given in the review is almost entirely negative. This is because that’s the impression I received as a reader interested in accuracy and best practices. That does not mean that the book is entirely inaccurate – far from it. There are plenty of aspects where I have no particular issues with the accuracy. (The code style is more uniformly disagreeable to me, but that’s a subjective matter.) However, there is enough inaccuracy (and bad practice, in my view – somewhat subjective, but less so than the code style) to make that the dominant impression left with me, alongside my surprise that there’s no proper discussion of locking. As an analogy, imagine you go to a choral concert. Suppose the sopranos, altos and tenors are all perfectly in tune, but the basses are out of key the whole time. In some senses the concert would be 75% accurate – but the 25% inaccuracy would be enough to ruin it. So it is with technical books (not just this one) – it only takes a relatively small degree of inaccuracy to make the difference between a good book and a bad one. The bottom line is: even I don’t think everything or even most of what’s written in the book is wrong; there are enough problems to make me dislike it though.
  • I’ve made a few minor edits to the review just now, to address a few comments made so far. If some of the comments appear to be odd, that may be why!

Resources

  • Publisher’s page (Packt) – this is the cheapest way to buy the book as far as I can see
  • Sample code (49MB download! Mostly because it contains bin/obj directories for all solutions…)
  • Amazon or Barnes and Noble links if you don’t want to buy it directly
  • John Mueller’s review – a much more positive review than this one, which may prove an interesting counterbalance for readers. (Thanks to Erik for pointing out John’s review in the comments.)

Disclaimer

This book doesn’t really compete with C# in Depth, but obviously the very fact that it’s another book about C# at all means I’m probably not entirely unbiased. Arguably it also “competes” with my own (somewhat out of date now) threading article, although that’s not a monetary venture for me. I should also point out that my copy was sent to me for free, specifically for review, by Packt Publishing.

Audience and content

The book claims that “Whether you are a beginner to working with threads or an old hand who is looking for a reference, this book should be on your desk.” In practice, I don’t think it’s really suitable as a reference. The kind of information you really want as a reference is hard to find amidst the bulk of the book, which is ongoing examples. For the rest of this review I’ll regard the intended audience as just beginners.

The first chapter (out of 12; at 388 pages, one of the nice things about this book is that it’s relatively slim) is introductory material about threads and processes, and why concurrency is important in the first place. After this one code-free chapter, the rest of the book is all example-based. The pattern goes something like this:

  • Give rough idea of what we’re building
  • Create first version of the code
  • Explain what it does and why it may not be ideal
  • Improve it
  • Explain how the improvements work
  • Move to next example or add new major feature

That sounds all very well, but I’ll get to my issues with it in a minute. Although the examples are constantly evolving, they essentially break down into these applications:

  • “Code cracking” (brute-forcing a 4 character code)
  • “Encrypting” SMS messages (not real encryption – no key – but a general CPU-intensive transformation)
  • Image processing to find and highlight “old stars” in NASA images
  • “Encrypting” several files
  • More image processing – adjusting the brightness of a large image and thumbnailing it

These are all Windows Forms applications, and are frankly pretty similar, all basically dealing with simple, embarrassingly parallel tasks. That’s not to say the author doesn’t get a fair set of different techniques and lessons out of them:

  • Keeping the UI thread free (and seeing what happens when you don’t)
  • Tips for debugging multi-threaded apps in Visual Studio
  • Showing the performance for individual processes using Task Manager and Windows Explorer
  • Using BackgroundWorker to update the UI
  • Queuing tasks in the system thread pool
  • Creating new threads explicitly
  • Using Control.Invoke/BeginInvoke to update the UI (although this comes very late in the book – chapter 10 out of 12)
  • Keeping tasks independent
  • Noting that sharing data between threads is difficult – but coming to the wrong conclusions (more later)
  • Using the Timer component (just the WinForms timer; not System.Threading.Timer, System.Web.UI.Timer or System.Timers.Timer) – although later on he uses a BackgroundWorker for a task much more suitable for a Timer.
  • A bit of OO design, although in a pretty botched way – the idea of having a general-purpose “parallel algorithm” class and a “parallel algorithm piece” class is reasonable, but it isn’t handled nearly as well as it might be
  • Fairly disastrous advice (IMO) about both I/O and the GC
  • Exception “handling” (where “swallowing exceptions and just reporting them with Debug.Print” counts as “handling” apparently)
  • Parallel Extensions from .NET 4.0, with both PLINQ and TPL

Unfortunately, this misses out some of the most important concepts in parallelism on .NET. The author frequently mentions locking, but only ever in a “we’re avoiding doing it” way. I find it absolutely incredible that a book on multi-threading in C# doesn’t even mention the “lock” keyword. Okay, it’s nice to be able to split tasks up completely independently where possible, but in the real world you sometimes have to use shared mutable state (or at least, it’s often the simplest approach).

When I first got the book, I looked up several entries in the index to see how they’d be handled. I was shocked to find that none of these have an index entry:

  • lock
  • volatile
  • memory model
  • Monitor
  • Wait or Pulse
  • BeginInvoke or Invoke
  • double-checked locking
  • mutable or immutable

The concept of accessing state from multiple threads is glossed over for the entirety of the book. Basically whenever multiple threads want to make their results available, they put them in different elements of an array or list. There’s an assumption that if you read from that array/list in a different thread, it’s all okay. Likewise there’s an assumption that it’s appropriate to read integer variables written to in one thread from another thread without any locking, volatility or use of the Interlocked class. I’ll come back to this topic when I tackle accuracy later on.
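
For the record, this is the sort of thing I mean – my own minimal example, not code from the book – showing the simplest ways of making a shared counter safe to read from another thread:

    // Worker threads update the count; the UI thread reads it. Without the lock
    // (or Interlocked), there's no guarantee the reading thread sees the updates.
    private readonly object padlock = new object();
    private int linesProcessed;

    private void AddProcessedLines(int count)
    {
        lock (padlock)
        {
            linesProcessed += count;
        }
        // Interlocked.Add(ref linesProcessed, count) would do equally well here.
    }

    private int GetProcessedLines()
    {
        lock (padlock)
        {
            return linesProcessed;
        }
    }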

Style

This is a very informal book: something I have no problem with. English clearly isn’t the author’s first language, and although I don’t blame him for some of the clumsy wording in the book (e.g. “We will not leave behind the necessary pragmatism in order to improve performance within a reasonable developing time”) I do wish the book’s editorial team had done a better job in that respect. It’s tricky with technical books: non-technical editors have good reason to be wary of going too far, as small changes in wording can have make a large difference semantically, but it does make a big difference to a book’s readability when the language is clear and idiomatic. (As a side note, I feel incredibly fortunate to have English as my native tongue. I’m not fluent in any foreign languages, and I’m often amazed at how well others manage.)

There are other elements of the style of the book which I have much more of a problem with. The first is the way that the examples are handled. A very large proportion of the book is just lists of instructions: “add some using directives: <code>; add these variables: <code>; add this procedure: <code>; add another procedure <code>; add an event handler <code>” with just a sentence or two of explanation for each one as you go. There’s much more explanation after all the code has been added, but the way that the code is given makes it very hard to see what’s going on. We almost never get to see a whole class in one listing – it’s always broken up into using directives, variables, individual methods etc. This may not be too bad if you’re following along with the book at every single point, but it makes it very hard to just read. As a friend has commented, this content might work a lot better as a screencast, rather than as a book.

One detailed gripe: nearly every time a property is introduced, the author uses the phrase “we want to create a compact and reliable class” as a justification. There’s no explanation, and quite often the properties are mutable for no good reason (when a genuinely reliable class in a multi-threaded setting would be immutable). After a while it made me want to grind my teeth every time I saw it.

The feeling is very much that of a Head First book, but one which doesn’t work. For all my misgivings about Head First C# (which I believe is very much better now that a large number of errors have been removed) the general style was very well handled. It’s not my preferred style to start with (particularly focusing on large GUIs instead of short, complete console apps) but I rarely felt particularly lost in the listings – there was usually enough context to hold onto. Here, I feel there’s very little context at all. If you accidentally miss out a step, you’ll have a really hard time working out which one it is or what’s wrong.

On top of this, there’s the bizarre storyline “explaining” all the listings. Apparently you (the reader) originally started out cracking a code, then got hired by some other crackers, then the FBI, then NASA. We are told of FBI agents getting us cappuccinos, the NASA CIO wanting you to use the Parallel Extensions CTP so that they can get free licences for Visual Studio 2010 and all kinds of other oddities. We are constantly bombarded with plaudits about our threading capabilities – by the last chapter we’re regularly being called “experts” and “threading gurus” despite the fact that we wouldn’t have a clue what was going on if someone presented us with some code using a “lock” statement. This is all patronising in the extreme – and again, Head First C# (and I suspect the rest of the Head First series) handles the “keep it informal but drive the topic forward” aspect a lot more successfully.

Finally, on the topic of style, I’d like to rant a bit about the coding style. It’s awful. Really awful. I realise that coding standards are to some extent a personal thing, but I object to code like this:

  • Pseudo-Hungarian (the type which uses “o” as a prefix for almost any object; not the type Peter likes) and the nature of every variable (local, parameter or instance variable) makes for horrendous variable names such as “prloOutputCharLabels”. It’s not even consistent – variables added by the designer only get a type designation prefix (lbl, but, pic) but no nature prefix. Aargh.
  • Methods are frequently camel-cased instead of Pascal-cased, e.g. “showFishes” and “checkCodeChar”. It’s possible that this is only true for private methods – a very quick flick through doesn’t reveal any public ones like this – but if so it’s inconsistently applied as there are certainly Pascal-cased private methods too. Some public properties combine both annoyances so far, with names such as “poThread” and “piBegin”.
  • Most (but not all) of the time the author declares all of a method’s variables at the top, even if they’re not used for a long time. This includes declarations of variables for use in loops. This took me right back to the 80s, writing ANSI C again. I believe that the ability to declare variables at the point of first use gives a significant improvement in readability. It’s easier to see where a variable will be used if its scope is limited, for example.
  • Using directives aren’t applied nearly thoroughly enough, leaving lots of explicit use of System.Diagnostics, System.Drawing, System.ComponentModel etc. Given the line length limitations in printed books, this is a real killer in terms of providing compact, readable code.
  • Speaking of line length limitations, it would be really useful to actually acknowledge them – if a comment is going to span two printed lines, starting just the first one with “//” and leaving the second indented but not really a comment isn’t a good idea.

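To illustrate that last point about declaration scope, here’s a contrived example of my own (nothing to do with the book’s code) – both methods do the same thing, but in the second it’s immediately obvious how long each variable matters for:

static class LineCounting
{
    // Everything declared up front, 80s-style: the reader has to scan the
    // whole method to find out where each variable is actually used.
    static int CountLongLinesOldStyle(string[] lines)
    {
        int i;
        int count;
        int length;

        count = 0;
        for (i = 0; i < lines.Length; i++)
        {
            length = lines[i].Length;
            if (length > 80)
            {
                count++;
            }
        }
        return count;
    }

    // Each variable declared at the point of first use, with the smallest
    // scope that still works.
    static int CountLongLines(string[] lines)
    {
        int count = 0;
        foreach (string line in lines)
        {
            if (line.Length > 80)
            {
                count++;
            }
        }
        return count;
    }
}
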
So, we’ve got code broken up into chunks which break the flow of the listings, and I don’t even like the style of the code. Still, I could live with that if it were good quality code…

Accuracy and best practices

I’ve already indicated one of the significant problems I have with the book in terms of content: its complete absence of discussion about shared data and locking. Yes, this is a beginner’s book, and I wasn’t expecting the level of detail on the memory model which is present in Joe Duffy’s book (which I promise I’ll review soon) – but I’d certainly prefer to err on the side of safety. The book regularly just accesses data on one thread having written it on another, with no locking, volatility or use of Interlocked. This isn’t the sole bad practice, however, and it’s not limited to stylistic choices either. In the course of the book, we are told all of the items below (and more). Italics indicate what the book claims; regular type indicates my response. These aren’t verbatim quotes, but paraphrases:

  • Forcing garbage collection before starting a multi-threaded operation is a good idea. This is given as a sort of response to a screenshot of Process Explorer showing ugly memory usage. In fact, I can’t reproduce the kind of nasty graph that’s shown in the book, even with the code downloaded from the web site, but if I did see that there are definitely better ways of addressing it than forcing garbage collection. Disposing of Bitmaps appropriately would be a good start… as it is, each bitmap is going to hang around for at least one garbage collection cycle longer than it should, because we’ve got to wait for its finalizer to be executed. Making sure you dispose of objects appropriately is always a good idea – explicitly forcing the garbage collector is almost always a bad one. (Not absolutely always, but usually.)
  • WaitHandle.WaitAll has to run on an MTA thread – so let’s just change the [STAThread] line above the Main method to [MTAThread], with no warning that it’s a really bad idea to do that for Windows Forms. (Side-note: when trying to check that there really isn’t a warning, I had to spend a long time finding the section. The index doesn’t contain entries for MTAThread, STAThread, apartment, WaitHandle or WaitAll. In general the index could do with a lot of work. I’m painfully aware that indexing is a horrible task, but it’s important.)
  • Application.DoEvents() is a way of letting the UI process events. This is true – it’s also another really bad idea unless you absolutely have to use it. Re-entrancy is hard to debug – and not mentioned at all in the book, as far as I can tell.
  • Data streaming is wasteful, because two threads might both want to do I/O at the same time – it’s a better idea for each thread to load all the data it needs and then start processing it. This is stated in a context where streaming is ideal – each thread just needs to process every line in a file. (Each thread is asked to process a different file.) There’s no dependency between the lines of the file. It’s an absolute gift – the buffering and pre-fetch techniques of Windows would guess that we need the next block of data before we ask for it, so the disk would be seeking while we’re encrypting, on each thread. At least, I strongly suspect it would – and I would profile the thing instead of just claiming that we’ve managed to avoid an I/O bottleneck by loading files in their entirety up-front. No mention is made of the fact that as soon as a bunch of big files are queued for encryption, you’ll have a bunch of threads all trying to load everything before they bother starting to do anything. Avoiding I/O contention is a tricky topic, and it deserves better than a couple of misleading paragraphs with no attempt at explaining what the benefits of streaming the data would be.
  • The thread pool is used to queue threads with work to do. If there are already lots of threads busy, the new threads will wait until the old ones have finished. Note the use of “threads” here – not tasks to run on a pool of existing threads, but threads. This would make the thread pool pointless – what is never explained in the book is that creating threads is a relatively expensive business, and you don’t want to do it repeatedly for short-lived tasks when you could instead create a pool of threads and reuse them to run several tasks. Once this purpose is clear, the notion of queuing threads becomes obviously wrong.
  • We can pass some state into the delegate used for work item queuing (or a ParameterizedThreadStart) and use that to give us some context. We need to cast that state to the relevant type before we can use it, because it’s just typed as System.Object. So far so good – except most of the time, the author ends up passing into the work item the same reference which would be available as just “this” within the method itself. So we have code such as:
loPiece.poThread = new Thread(new ParameterizedThreadStart(loPiece.ThreadMethod));
loPiece.poThread.Start(loPiece);
  • The ThreadMethod method then duly casts its parameter to its own type and uses it. All of this is pointless, as the method doesn’t need any parameters – it can just use “this” inside the method (see the first sketch after this list for the kind of shape I’d expect instead).
  • It’s very important to initialize lists with the right capacity. Again, this isn’t too bad as far as it goes – except that this micro-optimisation goes awry when he reads the TextBox.Lines property twice: once to work out the appropriate capacity and once to fill the list with initial data. Unfortunately the TextBox.Lines property has to take the existing text in the TextBox, split it (creating a bunch of substrings) and get the result into an array. This in turn means doing all the normal shenanigans associated with creating buffers which are bigger than you need, filling them, copying to a new buffer etc – exactly what we’re trying to avoid! This “optimization” will usually cost time instead of saving it. It could easily be fixed by just fetching the array in one statement, then using the same array for both the count and the list population. In fact, if you just pass the array into the List<T> constructor, it will perform the optimization for you – it can detect that it’s an ICollection<T> and use the Count property directly. Writing the simplest code actually ends up being optimal (the second sketch after this list shows what I mean).
  • The above bullet point isn’t going to dominate the performance of that example though – there’s a potentially far worse effect due to the way the resulting “encrypted” string is broken up each time: using string concatenation in a loop. I guess we’d better hope there are no really long lines. If an author is going to give optimisation “tips” they need to be a lot more rigorous than this. Using string concatenation in a loop is probably the single best-known performance no-no in .NET (the second sketch after this list uses a StringBuilder instead). I was really shocked to see this in a book which is supposedly about making your code perform better. Now, it could be that string concatenation was used deliberately to slow things down – but in that case, why not highlight it? Drawing attention to intended optimizations gives the impression that the rest of the code is either optimized or has at least been written reasonably. If “bad” code is to be used for a specific purpose, that should be called out so that the reader won’t go on to use the same kind of code in their own production apps (which really shouldn’t be deliberately slowed down).
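
To make the thread pool point concrete: rather than passing the object to its own thread as state and casting it back, I’d expect something more like the following. This is a sketch of my own rather than code from the book – WorkPiece, QueueProcessing and Process are hypothetical names – and it queues a short-lived work item on the pool instead of creating a new thread:

using System.Threading;

class WorkPiece
{
    private readonly string fileName;

    public WorkPiece(string fileName)
    {
        this.fileName = fileName;
    }

    public void QueueProcessing()
    {
        // Queue a short-lived task onto an existing pool thread. There's no
        // need to pass "this" as the state parameter and cast it back later:
        // the delegate can already see the instance.
        ThreadPool.QueueUserWorkItem(ignored => Process());
    }

    private void Process()
    {
        // Work with this.fileName directly here...
    }
}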

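And for the list-capacity and string-concatenation bullets, something along these lines – again a sketch of mine rather than the book’s code, where EncryptLine and EncryptChar are stand-ins for whatever transformation the book actually performs:

using System.Collections.Generic;
using System.Text;
using System.Windows.Forms;

static class EncryptionHelpers
{
    static List<string> GetLinesToEncrypt(TextBox textBox)
    {
        // Read TextBox.Lines exactly once; the List<T> constructor spots that
        // the array is an ICollection<T>, reads its Count, and sizes the list
        // in one go.
        return new List<string>(textBox.Lines);
    }

    static string EncryptLine(string line)
    {
        // Build the result with a StringBuilder rather than repeated string
        // concatenation, which is quadratic in the length of the line.
        StringBuilder builder = new StringBuilder(line.Length);
        foreach (char c in line)
        {
            builder.Append(EncryptChar(c));
        }
        return builder.ToString();
    }

    static char EncryptChar(char c)
    {
        // Purely illustrative per-character transformation.
        return (char)(c + 1);
    }
}
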
These aren’t the only issues I have with the code. Unicode is abused by “encrypting” text with no discussion of whether the strings he produces are valid or not (as opposed to the normal practice of only encrypting data after first converting it into binary; the encrypted binary might then be converted to text using base64 if you need to transmit the encrypted data as text). We could easily end up with strings containing surrogate high or low code points without the corresponding half in the appropriate place. When analyzing a bitmap he uses GetPixel and SetPixel for each pixel, rather than calling LockBits once and then accessing the image data in a much faster manner. (The code given does scale, but it’s not as fast as it could be. Using LockBits it would still scale, but the “per thread” work would be faster.) There are other, similar issues lurking in the text, but I’m sure you get the gist of the problem.
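
On the encryption point, for what it’s worth, the conventional shape looks something like this – a sketch of my own, using AES purely as an example algorithm; none of it comes from the book:

using System;
using System.Security.Cryptography;
using System.Text;

static class TextEncryption
{
    static string EncryptToBase64(string plainText, Aes aes)
    {
        // Convert the text to binary first...
        byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);

        // ...encrypt the bytes...
        using (ICryptoTransform encryptor = aes.CreateEncryptor())
        {
            byte[] cipherBytes = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);

            // ...and only convert back to text (via base64) if the ciphertext
            // genuinely needs to travel as a string. That way there's no risk
            // of producing strings containing unpaired surrogates.
            return Convert.ToBase64String(cipherBytes);
        }
    }
}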

Conclusion

Believe it or not, there are things about this book that I actually like. It’s relatively thin, which has very tangible advantages when you’re carrying it around a lot. The sections explaining how to use Process Explorer and Task Manager to their best are useful, and the ideas of the examples are good – even though they basically cover the same ground several times. Unfortunately the bad points outweigh the good far too heavily. To summarise them:

  • What I consider some of the absolute core elements of .NET multithreading (locking and monitors in particular) aren’t covered at all
  • Only the simple situation of an embarrassingly parallel algorithm is covered. In the real world, developers will have to face real challenges where tasks don’t always split themselves up nicely into totally independent chunks. A reader who finishes this book assured that they are now a threading guru will face a nasty shock.
  • Server-side threading isn’t given much coverage at all, despite this being arguably the most likely environment for developers to encounter multithreading
  • The “story” element of the prose style is childish and patronising
  • The coding style, while a personal choice, makes me wince – and is particularly verbose for a book, where space is important
  • Many bad practices are encouraged, and there are plenty of important misunderstandings to trip up readers
  • The index has failed me (even when I’ve known that the subject is in the book) more times than it’s helped me

It’s a real pity. I was hoping this would be a book I could recommend to people as a precursor to reading Joe Duffy’s excellent Concurrent Programming on Windows. Instead, my current best advice is to read Joe Albahari’s threading tutorial. (I previously had a link to my own threading tutorial as well, but apparently this made people think I was fishing for more readers of that.) I’m sure there are good introductory threading books out there, but I’m afraid this isn’t one of them.

Current book project: Real World Functional Programming

Now that it’s all official, I can reveal that since the tail end of last year I’ve been helping out with Real World Functional Programming by Tomáš Petříček (and me, ish).

I’m doing the same kinds of things I did with Groovy in Action, coming to the text with fresh eyes: I’m technically competent but don’t have any expertise in the subject matter. This means I can see it as a “real” reader would, challenge the assumptions, point out places which need more explanation etc. I’m also polishing the language a bit and adding a few informative notes. I suspect by the end there won’t be a single paragraph that I’ve written in its entirety, but there’ll also be few paragraphs which I haven’t touched at all.

It’s exciting to get a chance to work on the book, and I hope I’m improving it. Check it out, join the MEAP, and give us feedback :)

I have a couple of other books on the back-burner at the moment too; hopefully I’ll be able to give more details reasonably soon.