Category Archives: Book reviews

Book review: Pro LINQ – Language Integrated Query in C# 2008, by Joe Rattz

I’m trying something slightly different this time. Joe (the author) has reacted to specific points of my review, and I think it makes sense to show those reactions. I’d originally hoped to present them so that you could toggle them on or off, but this blog server apparently wants to strip out scripts etc, so the comments are now permanently visible.

Resources

Introduction and disclaimer

As usual, I first need to give the disclaimer that as the author of a somewhat-competing book, I may be biased and certainly will have different criteria to most people. In this case the competition aspect is less direct than normal – this book is “LINQ with the C# 3 bits as necessary” whereas my book is “C# 2 and 3 with LINQ API where necessary”. However, it’s still perfectly possible that a potential reader may try to choose between the two books and buy just one. If you’re in that camp, I suggest you buy my book – sorry, try to find an impartial opinion – instead of trusting my review.

A second disclaimer is needed this time: I didn’t buy my copy of this book; it was sent to me by Apress at the request of Joe Rattz, specifically for review (and because Joe’s a nice guy). I hope readers of my other reviews will be confident that this won’t change the honest nature of the review; where there are mistakes or possible improvements, I’m happy to point them out.

Content, audience and overall approach

This book is simply aimed at existing C# developers who want to learn LINQ. There’s an assumption that you’re already reasonably confident in C# 2 – knowledge of generics is taken as read, for example – but there is brief coverage of using iterator blocks to return sequences. No prior experience of LINQ is required, but the LINQ to XML and LINQ to SQL sections assume (not unreasonably) that you already know XML and SQL.

The book is divided into five parts:

  • Introduction and C# 3.0 features (50 pages)
  • LINQ to Objects (130 pages)
  • LINQ to XML (152 pages)
  • LINQ to DataSet (42 pages)
  • LINQ to SQL (204 pages)

The approach to the subject matter changes somewhat through the book. Sometimes it’s a concept-by-concept “tutorial style” approach, but for most of the book (particularly the LINQ to Objects and LINQ to XML parts) it reads more like an API reference. Joe recommends that readers tackle the book from cover to cover, but that falls down a bit in the more reference-oriented sections.

[Joe] Early in the development of my book, a friend asked me if it was going to be a tutorial-style book or a reference book. I initially found the question odd because I never really viewed books as being exclusively one or the other. Perhaps I am different than most readers, but when I buy a programming book, I usually read a bit, start coding, and then refer to the book as a reference when needed. This is how I envision my book being used by readers and the type of book I would like for it to be. I see it as both a tutorial and a reference. I want it to be a book that gets used repeatedly, not read once and shelved. Some books work better for this than others. I rarely read a programming book cover to cover because I just don’t have time for that. I think ultimately, most authors write the book they would want to read, and that is what I did. I hope that if someone buys my book, in two years it will be tattered and worn from use as a reference, as well as read cover to cover.

I would disagree that the majority of the book reads like an API reference. Certainly, chapters 4 and 5 (deferred and nondeferred operators) work better as a reference because there isn’t a lot of connective context between the approximately 50 different standard query operators. At best it would be an eclectic tutorial with little continuity. So I decided to make those two chapters (the ones covering the standard query operators) function more like a reference. I knew that I (and hopefully my readers) would refer to it time and time again for information about the operators, and based on most of the reviews I have seen, this appears to have been a good choice. I know I refer to it myself quite frequently. I would not consider the chapters on LINQ to XML to be reference oriented although I could see why someone might feel they are. My discussion of LINQ to XML is tutorial based as I approach the different tasks a developer would need to accomplish when working with XML, such as how to construct XML, how to output XML, how to input XML, how to traverse XML, etc. However, within a task, like traversing XML, I do list the API calls and discuss them, so this is probably why it feels reference-like to some readers, and will function pretty well as a reference.

For example, take the ordering operators in LINQ – OrderBy, ThenBy, OrderByDescending and ThenByDescending. (Interestingly, one of the Amazon reviews picks up on the same example. I already had it in mind before reading that review.) These four LINQ to Objects operators take 15 pages to cover because every method overload is used, but a lot of it is effectively repeated between different examples. I think more depth could have been achieved in a shorter space by talking about the group as a whole – we only really need to see what happens when a custom comparison is used once, not four times. Meanwhile, every example of ThenBy/ThenByDescending used an identity projection, instead of showing how you can make the secondary ordering use some completely different projection (without necessarily using a custom comparer). Likewise I don’t remember seeing anything about tertiary orderings, or what the descending orderings tend to do with nulls, or emphasis on the fact that descending orderings aren’t just reversed ascending orderings (due to the stability of the sort – the stability was mentioned, but not this important corollary). Having an example for each overload is useful for a reference work, but not for a “read through from start to finish” book.
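To give a flavour of the kind of example I mean, here’s a quick sketch of my own (not a listing from the book): a secondary, descending ordering which uses a completely different projection from the primary one, with no custom comparer in sight.

using System;
using System.Linq;

var people = new[]
{
    new { Name = "Holly", Age = 36 },
    new { Name = "Jon", Age = 32 },
    new { Name = "Tom", Age = 4 },
    new { Name = "Robin", Age = 1 },
    new { Name = "William", Age = 1 }
};

// Primary ordering by length of name, secondary ordering (descending) by age:
// two completely different projections, and neither needs a custom comparer.
var ordered = people.OrderBy(p => p.Name.Length)
                    .ThenByDescending(p => p.Age);

foreach (var person in ordered)
{
    Console.WriteLine("{0} ({1})", person.Name, person.Age);
}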

The set operators (Distinct, Except, Intersect, Union and SequenceEqual) as applied to DataSets suffer a similar problem – the five descriptions of why custom comparers are needed are all basically the same, and could be dealt with once. In particular, one paragraph is repeated verbatim for each operator. Again, that’s fine for a reference – but cutting and pasting like this makes for an irritating read when you see the exact same text several times in one reading session.

[Joe] A few readers have complained about some of the redundancies that you have pointed out, but I think most of the readers have appreciated my attempt to provide material for each operator/method. I think one of the words you will see most often in the Amazon reviews is “thorough”.
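For what it’s worth, the pattern being repeated boils down to a single idea: DataRow uses reference equality by default, so the set operators need DataRowComparer.Default in order to compare rows field by field. Here’s a sketch of my own (assuming two DataTables, customers1 and customers2, with the same schema):

using System.Data;
using System.Linq;

// Rows present in both tables, compared by value rather than by reference.
var common = customers1.AsEnumerable()
                       .Intersect(customers2.AsEnumerable(), DataRowComparer.Default);

// Distinct rows within a single table, again compared by value.
var distinctCustomers = customers1.AsEnumerable()
                                  .Distinct(DataRowComparer.Default);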

Now, it’s important that I don’t give the wrong impression here. This is certainly not just a reference book, and there’s enough introduction to topics to help readers along. If I’d been coming to C# 3 and LINQ without any other information, I think I’d have followed things, for the most part. (I’m not a fan of the way Joe presented the query expression translations, but I’m enormously pleased that he did it at all. I think I might have got lost at that point, which was unfortunately early in the book. It might have been better as just an appendix.) Anyone reading the book thoroughly should come away with a competent knowledge of LINQ and the ability to use it profitably. They may well be less comfortable with the new features of C# 3, as they’re only covered briefly – but that’s entirely appropriate given the title and target of the book. (To be blunt and selfish, I’m entirely in favour of books which leave room for more depth at a language level – that should be a good thing for sales of my own book!)

[Joe] Jon, if you only knew how difficult it was getting those query expression translations into the book. ;-) You can read in my acknowledgments where I specifically thank Katie Stence and her team for them. They were a very painful effort and in hindsight, I probably would not include them if I were to start the book from scratch. I agree with you that the translations are complex, as the book states. Perhaps the most important part of that section is when I state “Allow me to provide a word of warning. The soon to be described translation steps are quite complicated. Do not allow this to discourage you. You no more need to fully understand the translation steps to write LINQ queries than you need to know how the compiler translates the foreach statement to use it. They are here to provide additional translation information should you need it, which should be rarely, or never.”
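For readers who haven’t met the translations before, the essential idea is a mechanical rewrite from query syntax into chained method calls. A simple case looks like this (my sketch, assuming a participants collection with FirstName and LastName properties):

// The query expression...
var query = from participant in participants
            where participant.LastName == "Rattz"
            orderby participant.FirstName
            select participant.FirstName;

// ...is translated by the compiler into ordinary method calls:
var translated = participants
    .Where(participant => participant.LastName == "Rattz")
    .OrderBy(participant => participant.FirstName)
    .Select(participant => participant.FirstName);

The full translation rules also have to cope with multiple from clauses, transparent identifiers, degenerate queries and so on, which is where the complexity comes from.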

However, I would personally have preferred to see a more conceptual approach which spent more time focused on getting the ideas through at a deep level and less time making sure that every overload was covered. After all, MSDN does a reasonable job as a reference – and the book’s web site could have contained an example for every overload if necessary without everything making it into print. The kind of thing I’d have liked to see explored more fully is the buffering vs streaming nature of data flow in LINQ. Some operators – Select and Where, for example – stream all their data through. They never need to look at more than one item of data at a time. Others (Reverse and OrderBy, for example) have to buffer up all the data in the sequence before yielding any of it. Still others use two sequences, and may buffer one sequence and stream the other – Join and Intersect work that way at the moment, although as we saw in my last blog post Intersect can be implemented in a way which streams both sequences (but still needs to keep a buffer of data it’s already seen). When you’re working with an infinite (or perhaps just very large – much bigger than memory) sequence you really need to be aware of this distinction, but it isn’t covered in Pro LINQ as far as I remember. In the interests of balance, I should point out that the difference between immediate and deferred execution is explained, repeatedly and clearly – including the semi-immediate execution which can occur sometimes in LINQ to SQL.

[Joe] I wanted my book to cover each overload because I can’t read MSDN in the bathroom, or when at the beach without an internet connection, or when curled up in a chair by the fireplace. I also wanted to provide examples for every method and overload because I find it frustrating when a book shows the simplest one and I have to figure out the one I need. Granted, depth could be added too, but you have to draw the line somewhere. Apress (at the time, not sure if this is still the plan) has the concept of three levels of book; Foundations, Pro, and Expert. I considered some information beyond the scope of the Pro level that my book is aimed at. The buffering versus streaming issue is an interesting one and would make an excellent additional column in Table 3-1, if I can get it to fit.
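To show what I mean by the distinction, here’s a sketch of my own (not from the book) using an effectively infinite sequence: the streaming operators cope happily, while a buffering operator never gets as far as yielding its first element.

using System;
using System.Collections.Generic;
using System.Linq;

// An infinite sequence, produced lazily by an iterator block.
static IEnumerable<int> Naturals()
{
    for (int i = 0; ; i++)
    {
        yield return i;
    }
}

// ...

// Fine: Where and Select stream their data, looking at one element at a time.
foreach (int value in Naturals().Where(x => x % 2 == 0)
                                .Select(x => x * x)
                                .Take(5))
{
    Console.WriteLine(value);
}

// Never produces anything: OrderBy (like Reverse) has to buffer the entire
// sequence before it can yield its first result.
// foreach (int value in Naturals().OrderBy(x => x).Take(5)) { ... }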

I’m unable to really judge the depth to which LINQ to SQL was explored, given that a lot of it was beyond my own initial knowledge (which is a good thing!). I’m slightly perturbed by the idea that it can be comprehensively tackled in a couple of hundred pages, whereas books on other ORMs are often much bigger and tackle topics such as session lifetimes and caching in much more depth. I suspect this is more due to the technologies than the writing here – LINQ to SQL is a relatively feature-poor ORM compared with, say, Hibernate – but a bit more attention to “here are options to consider when writing an application” would have been welcome.

Accuracy and code style

Most of Pro LINQ is pretty accurate. Joe is occasionally a bit off in terms of terminology, but that probably bothers most readers less than it bothers me. There are a few things which changed between the beta version of VS2008 against which the book was clearly developed and the release version, which affect the new features of C# 3. For instance, automatically implemented properties aren’t mentioned at all (and would have been much nicer to see in examples than public fields) and collection initializers are described with the old restrictions (the collection type has to implement ICollection<T>) rather than the new ones (the collection type has to implement IEnumerable and have appropriate Add methods). Other errors include trusting the documentation too much (witness the behaviour of Intersect) and an inconsistency (stating correctly that OrderBy is stable on one page, then incorrectly warning that it’s unstable on another). In my normal fashion, I’ll give Joe an exhaustive list of everything I’ve found and leave it up to him to see which he’d like to fix for the next printing, but overall Pro LINQ does pretty well. I suspect this may be partly due to covering a great deal of area but with relatively little depth and some repetition – Accelerated C# had a higher error rate, but was delving into more treacherous waters, for example.

[Joe] Since my book is not meant to be a C# 3.0 book, but rather a LINQ book, I only cover the new C# 3.0 features which were added to support LINQ. Since automatic properties were not one of those features, I do not cover them. You may notice that my chapter dedicated to the new C# 3.0 features is titled C# 3.0 Language Enhancements For LINQ. Just for your readers’ knowledge, the ordering is now specified to be stable. Initially it was unstable, and was later changed to be stable, but I was told it would be specified as unstable; apparently at some point the specification was changed to say stable. My book was updated but apparently I missed a spot.
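For the record, here’s what the release rules allow, sketched by me rather than taken from the book: the type being initialized only needs to implement IEnumerable and expose a suitable Add method, and an automatically implemented property keeps the boilerplate down.

using System.Collections;
using System.Collections.Generic;

public class NameList : IEnumerable
{
    private readonly List<string> names = new List<string>();

    // Automatically implemented property - no explicit backing field required.
    public int Count { get; private set; }

    // Any accessible Add method is enough for a collection initializer;
    // implementing ICollection<T> is no longer required.
    public void Add(string name)
    {
        names.Add(name);
        Count++;
    }

    public IEnumerator GetEnumerator()
    {
        return names.GetEnumerator();
    }
}

// ...

NameList list = new NameList { "Joe", "Jon", "Trey" };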

Most of the advice given throughout the book is reasonable, although I take issue with one significant area. Joe recommends using the OfType operator instead of the Cast operator, because when a nongeneric collection contains the “wrong type of object,” OfType will silently skip it whereas Cast will throw an exception. I recommend using Cast for exactly the same reason! If I’ve got an object of an unexpected type in my collection, I want to know about it as soon as possible. Throwing an exception tells me what’s going on immediately, instead of hiding the problem. It’s usually the better behaviour, unless you explicitly have reason to believe that you will legitimately have objects of different types in the collection and you really want to only find objects of the specified type.

[Joe] Yes, I should have known better than to provide that advice (prefer OfType to Cast) without more explanation, more disclaimers, and more caveats. My preference would be to use Cast in development and debug built code for the exact reasons you mention, but to use OfType in production code. I would prefer my applications to handle unexpected data more gracefully in production than I would in development.
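To make the difference concrete, here’s a quick sketch of my own: with a rogue element in a nongeneric collection, Cast fails fast while OfType quietly hides the evidence.

using System;
using System.Collections;
using System.Linq;

ArrayList mixed = new ArrayList { "first", "second", 3, "fourth" };

// OfType: silently skips the Int32, yielding "first", "second" and "fourth".
foreach (string item in mixed.OfType<string>())
{
    Console.WriteLine(item);
}

// Cast: throws InvalidCastException as soon as iteration reaches the Int32,
// telling you immediately that the collection contains unexpected data.
foreach (string item in mixed.Cast<string>())
{
    Console.WriteLine(item);
}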

As well as “headline” pieces of advice which are advertised right up to the table of contents, there are many hints and tips along the way, most of which really do add value. I believe they’d actually add more value if they weren’t sometimes buried within reference-like material – but as we’ve already seen, my personal preference is for a more narrative style of book anyway.

The code examples are in “snippet” form (i.e. without using directives, Main method declarations etc) but are complete aside from that. At the start of each chapter there’s a detailed list of which namespaces and references are involved, so there’s no guesswork required. In fact, I’d expect most of them to work in Snippy given an appropriate environment. Some examples are a bit longwinded – we only really need to see the 7 lines showing the list of presidents once or twice, not over and over again – but that’s a minor issue. Another niggle is Joe’s choices when it comes to a few bits of coding convention. There are various areas where we differ, but a few repeatedly bothered me: overuse (to my mind) of parentheses, “old-style” delegate creation (i.e. something.Click += new EventHandler(Foo) instead of just something.Click += Foo) and the explicit specification of type parameters on LINQ operators which don’t need them. Here’s one example which demonstrates the first and the last of these issues – as well as introducing an unnecessary cast:

// This is the code in the book (in listing 7-30)
XElement outOfPrintParticipant = xDocument
  .Element("BookParticipants")
  .Elements("BookParticipant")
  .Where(e => ((string)((XElement)e).Element("FirstName")) == "Joe"
           && ((string)((XElement)e).Element("LastName")) == "Rattz")
  .Single<XElement>();

// This is what I'd have preferred
XElement outOfPrintParticipant = xDocument
  .Element("BookParticipants")
  .Elements("BookParticipant")
  .Where(e => (string)e.Element("FirstName") == "Joe"
           && (string)e.Element("LastName") == "Rattz")
  .Single();

Check out the penultimate line of the original – a whopping 5 opening brackets and 6 closing ones. This issue looks even worse to me when it’s used to make return and throw look like method calls:

// From GetStringFromDb (P388)
throw (new Exception(
            String.Format("Unexpected exception executing query [{0}].", sqlQuery)));

// (Insert more code here) – same listing

return (result);

These just look odd and wrong. Of course they’re perfectly valid, but not pleasant to read in my view. On a more minor matter, Joe tends to close SQL connections, commands etc with an explicit try/finally block instead of the more idiomatic (to my mind) using statement, but again that probably bothers me more than others.
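For comparison, this is the shape I’d naturally reach for (a sketch of my own rather than a rewrite of the book’s listing, assuming connectionString and sqlQuery variables are in scope): the using statements guarantee disposal however the block is exited.

using System.Data.SqlClient;

string result;
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(sqlQuery, connection))
{
    connection.Open();
    result = (string) command.ExecuteScalar();
}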

The source code is all available on the web site, and it’s easy to find each listing. (The zip file is about 10 times larger than it needs to be because it contains all the bin/obj directories with all the compiled code in rather than just the source, but that’s a tiny niggle.)

Writing style

Joe’s writing style is very informal – or at least, while most of the text is in “normal” formal prose, there are plenty of informal pieces of writing there too. As readers of my book will know, I’m much the same – I try to keep things from getting too dry, despite that being the natural state for technical teaching. I have no idea how well I succeed for most readers, but Joe certainly manages. He occasionally takes it a little too far for my personal taste, usually around listing outputs. They’re often introduced as if Joe didn’t really know what the output would be, with a kind of “wow, it worked, who’d have thought?” comment afterwards. I suspect I’ve got some of this in my book too, but Joe lays it on a little too thickly for my liking. I don’t know whether it would be fairer to present a “medium-level” example of this rather than one which really grated, but this is the one (from page 257) that made such an impression that I remembered it over 300 pages later:

This should output the language attribute. Let’s see:


language="English"

Groovy! I have never actually written the word groovy before. I had to let the spelling checker spell it for me.

Now, I really want to stress that that’s a “worst case” rather than the average case, and indeed many listings don’t have anything “cutesy” about them. I just wanted to give an example of the kind of thing that didn’t work for me.

[Joe] Let me see if I get this straight. So you are saying you got to learn something about LINQ and how to spell groovy, and it stuck for over 300 pages and you are upset? Man, you know how to spell groovy now, what’s the problem? 8-D Would it annoy you less if I told you that is a reference to Austin Powers? My book is riddled with references to movies and TV shows, and that one is for Austin Powers. Maybe you didn’t catch that, or maybe you don’t like Austin Powers, or maybe you just still don’t like it. One reader was irritated when I said “Dude, Sweet” because he didn’t recognize that as a reference to Dude, Where’s My Car. I have references to Office Space, Arrested Development, Bottle Rocket, Seinfeld, The Matrix, Wargames, Tron, etc. In fact, on page 455, I actually use the word “moo” instead of “moot” in reference to Friends. My copy editor actually corrected that for me, but once I explained it, she let me have it back. So if you see something goofy, like “groovy” just know it is a reference to something and begin your investigation in your spare time. And if you see an error, it is intentional to make sure you are paying attention. ;-) As you have already pointed out, technical writing can be dry. I made an effort to inject humor into the book in the form of references to pop culture, most specifically movies and television. Sometimes the reference is in a comment like “groovy”, and sometimes it’s in the sample data like a character’s name. Like any comedian, every joke or reference can’t be a hit with everyone. I will say though that I have heard more from those that recognized the references and appreciated them (which helps carry a reader through the lesser interesting parts) than I have from those that found them annoying.

What really did work was including hints and tips which explicitly said where Joe had received unexpected results with slightly different code. If anything is unexpected to the author, it may well be unexpected to readers too, so I really appreciated reading that sort of thing. (It would be wearing if Joe were stupid and expected all kinds of silly results, but that’s not the case at all.)

Conclusion

Pro LINQ is a good book. It has enough niggles to keep me from using superlatives about it, but it’s good nonetheless. It’s Joe’s first book (just like C# in Depth is the first one I can truly call “mine”) and I hope he writes more. Having read it from cover to cover, I think it’ll be more useful as a reference for individual methods (when MSDN doesn’t quite cut it) than to reread whole chapters, but that’s not a problem. My slight complaints above certainly don’t stop it from being a book I’m pleased to own.

[Joe] I’ll take it as a compliment that you think my book would be useful for those times that MSDN isn’t good enough!

This is the first LINQ book I’ve reviewed – I already have LINQ in Action, which is also on the list to review at some point. (I’ve read large chunks of it in soft copy, but I haven’t been through the finished hard copy yet.) It will be interesting to see how the two compare. Next up will probably be “Programming C# 3.0” by Jesse Liberty, however.

Book reviews – what do you look for?

I’ve just started writing the book review for “Pro LINQ – Language Integrated Query in C# 2008” and I wondered what people look for in a review. I’ve talked before about who is in the best position to write a review – but this is slightly different. In particular, what sort of balance do you want between totally factual aspects (what’s covered, the kinds of mistakes I found) and pretty subjective aspects (the writing style, quality of advice given)? Is a long and detailed review useful, or are you likely to just skip to the conclusion anyway?

I guess it’s worth answering my own question, partly in the hope that someone will write this kind of review for C# in Depth. (There are plenty of reviews, but not many in significant detail.) Here’s what I like to see:

  • A mixture of subjective opinions and objective facts
  • An example or two of the kind of technical errors found, and a rough idea of how often such errors occur
  • Who the book is aimed at, and more subjectively who it wouldn’t be useful for
  • A brief summary of what’s covered – and what’s not covered, if that’s relevant
  • A feeling of how well structured/ordered the book is – does it lead the reader through the technology, or jump around?
  • An idea of the author’s style – formal or informal, reference or tutorial, etc
  • Which aspects of that style irked the reader, and which worked well
  • Examples of all of this! It’s one thing to say that a style annoys you – it’s another to give an example which will let the review’s reader judge for themselves.
  • How the author could improve, and their existing strengths
  • A final gut feeling of how much you like the book, despite/because of the above

Not all of these are suitable for all books, and I wouldn’t like to say that my own reviews have included all of them so far, but I think that’s what I’d appreciate reading. That suggests a fairly comprehensive review, of course – which is just what I’m after when making a reading decision.

I’d love to know what you think – it won’t be in time to affect the review I’m writing now, of course, but I’ll try to take comments into account for future reviews.

Book review: Accelerated C# 2008 by Trey Nash

Time for another book review, and this time it’s due to a recommendation from a reader who has this one, C# in Depth and Head First C#.

Resources

Introduction and disclaimer

My normal book review disclaimer applies, but probably more so than ever before. Yes, Accelerated C# 2008 is a competitor to C# in Depth. They’re different in many ways, but many people would no doubt be in the target audience for both books. If you meet that criterion, please be aware that as the author of C# in Depth I can’t possibly be 100% objective when reviewing another C# book. That said, I’ll try to justify my opinions everywhere I can.

Target audience and content overview

Accelerated C# 2008 is designed to appeal to existing developers with experience in an OO language. As one of the Amazon reviews notes, you may struggle somewhat if you don’t have any .NET experience beforehand – while it should be possible to read it knowing only Java or C++, there are various times where a certain base level of knowledge is assumed and you’ll want to refer to MSDN for some background material. If you come at the book with no OO experience at all, I expect you’ll have a hard time. Although chapter 4 does cover the basics of OO in .NET (classes, structs, methods, properties etc), this isn’t really a beginner’s book.

In terms of actual content covered, Accelerated C# 2008 falls somewhere between C# in Depth (almost purely language) and C# 3.0 in a Nutshell (language and then core libraries). It doesn’t attempt to cover all the core technologies (IO, reflection, security, interop etc are absent) but it goes into detail beyond the C# language when it comes to strings, exceptions, collections, threading and more. As well as purely factual information, there’s a lot of guidance as well, including a whole chapter entitled “In Search of C# Canonical Forms.”

General impressions

I’d like to make it clear to start with that I like the book. I have a number of criticisms, none of which I’m making up for the sake of being critical – but that in no way means it’s a bad book at all. It’s very unlikely that you know everything in here (I certainly didn’t) and the majority of the guidance is sound. The code examples are almost always self-contained (a big plus in my view) and Trey’s style is very readable. Where there are inaccuracies, they’re usually pretty harmless, and the large amount of accurate and insightful material makes up for them.

Just as I often compare Java to C# in my book, so Trey often compares C++ to C# in his. While my balance of C# to C++ knowledge is such that these comments aren’t particularly useful to me, I can see them being good for a newcomer to C# from a C++ background. I thought there might have been a few too many comparisons (I understood the point about STL and lambdas/LINQ the first time round…) but that’s just a minor niggle.

Where C# in Depth is primarily a “read from start to finish” book and C# 3.0 in a Nutshell is primarily a reference book (both can be used the other way, of course), Accelerated C# 2008 falls between the two. It actually achieves the best of both worlds to a large extent, which is an impressive feat. The ordering could be improved (more on this later on) but the general feeling is very good.

One quick word about the size of the book in terms of content: if you’re one of those people who judges the amount of useful content in a book on its page count, it’s worth noting that the font in this book is pretty small. I would guess that it packs about 25% more text per page than C# in Depth does, taking its “effective” page count from around 500 to 625. Also, the content is certainly meaty – you’re unlikely to find yourself skimming over loads of simple stuff trying to get to the good bits. Speaking of “getting to the good bits” let’s tackle my first significant gripe.

Material organisation

If you look at the tables of contents for Accelerated C# 2008 and Accelerated C# 2005, you’ll notice that the chapter titles from the 2005 edition carry over, in the same order, into the 2008 edition. There are three extra chapters in the new edition, covering extension methods, lambda expressions and LINQ. That’s not to say that the content of the “duplicate” chapters is the same as before – C# 3.0 features are introduced in the appropriate place within existing chapters. In terms of ordering the chapters, I think it would have been much more appropriate to keep the last chapter of the old edition – “In Search of C# Canonical Forms” – as the last chapter of the new edition. Apart from anything else, that would allow it to include hints and tips involving the new C# 3 features which are currently covered later. It really feels like a “wrapping up” chapter, and deserves to be last.

That’s not the only time that the ordering felt strange, however. Advanced topics (at least ones which feel advanced to me) are mixed in with fairly basic ones. For instance, in the chapter on exceptions, there’s a section about “exception neutrality” which includes details about constrained execution regions and critical finalizers. All interesting stuff – even though I wish there were a more prominent warning saying, “This is costly to both performance and readability: only go to these lengths when you really, really need to.” However, this comes before a section about using try/finally blocks and the using statement to make sure that resources are cleaned up however a block is exited. I can’t imagine anyone who knows enough C# to take in the exception neutrality material also not knowing about try/finally or the using statement (or how to create your own custom exception class, which comes between these two topics).
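For anyone who does need the reminder, the using statement is effectively compiler shorthand for exactly that try/finally pattern. A sketch (ProcessData here is just a placeholder for whatever work uses the stream):

using System;
using System.IO;

// This...
using (FileStream stream = File.OpenRead("data.txt"))
{
    ProcessData(stream);
}

// ...is effectively expanded by the compiler into this:
FileStream stream = File.OpenRead("data.txt");
try
{
    ProcessData(stream);
}
finally
{
    if (stream != null)
    {
        ((IDisposable) stream).Dispose();
    }
}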

Likewise the chapter which deals with collections, including generic ones, comes before the chapter on generics. If I were a reader who didn’t know generics already, I think I’d get very confused reading about ICollection<T> without knowing what the T meant. Now don’t get me wrong: ordering material so that you don’t get “circular references” is often hard if not impossible. I just think it could have been done better here.

Aiming too deep?

It’s not like me to criticise a book for being too deep, but I’m going to make an exception here. Every so often, I came away from a topic thinking that it would have been better covered a little bit more lightly. Sometimes this was because a running example became laborious and moved a long way from anything you were actually likely to want to do in real life. The sections on “borrowing from functional programming” and memoization/currying/anonymous recursion felt guilty of this. It’s not that they’re not interesting topics, but the examples picked didn’t quite work for me.

The other problem with going deep is that you really, really need to get things right – because your readers are less likely to spot the mistakes. I’ll give three examples here:

  • Trey works hard on a number of occasions to avoid boxing, and points it out each time. Without any experience in performance tuning, you’d be forgiven for thinking that boxing is the primary cause of poor performance in .NET applications based on this book. While I agree that it’s something to be avoided where it’s possible to do so without bending the design out of shape, it doesn’t deserve to be laboured as much as it is here. In particular, Trey gives an example of a complex number struct and how he’s written appropriate overloads etc to avoid boxing. Unfortunately, to calculate the magnitude of the complex number (used to implement IComparable in a manner which violates the contract, but that’s another matter) he uses Math.Pow(real, 2) + Math.Pow(img, 2). Using a quick and dirty benchmark (see the sketch after this list), I found that using real * real + img * img instead of Math.Pow made far, far more difference than whether or not the struct was boxed. (I happen to think it’s more readable code too, but never mind.) There was nothing wrong with avoiding the boxing, but in chasing the small performance gains, the big ones were missed.

  • In the chapter on threading, there are some demonstrations of lock-free programming (before describing locking, somewhat oddly – and without describing the volatile modifier). Now, personally I’d want to discourage people from attempting lock-free programming at all unless they’ve got a really good reason (with evidence!) to support that decision – but if you’re going to do it at all, you need to be hugely careful. One of the examples basically has a load of threads starting and stopping, updating a counter (correctly) using Interlocked.Increment/Decrement. Another thread monitors the count and periodically reports it – but unfortunately it uses this statement to do it:

    threadCount = Interlocked.Exchange(ref numberThreads, numberThreads);

    The explanation states: “Since the Interlocked class doesn’t provide a method to simply read an Int32 value in an atomic operation, all I’m doing is swapping the numberThreads variable’s value with its own value, and, as a side effect, the Interlocked.Exchange method returns to me the value that was in the slot.” Well, not quite. It’s actually swapping the numberThreads variable’s value with a value evaluated at some point before the method call. If you rewrite the code like this, it becomes more obviously wrong:

    int tmp = numberThreads;
    Thread.Sleep(1000); // What could possibly happen during this time, I wonder?
    threadCount = Interlocked.Exchange(ref numberThreads, tmp);

    The call to Thread.Sleep is there to make it clear that numberThreads can very easily change between the initial read and the call to Interlocked.Exchange. The correct fix to the code is to use something like this:

    threadCount = Interlocked.CompareExchange(ref numberThreads, 0, 0);

    That sets numberThreads atomically to the value 0 if (and only if) its value is already 0 – in other words, it will never actually change the value, just report it. Now, I’ve laboured the explanation of why the code is wrong because it’s fairly subtle. Obvious errors in books are relatively harmless – subtle ones are much more worrying.

  • As a final example for this section, let’s look at iterator blocks. Did you know that any parameters passed to methods implemented using iterator blocks become public fields in the generated class? I certainly didn’t. Trey pointed out that this meant they could easily be changed with reflection, and that could be dangerous. (After looking with Reflector, it appears that local variables within the iterator block are also turned into public fields.) Now, leaving aside the fact that this is hugely unlikely to actually bite anyone (I’d be frankly amazed to see it as a problem in the wild), the suggested fix is very odd.

    The example Trey gives is where originally a Boolean parameter is passed into the method, and used in two places. Oh no! The value of the field can be changed between those two uses, which could lead to problems! True. The supposed fix is to wrap the Boolean value in an immutable struct ImmutableBool, and pass that in instead. Now, why would that be any better? Certainly you can’t change the value within the struct – but you can easily change the field’s value to be a completely different instance of ImmutableBool. Indeed, the breakage would involve exactly the same code, just changing the type of the value. The other train of thought which suggests that this approach would fail is that bool is already immutable, so it can’t be the mutability of the type of the field that causes problems. I’m sure there are much more useful things that Trey could have said in the two and a half pages he spent describing a broken fix to an unimportant problem.
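For the Math.Pow point above, this is the sort of quick and dirty benchmark I mean (my own sketch; the exact numbers will vary by machine, but the multiplication version wins by a wide margin):

using System;
using System.Diagnostics;

const int Iterations = 100000000;
double real = 3.5, img = 4.5;
double sum = 0;

Stopwatch watch = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
    sum += Math.Pow(real, 2) + Math.Pow(img, 2);
}
watch.Stop();
Console.WriteLine("Math.Pow: {0}ms", watch.ElapsedMilliseconds);

watch = Stopwatch.StartNew();
for (int i = 0; i < Iterations; i++)
{
    sum += real * real + img * img;
}
watch.Stop();
Console.WriteLine("Multiplication: {0}ms", watch.ElapsedMilliseconds);

// Use sum afterwards so the JIT can't optimise the loops away entirely.
Console.WriteLine(sum);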

Sorry, that was getting ranty for a bit… but I hope you understand why. Before concluding this review, let’s look at one chapter which is somewhat different to the rest, and which I’ve mentioned before:

In Search of C# Canonical Forms (aka “Design and Implementation Guidelines” :)

I’d been looking forward to this part of the book. I’m always interested in seeing what other people think the most important aspects of class design are. The book doesn’t go into much detail about object orientation (in this chapter, anyway – there’s plenty scattered through the book) but concentrates on core interfaces you might implement, etc. That’s fine. I’m still waiting for a C# book that is truly on a par with Effective Java (I have the second edition waiting to be read at work…) but I wasn’t expecting it all to be here. So, was this chapter worth the wait?

Somewhat. I was very glad to see that the first point around reference types was “Default to sealed classes” – I couldn’t agree more, and the arguments were well articulated. Many other guidelines were either entirely reasonable or at least I could go either way on. There were a few where I either disagreed or at least would have put things differently:

  • Implementing cloning with copy constructors: one point about cloning which wasn’t mentioned is that (to quote MSDN) “The resulting clone must be of the same type as or a compatible type to the original instance.” The suggested implementation of Clone in the book is to use copy constructors. This means that every subclass must override Clone to call its own copy constructor, otherwise the instance returned will be of the wrong type. MemberwiseClone always creates an instance of the same type. Yes, it means the constructor isn’t called – but frankly the example given (performing a database lookup in the constructor) is a pretty dodgy cloning scenario in the first place, in my view. If I create a clone and it doesn’t contain the same data as the original, there’s something wrong. Having said that, the caveats Trey gives around MemberwiseClone are all valid in and of themselves – we just disagree about their importance. The advice to not actually implement ICloneable in the first place is also present (and well explained).
  • Implementing IDisposable: Okay, so this is a tough topic, but I was slightly disappointed to see the recommendation that “it’s wise for any objects that implement the IDisposable interface to also implement a finalizer […]” Now admittedly on the same page there’s the statement that “In reality, it’s rare that you’ll ever need to write a finalizer” but the contradiction isn’t adequately resolved. A lot of people have trouble understanding this topic, so it would have been nice to see really crisp advice here. My 20 second version of it is: “Only implement a finalizer if you’re holding on to resources which won’t be cleaned up by their own finalizers.” (There’s a sketch of what that looks like after this list.) That actually cuts out almost everything, unless you’ve got an IntPtr to a native handle (in which case, use SafeHandle instead).
    • As a side note, Trey repeatedly claims that “finalizers aren’t destructors” which irks me somewhat as the C# spec (the MS version, anyway) uses the word “destructor” exclusively – a destructor is the way you implement a .NET finalizer in C#. It would be fine to say “destructors in C# aren’t deterministic, unlike destructors in C++” but I think it’s worth acknowledging that the word has a valid meaning in the context of C#. Anyway…
  • Implementing equality comparisons: while this was largely okay, I was disappointed to see that there wasn’t much discussion of inheritance and how it breaks equality comparisons in a hard-to-fix way. There’s some mention of inheritance, but it doesn’t tackle the issue I think is thorniest: If I’m asking one square whether it’s equal to another square, is it enough to just check for everything I know about squares (e.g. size and position)? What about if one of the squares is actually a coloured square – it has more information than a “basic” square. It’s very easy to end up with implementations which break symmetry, simply because the question isn’t well-defined. You effectively need to be asking “are these two objects equal in <this> particular aspect” – but you don’t get to specify the aspect. This is an example where I remember Effective Java (first edition) giving a really thorough explanation of the pitfalls and potential implementations. The coverage in Accelerated C# 2008 is far from bad – it just doesn’t meet the gold standard. Arguably it’s unfair to ask another book to compete at that level, when it’s trying to do so much else as well.
  • Ordering: I mentioned earlier on that the complex number class used for a boxing example failed to implement comparisons appropriately. Unfortunately it’s used as the example specifically for “how to implement IComparable and IComparable<T>” as well. To avoid going into too much detail, if you have two instances x and y such that x != y but x.Magnitude == y.Magnitude, you’ll find x.CompareTo(y) == y.CompareTo(x) (but with a non-zero result in both cases). What’s needed here is a completely different example – one with a more obvious ordering.
  • Value types and immutability: Okay, so the last bullet on the value types checklist is “Should this struct be immutable? […] Values are excellent candidates to be immutable types” but this comes after “Need boxed instances of the value? Implement an interface to do so […]” No! Just say no to mutable value types to start with! Mutable value types are bad, bad, bad, and should be avoided like the plague. There are a very few situations where it may be appropriate, but to my mind any advice checklist for implementing structs should make two basic points:
    • Are you sure you really wanted a struct in the first place? (They’re rarely the right choice.)
    • Please make it immutable! Pretty please with a cherry on top? Every time a struct is mutated, a cute kitten dies. Do you really want to be responsible for that?
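To spell out the 20 second version of the IDisposable guidance from the list above, here’s a minimal sketch (mine, not from the book): a type wrapping only managed resources needs Dispose but no finalizer, because the wrapped resource already has its own cleanup path.

using System;
using System.IO;

public sealed class LogWriter : IDisposable
{
    private readonly StreamWriter writer;

    public LogWriter(string path)
    {
        writer = new StreamWriter(path);
    }

    public void Write(string message)
    {
        writer.WriteLine(message);
    }

    // No finalizer: the StreamWriter (and the FileStream underneath it) has
    // its own finalization path if Dispose is never called, so all this type
    // needs to do is dispose it promptly.
    public void Dispose()
    {
        writer.Dispose();
    }
}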

Conclusion

At the risk – nay, certainty – of repeating myself, I’m going to say that I like the book despite the (sometimes subjective) flaws pointed out above. As Shakespeare wrote in Julius Caesar, "The evil that men do lives after them; the good is oft interred with their bones." So it is with book reviews – it’s a lot easier to give specific examples of problems than it is to report successes – but the book does succeed, for the most part. Perhaps the root of almost all my reservations is that it tries to do too much – I’m not sure whether it’s possible to go into that much detail and cater for those with little or no previous C# experience (even with Java/C++) and keep to a relatively slim volume. It was a very lofty goal, and Trey has done very well to accomplish what he has. I would be interested to read a book by him (and hey, potentially even collaborate on it) which is solely on well-designed classes and libraries.

In short, I recommend Accelerated C# 2008, with a few reservations. Hopefully you can judge for yourself whether my reservations would bother you or not. I think overall I slightly prefer C# 3.0 in a Nutshell, but the two books are fairly different.

Reaction

I sent this to Trey before publishing it, as is my custom. He responded to all my points extremely graciously. I’m not sure yet whether I can post the responses themselves – stay tuned for the possibility, at least. My one problem with reviewing books is that I end up in contact with so many other authors who I’d like to work with some day, and that number has just increased again…

Judging a book by its cover (or title)

I’ve ranted about versioning before (and indeed in C# in Depth). I still believe that Microsoft didn’t do the world any favours when they introduced a relatively minor set of changes (just libraries, albeit important ones) with .NET 3.0 and a more major set of changes (languages, LINQ, core library improvements) with .NET 3.5. Using 2.5 and 3.0 would have made more sense, IMO. But never mind.

The fact is, people are confused about what version number applies to what. A number of people claim to be using C# 3.5 when they mean either C# 3.0 or .NET 3.5. (For a quick reference of what’s actually what, see my article on the issue.)

Okay, so far it’s the fault of Microsoft for being confusing and the fault of developers for not keeping up. Both of these are far more forgivable in my view than being flat out wrong, as many books are at the moment. I don’t believe this is any indication of the quality of the book itself (Accelerated C# 2008 is pretty good so far, for example) but I still think it’s pretty awful to make a title so confusing. So, here are some bad titles and others which use version numbers appropriately. (I’ve left out titles like Head First C# and C# in Depth which don’t specify version numbers.)

Bad

  • Professional C# 2008 (Wrox)
  • Pro C# 2008 and the .NET 3.5 Platform (Apress) (only partly incorrect, of course)
  • Murach’s C# 2008 (Mike Murach and Associates)
  • Accelerated C# 2008 (Apress)
  • Illustrated C# 2008 (Apress)
  • Pro LINQ: Language Integrated Query in C# 2008 (Apress)
  • Pro ASP.NET 3.5 in C# 2008 (Apress) (again, only partially incorrect)
  • Beginning C# 2008 (Apress)
  • Beginning C# 2008 Databases (Apress)

Good

  • C# 3.0 in a Nutshell (O’Reilly)
  • Programming C# 3.0 (O’Reilly)
  • C# 3.0 Cookbook (O’Reilly)
  • C# 3.0 Design Patterns (O’Reilly)
  • C# 3.0 Pocket Reference (O’Reilly)
  • Pro C# with .NET 3.0 (Apress)
  • Beginning C# 3.0 (Wrox)
  • C# 3.0 Unleashed: With the .NET Framework 3.5 (Sams)

Okay

This is the “well, just about” list – because these titles refer to “Microsoft Visual C# 2008” rather than C# 2008, they’re naming the IDE instead of the language. I think it’s better to name a book after the language instead of the tool you use to write in the language, personally…

  • Beginning Microsoft Visual C# 2008 (Wrox)
  • Microsoft Visual C# 2008 Step By Step (Microsoft)
  • Programming Microsoft Visual C# 2008 (Microsoft)
  • Microsoft Visual C# 2008 Express Edition: Build a Program Now! (Microsoft)

Notice a pattern? If anyone at Apress is reading this (unlikely, I know) – there’s no such thing as “C# 2008”.

Rant over for the moment. With any luck I might be able to finish reading Accelerated C# 2008 fairly soon, and give a proper book review.

The trouble with book reviews

I’m currently reading two .NET books: Accelerated C# 2008 (Trey Nash) and Concurrent Programming on Windows (Joe Duffy). I will, in due course, post reviews here. However, the very act of thinking about the reviews has made me consider the inevitable inadequacies.

There tend to be two kinds of people reviewing technical books: those who’ve bought the book as a "regular punter", aiming to learn something new, and those who already know about the subject matter but are reading the book mostly to review it. I realise there are people in-between (for whom the problems below aren’t such an issue) but these are the two camps this post addresses.

The purpose of a technical book is usually to impart information and wisdom. I would have left it at just information, but things like best practices don’t really count as "facts" – they are opinions and should be treated as such. So, there are two qualities that I look for in a book:

  • Is it accurate? Are the facts correct, and is the wisdom genuinely wise?
  • Is it a good teaching tool? How well is the information/wisdom transferred from author to reader?

I think it’s worth breaking these up, although there is significant overlap.

Accuracy

As I’ve noted before, I’m a stickler for accuracy. If I can spot a significant number of inaccuracies in the text, I find it hard to trust the rest. Now I generally don’t include grammatical errors, printing mistakes etc in this. While they make the book harder to read, they don’t typically leave me with a mistaken impression about the technology I’m trying to learn. There’s a blurring of the medium and the message when a book may be technically just about accurate, but still leaves an incorrect impression.

Now, the reader who has bought a book primarily to learn something new has little hope of judging accuracy. They can spot typos, and if the book is inconsistent or simply implausible that can become obvious – but subtle errors are likely to elude them. Just because an error is subtle doesn’t mean it’s unimportant, however. I know a reasonable amount about threading in .NET, but there’s a lot of Joe Duffy’s book which is new to me. He could have made dozens of mistakes in the text around Win32 threading, and I’d never know until it bit me. (For what it’s worth, I very much doubt that there are dozens of mistakes in the book.)

A reader who already knows the subject matter thoroughly is much more likely to spot the mistakes. However, they’re unlikely to be much good at judging the other major criterion…

Teaching efficacy

I can’t really remember much about how I learned to program, other than that it was over the course of several years. I started off with Basic on the ZX Spectrum, then moved on to C, then Java, then C#. Each experience built on the previous one. The way in which I learned C# wouldn’t have suited a non-Java programmer nearly as well as it suited me.

How can I possibly judge how well a book will teach a subject I already know? I can make some educated guesses about particularly confusing passages, and potentially critique the ordering of material (and indeed its inclusion/exclusion) but fundamentally it’s impossible to gauge it properly.

The people who don’t know the topic beforehand are likely to have a better idea, but it will still be flawed. In particular, you won’t know how well the material has sunk in until you’ve given yourself enough time to forget it. You won’t know how suitable the advice (wisdom) was until you’ve tried to follow it. You won’t know how complete the coverage is until you’ve used the technology in anger, preferably over the course of several projects. Even then it’s easy to miss stuff: if no-one on your team knew about iterator blocks and the C# book you were reading didn’t mention them, how would you know what you were missing?

Who should you trust?

This post has had a pretty depressing mood so far. That reflects my irritation with the whole topic – which isn’t to say I don’t enjoy reviewing books. I just have doubts as to their use. I do, however, have a few positive notes to end on, as well as some fairly self-evident advice:

  • If everyone likes a book, it’s probably good. Likewise unanimous disapproval is rarely a good sign.
  • When judging reviews, see if you can work out the context. Is the reviewer reading from a perspective of knowledge, or learning? If they’re criticising accuracy, they probably know what they’re talking about – but may not be a good judge of the style and teaching technique. If the review is mostly saying, "I learned C# from scratch in 20 minutes with the help of this fabulous book!" then you can guess that they at least believe they had a positive learning experience, but you should treat anything they say about accuracy and completeness with care.
  • Blogs tend to have more "expert" reviewers than ecommerce sites – although often bloggers will be encouraged to post reviews to Amazon as well.
  • Look for reviews which give specific praise/criticism. In particular if they give examples of teaching techniques, you will have more of an idea as to whether it’ll suit you. Reviews which basically say "I loved it!" or "That’s rubbish!" aren’t terribly informative.

On that note, I should probably stop. That’s another train journey gone that I should probably have spent actually reading… ah well. Please comment if you have other suggestions with regards to reviewing – particularly if it could help me to review books in a more useful way in the future.

Guest post: Joe Albahari reviews C# in Depth

Joe Albahari, co-author of the excellent C# 3.0 in a Nutshell (previously reviewed here) kindly agreed to review C# in Depth. Not only has he provided the review below, but he also supplied several pages of notes made while he was reading it. Many of those notes have been incorporated into the C# in Depth notes page – it’s always good to include thoughtful feedback. (And I always welcome more, hint hint.)

Without further ado, here’s Joe’s review.

C# in Depth: Review

After having been invited to review this book by two people at Manning—as well as Jon himself—I figure it’s about time I came forward! Bear in mind that I’m not a typical reader: I’m an author, and this makes me more critical than most. This is especially true given that I wrote C# 3.0 in a Nutshell with a coauthor (imagine two people constantly searching for ways to improve each other’s work!). So I will do my best to compensate and strive to be fair. Please post a comment if you feel I’ve missed the mark!

Scope

While most other C# books cover the language, the CLR and at least some aspects of the Framework, C# in Depth concentrates largely on just the language. You won’t find discussions of memory management, assemblies, streams and I/O, security, threading, or any of the big APIs like WPF or ASP.NET. This is good in that it doesn’t duplicate the books already out there, as well as giving more space to the language.

You might expect that a book focusing on the C# language itself would cover all of it. But interestingly, the book covers only about a quarter of the C# language, namely the features new to C# 2 and C# 3. This sets its target audience: programmers who already know C# 1, but have not yet switched to C# 2 and 3. This creates a tight focus, allowing it to devote serious space to topics such as generics, nullable types, iterators and lambda expressions. It’s no exaggeration to say that this book covers less than one tenth of the ground of most other C# books, but gives that ground ten times as much attention.

Organization and Style

The book is divided into three parts:

  • Preliminaries (delegates and the type system)
  • Features new to C# 2.0
  • Features new to C# 3.0

I like this organization: it presents topics in an order roughly similar to how I teach when giving tutorials on LINQ—starting with the foundations of delegates and generics, before moving on to iterators and higher-order functions, and then finally LINQ. Sometimes the routes are a little circuitous and involve some huffing and puffing, but the journey is worthwhile and helps to solidify concepts.

C# in Depth is a tutorial that gradually builds one concept upon another and is designed primarily for sequential reading. The examples don’t drag on over multiple sections, however, so you can jump in at any point (assuming you understand the preceding topics). The examples are all fairly short, too, which is very much to my taste. In fact, I would say Jon and I think very much alike: when he expresses an opinion, I nearly always agree wholeheartedly.

A big trap in writing tutorials is assuming knowledge of topics that you teach later. This book rarely falls victim to this. The writer is also consistent in his use of terminology—and sticks with the C# Language Specification, which I think sets a good example to all authors. Jon is not sloppy with concepts and is careful in his wording to avoid misinterpretation. One thing that comes through is that Jon really understands the material deeply himself.

If I were to classify this book as beginner/intermediate/advanced, I’d say intermediate-to-advanced. It’s quite a bit more advanced than, say, Jesse’s tutorial “Programming C#”.

The layout of the book is pleasing—I particularly like the annotations alongside the code listings.

Content

In the first section, “Preparing for the Journey,” the book does cover a few C# 1 topics, namely delegates and C#’s type system. Jon’s handling of these topics is excellent: his discussion of static, explicit and safe typing is clear and helpful, as is the section on value types versus reference types. I particularly liked the section “Dispelling Myths”—this is likely to be of use even to experienced developers. This chapter, in fact, leaves the reader pining for more advanced C# 1 material.

The C# 2 features are very well covered. The section on generics includes such topics as their handling by the JIT compiler, the subtleties of type inference, a thorough discussion on constraints, covariance/contravariance limitations, and comparisons with Java’s generics and C++’s templates. Nullable types are covered similarly well, with suggested patterns of use, as are anonymous methods and iterators.

The C# 3 features are also
handled well. I like how Jon introduces expression trees—first building
them programmatically, and then showing how the compiler provides a
shortcut via lambda expressions. The book covers query expressions and
the basics of LINQ, and includes a brief explanation of each of the
standard query operators in an appendix. There’s also a chapter called
“LINQ Beyond Collections” which briefly introduces the LINQ to SQL,
LINQ to DataSet and LINQ to XML APIs.
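
To give a flavour of that contrast (this sketch is mine, not lifted
from the book, and the names are purely illustrative), here is the
same simple tree built both ways:

    using System;
    using System.Linq.Expressions;

    class ExpressionTreeSketch
    {
        static void Main()
        {
            // The "manual" route: building the tree with the Expression factory methods.
            ParameterExpression x = Expression.Parameter(typeof(int), "x");
            Expression<Func<int, int>> manual =
                Expression.Lambda<Func<int, int>>(
                    Expression.Add(x, Expression.Constant(1)), x);

            // The shortcut: the compiler builds an equivalent tree from a lambda expression.
            Expression<Func<int, int>> viaLambda = y => y + 1;

            Console.WriteLine(manual.Compile()(41));    // 42
            Console.WriteLine(viaLambda.Compile()(41)); // 42
        }
    }

Seeing the manual route first makes it obvious just how much work the
lambda shortcut saves, which is presumably why Jon teaches it in that
order.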

Throughout the book, Jon goes
to some lengths to explain not just “what”, but “why”. This
book isn’t for people who want to get in and out quickly so they can
get their job done and out of the way—it’s for people who enjoy
working elegantly with their tools, through a rich understanding of
the language’s background, subtleties and nuances.

Of course, digesting all this
is a bit of work (Chapter 3’s summary opens with the word “Phew!”).
Despite this, I think Jon does a good job at explaining difficult things
well. I don’t think I’ve seen any IL listings in the book, which is
a good sign in general. I’m always wary when an author, in explaining
a C# concept, says, “to understand XYZ, we must examine the IL”.
I take issue with this: rarely, if ever, does one need to look at IL
to understand C#, and doing so creates unnecessary complication by choosing
the wrong level of abstraction. That isn’t to say that looking at IL isn’t
useful for a deeper understanding of the CLR—but only after first
teaching C# concepts independently of IL.

What’s Missing

It was one of Jon’s design criteria not to build a tome, but instead
to write a small(ish) book that complements
rather than replaces books such as C# 3.0 in a Nutshell. Most things
missing from C# in Depth are consistent with its focus (such as the
CLR, threading, the .NET Framework, and so on). The fact that C# in Depth excludes
the features of C# that were introduced prior to version 2 is a good
thing if you’re looking for a “delta” book, although, of course,
it makes it less useful as a language reference.

The book’s treatment of LINQ
centres largely on LINQ to Objects. If you’re planning on learning
C# 3.0 so that you can query databases through LINQ, the book’s focus
is not ideal, if read in isolation. I personally prefer the approach
of covering “remote” query architecture earlier and in more detail
(in conjunction with the canonical API, LINQ to SQL) – so that when
it comes time to teach query operators such as SelectMany, GroupBy and
Join, they can be demonstrated in the context of both local and database
queries. I also strive, when writing on LINQ, to cover enough querying
ground that readers can “reproduce” their SQL queries in LINQ—even
though it means having to get sidetracked with API practicalities. Of
course, getting sidetracked with API practicalities is undesirable for
a language-centric book such as C# in Depth, and so the LINQ to Objects
focus is understandable. In any case, reading Chapters 8-10 of C# 3.0
in a Nutshell would certainly fill in the gaps. Another complementary
book would be Manning’s LINQ in Action (this book is well-reviewed
on Amazon, though I’ve not yet read it).

Summary

This book is well written,
accurate and insightful, and complements nearly every other book out
there. I would recommend it to anyone wanting a thorough “inside”
tutorial on the features new to C# 2 and 3.

Programming “in” a language vs programming “into” a language

I’m currently reading Steve McConnell’s Code Complete (for the first time – yes, I know that’s somewhat worrying) and there was one section that disturbed me a little. For those of you with a copy to hand, it’s in section 4.3, discussing the difference between programming in a language and programming into a language:

Programmers who program “in” a language limit their thoughts to constructs that the language directly supports. If the language tools are primitive, the programmer’s thoughts will also be primitive.

Programmers who program “into” a language first decide what thoughts they want to express, and then they determine how to express those thoughts using the tools provided by their specific language.

Now don’t get me wrong – I can see where he’s coming from, and the example he then provides (Visual Basic – keeping the forms simple and separating them from business logic) is fine, but he only seems to give one side of the coin. Here’s a different – and equally one-sided – way of expressing the same terms:

Programmers who program “in” a language understand that language’s conventions and idioms. They write code which integrates well with other libraries, and which can be easily understood and maintained by other developers who are familiar with the language. They benefit from tools which have been specifically designed to aid coding in the supported idioms.

Programmers who program “into” a language will use the same ideas regardless of their target language. If their style does not mesh well with the language, they will find themselves fighting against it every step of the way. It will be harder to find libraries supporting their way of working, and tools may well prove annoying. Other developers who come onto the project later and who have experience in the language but not the codebase will find it hard to navigate and may well accidentally break the code when changing it.

There is a happy medium to be achieved, clearly. You certainly shouldn’t restrict your thinking to techniques which are entirely idiomatic, but if you find yourself wanting to code in a radically different style to that encouraged by the language, consider changing language if possible!

If I were attacking the same problem in C# 1 and C# 3, I could easily end up with radically different solutions. Some data extraction using LINQ in a fairly functional way in C# 3 would probably be better solved in C# 1 by losing some of the functional goodness than by trying to back-port LINQ and then use it without the benefit of lambda expressions or even anonymous methods.
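
To make that concrete, here’s a made-up sketch (the data and names are purely illustrative) of the same extraction in the two styles – the second half is written the way I would in C# 1, just using C# 3 syntax here so it compiles in one file:

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Linq;

    class ExtractionSketch
    {
        static void Main()
        {
            string[] words = { "apple", "banana", "cherry", "fig", "kiwi" };

            // C# 3: declarative extraction with LINQ to Objects.
            IEnumerable<string> shortWords =
                words.Where(word => word.Length <= 4)
                     .Select(word => word.ToUpper());

            foreach (string word in shortWords)
            {
                Console.WriteLine(word);
            }

            // C# 1 style: rather than back-porting LINQ, I'd lose some of the
            // functional goodness and write the loop imperatively.
            ArrayList results = new ArrayList();
            foreach (string word in words)
            {
                if (word.Length <= 4)
                {
                    results.Add(word.ToUpper());
                }
            }
        }
    }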

Accents and Conventions

That’s just between different versions of the same language. Between different actual languages, it can get much worse. If you’ve ever seen Java code written in a C++ style or vice versa, you’ll know what I mean. I’ve previously referred to this in terms of speaking a language with an accent – you can speak C# with a Java accent just as you can speak French with an English accent. Neither is pleasant.

At the lowest level, this is likely to be about conventions – and I’m pretty sure that when Steve writes “Invent your own coding conventions, standards, class libraries, and other augmentations” he doesn’t actually mean us to do it in a gratuitous fashion. It can be worth deviating from the “platform favoured” conventions sometimes, particularly if those differences are invisible to clients, but it should always be done with careful consideration. In a Java project I worked on a few years ago, we took the .NET naming conventions for interfaces (an I prefix) and constants (CamelCasing instead of SHOUTY_CAPS). Both of these made the codebase feel slightly odd, particularly where Java constants were used near our constants – but I personally found the benefits to be worth the costs. Importantly, the whole team discussed it before making any decisions.

Design Patterns

At a slightly higher level, many design patterns are just supported much, much better by some languages than others. The iterator pattern is a classic example. Compare the support for it from Java 6 and C# 2. On the “client” side, both languages have specific syntax: the enhanced for loop in Java and the foreach loop in C#. However, there is one important difference: if the iterator returned by GetEnumerator implements IDisposable (which the generic form demands, in fact) C# will call Dispose at the end of the loop, no matter how that occurs (reaching the end of the sequence, breaking early, an exception being thrown, etc). Java has no equivalent of this. Imagine that you want to write a class to iterate over the lines in a file. In Java, there’s just no safe way of representing it: you can make your iterator implement Closeable but then callers can’t (safely) use the enhanced for loop. You can make your code close the file handle when it reaches the end, but there’s no guarantee that will happen.
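
To show what I mean about the Dispose guarantee, here’s roughly what the compiler turns a foreach over a generic sequence into (a simplified sketch, not the exact expansion from the specification):

    using System;
    using System.Collections.Generic;

    class ForeachExpansionSketch
    {
        static void Main()
        {
            IEnumerable<string> lines = new List<string> { "first", "second" };

            // foreach (string line in lines) { Console.WriteLine(line); }
            // is roughly equivalent to:
            using (IEnumerator<string> enumerator = lines.GetEnumerator())
            {
                while (enumerator.MoveNext())
                {
                    string line = enumerator.Current;
                    Console.WriteLine(line);
                }
            } // Dispose runs here however the loop exits: end of data, break, or exception.
        }
    }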

Then consider the “server” side of the iterator – the code actually providing the data. Java is like C# 1 – there’s no specific support for implementing an iterator. In C# 2 and above, iterator blocks (i.e. methods with yield statements) make life much, much easier. Writing iterators by hand can be a real pain. Reading a file line by line isn’t too bad, leaving aside the resource lifetime issue – but the complexity can balloon very quickly. Off-by-one errors are really easy to introduce.

So, if I were tackling a project which required reading text files line by line in various places, what would I do? In Java, I would take the reasonably small hit of a while loop in each place I needed it. In C# I’d write a LineReader class (if I didn’t already have one!) and use a more readable foreach loop. The contortions involved in introducing that idea into Java just wouldn’t be worth the effort.
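
For the curious, here’s a minimal sketch of the kind of LineReader I mean (the details are illustrative rather than canonical, and “example.txt” is just a placeholder). The iterator block keeps the file inside a using statement, so disposing the enumerator – which the foreach expansion above guarantees – closes the file however the loop exits:

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.IO;

    // A minimal LineReader sketch. The iterator block defers opening the file
    // until iteration begins, and the using statement closes it when the
    // enumerator is disposed - which foreach guarantees.
    public sealed class LineReader : IEnumerable<string>
    {
        private readonly string path;

        public LineReader(string path)
        {
            this.path = path;
        }

        public IEnumerator<string> GetEnumerator()
        {
            using (StreamReader reader = File.OpenText(path))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    yield return line;
                }
            }
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }

    class LineReaderDemo
    {
        static void Main()
        {
            // "example.txt" is a placeholder file name for the sketch.
            foreach (string line in new LineReader("example.txt"))
            {
                Console.WriteLine(line);
            }
        }
    }

The Java equivalent would need the caller to remember to close something explicitly, which is exactly the contortion I’d rather avoid.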

At a much higher level, we get into whole programming styles and paradigms. If your natural inclination is to write imperative code, you’re likely to create a mess (or get very frustrated) in a functional language. If the problem really does call for a functional language, find someone else to help you think in a more functional way. If the problem suits imperative programming just as well as it does functional programming, see if you can change the environment to something more familiar.

Conclusion

I’m not suggesting that Steve’s point isn’t valid – but he’s done his readers a disservice by only presenting one side of the matter. Fortunately, the rest of the book (so far) is excellent and humbling – to such a degree that this minor quibble stuck out like a sore thumb. In a book which had more problems, I would probably barely have even noticed this one.

There’s another possibility, of course – I could be completely wrong; maybe I’ve been approaching problems from a restrictive viewpoint all this time. How about you?