Book review: Accelerated C# 2008 by Trey Nash

Time for another book review, and this time it’s due to a recommendation from a reader who has this one, C# in Depth and Head First C#.

Introduction and disclaimer

My normal book review disclaimer applies, but probably more so than ever before. Yes, Accelerated C# 2008 is a competitor to C# in Depth. They’re different in many ways, but many people would no doubt be in the target audience for both books. If you meet that criterion, please be aware that as the author of C# in Depth I can’t possibly be 100% objective when reviewing another C# book. That said, I’ll try to justify my opinions everywhere I can.

Target audience and content overview

Accelerated C# 2008 is designed to appeal to existing developers with experience in an OO language. As one of the Amazon reviews notes, you may struggle somewhat if you don’t have any .NET experience beforehand – while it should be possible to read it knowing only Java or C++, there are various times where a certain base level of knowledge is assumed and you’ll want to refer to MSDN for some background material. If you come at the book with no OO experience at all, I expect you’ll have a hard time. While chapter 4 does cover the basics of OO in .NET (classes, structs, methods, properties etc), this isn’t really a beginner’s book.

In terms of actual content covered, Accelerated C# 2008 falls somewhere between C# in Depth (almost purely language) and C# 3.0 in a Nutshell (language and then core libraries). It doesn’t attempt to cover all the core technologies (IO, reflection, security, interop etc are absent) but it goes into detail beyond the C# language when it comes to strings, exceptions, collections, threading and more. As well as purely factual information, there’s a lot of guidance as well, including a whole chapter entitled “In Search of C# Canonical Forms.”

General impressions

I’d like to make it clear to start with that I like the book. I have a number of criticisms, none of which I’m making up for the sake of being critical – but that in no way means it’s a bad book at all. It’s very unlikely that you know everything in here (I certainly didn’t) and the majority of the guidance is sound. The code examples are almost always self-contained (a big plus in my view) and Trey’s style is very readable. Where there are inaccuracies, they’re usually pretty harmless, and the large amount of accurate and insightful material makes up for them.

Just as I often compare Java to C# in my book, so Trey often compares C++ to C# in his. While my balance of C# to C++ knowledge is such that these comments aren’t particularly useful to me, I can see them being good for a newcomer to C# from a C++ background. I thought there might have been a few too many comparisons (I understood the point about STL and lambdas/LINQ the first time round…) but that’s just a minor niggle.

Where C# in Depth is primarily a “read from start to finish” book and C# 3.0 in a Nutshell is primarily a reference book (both can be used the other way, of course), Accelerated C# 2008 falls between the two. It actually achieves the best of both worlds to a large extent, which is an impressive feat. The ordering could be improved (more on this later on) but the general feeling is very good.

One quick word about the size of the book in terms of content: if you’re one of those people who judges the amount of useful content in a book by its page count, it’s worth noting that the font in this book is pretty small. I would guess that it packs about 25% more text per page than C# in Depth does, taking its “effective” page count from around 500 to 625. Also, the content is certainly meaty – you’re unlikely to find yourself skimming over loads of simple stuff trying to get to the good bits. Speaking of “getting to the good bits”, let’s tackle my first significant gripe.

Material organisation

If you look at the tables of contents for Accelerated C# 2008 and Accelerated C# 2005, you’ll notice that the exact same chapter titles in the 2005 edition carry over in the same order in the 2008 edition. There are three extra chapters in the new edition, covering extension methods, lambda expressions and LINQ. That’s not to say that the content of the “duplicate” chapters is the same as before – C# 3.0 features are introduced in the appropriate place within existing chapters. In terms of ordering the chapters, I think it would have been much more appropriate to keep the last chapter of the old edition – “In Search of C# Canonical Forms” – as the last chapter of the new edition. Apart from anything else, that would allow it to include hints and tips involving the new C# 3 features which are currently covered later. It really feels like a “wrapping up” chapter, and deserves to be last.

That’s not the only time that the ordering felt strange, however. Advanced topics (at least ones which feel advanced to me) are mixed in with fairly basic ones. For instance, in the chapter on exceptions, there’s a section about “exception neutrality” which includes details about constrained execution regions and critical finalizers. All interesting stuff – even though I wish there were a more prominent warning saying, “This is costly to both performance and readability: only go to these lengths when you really, really need to.” However, this comes before a section about using try/finally blocks and the using statement to make sure that resources are cleaned up however a block is exited. I can’t imagine anyone who knows enough C# to take in the exception neutrality material also not knowing about try/finally or the using statement (or how to create your own custom exception class, which comes between these two topics).
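(For readers coming to C# from Java or C++, the relevant point is simply that the using statement is compiler shorthand for a try/finally block. Here’s a minimal sketch of my own – not an example from the book – with a purely illustrative file name; it assumes using directives for System and System.IO:)

    // The using statement below...
    using (StreamReader reader = new StreamReader("data.txt")) // "data.txt" is just illustrative
    {
        Console.WriteLine(reader.ReadLine());
    }

    // ...expands into roughly this try/finally, so Dispose is called however the block exits:
    StreamReader reader2 = new StreamReader("data.txt");
    try
    {
        Console.WriteLine(reader2.ReadLine());
    }
    finally
    {
        if (reader2 != null)
        {
            ((IDisposable) reader2).Dispose();
        }
    }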

Likewise the chapter which deals with collections, including generic ones, comes before the chapter on generics. If I were a reader who didn’t know generics already, I think I’d get very confused reading about ICollection<T> without knowing what the T meant. Now don’t get me wrong: ordering material so that you don’t get “circular references” is often hard if not impossible. I just think it could have been done better here.

Aiming too deep?

It’s not like me to criticise a book for being too deep, but I’m going to make an exception here. Every so often, I came away from a topic thinking that it would have been better covered a little bit more lightly. Sometimes this was because a running example became laborious and moved a long way from anything you were actually likely to want to do in real life. The sections on “borrowing from functional programming” and memoization/currying/anonymous recursion felt guilty of this. It’s not that they’re not interesting topics, but the examples picked didn’t quite work for me.

The other problem with going deep is that you really, really need to get things right – because your readers are less likely to spot the mistakes. I’ll give three examples here:

  • Trey works hard on a number of occasions to avoid boxing, and points it out each time. Without any experience in performance tuning, you’d be forgiven for thinking that boxing is the primary cause of poor performance in .NET applications based on this book. While I agree that it’s something to be avoided where it’s possible to do so without bending the design out of shape, it doesn’t deserve to be laboured as much as it is here. In particular, Trey gives an example of a complex number struct and how he’s written appropriate overloads etc to avoid boxing. Unfortunately, to calculate the magnitude of the complex number (used to implement IComparable in a manner which violates the contract, but that’s another matter) he uses Math.Pow(real, 2) + Math.Pow(img, 2). Using a quick and dirty benchmark, I found that using real * real + img * img instead of Math.Pow made far, far more difference than whether or not the struct was boxed. (I happen to think it’s more readable code too, but never mind.) There was nothing wrong with avoiding the boxing, but in chasing the small performance gains, the big ones were missed.
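    For the curious, here’s roughly the kind of quick and dirty benchmark I mean – my own sketch, not code from the book, with an arbitrary iteration count; absolute timings will obviously vary by machine and JIT:

    // My own quick-and-dirty benchmark sketch: Math.Pow vs plain multiplication
    // for a squared magnitude. Requires: using System; using System.Diagnostics;
    const int Iterations = 100000000; // arbitrary, just big enough to measure
    double real = 3.0, img = 4.0, sum = 0;

    Stopwatch sw = Stopwatch.StartNew();
    for (int i = 0; i < Iterations; i++)
    {
        sum += Math.Pow(real, 2) + Math.Pow(img, 2);
    }
    sw.Stop();
    Console.WriteLine("Math.Pow: {0}ms", sw.ElapsedMilliseconds);

    sw = Stopwatch.StartNew();
    for (int i = 0; i < Iterations; i++)
    {
        sum += real * real + img * img;
    }
    sw.Stop();
    Console.WriteLine("Multiply: {0}ms", sw.ElapsedMilliseconds);

    Console.WriteLine(sum); // use the result so the JIT can't discard the loops entirely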

  • In the chapter on threading, there are some demonstrations of lock-free programming (before describing locking, somewhat oddly – and without describing the volatile modifier). Now, personally I’d want to discourage people from attempting lock-free programming at all unless they’ve got a really good reason (with evidence!) to support that decision – but if you’re going to do it at all, you need to be hugely careful. One of the examples basically has a load of threads starting and stopping, updating a counter (correctly) using Interlocked.Increment/Decrement. Another thread monitors the count and periodically reports it – but unfortunately it uses this statement to do it:

    threadCount = Interlocked.Exchange(ref numberThreads, numberThreads);

    The explanation states: “Since the Interlocked class doesn’t provide a method to simply read an Int32 value in an atomic operation, all I’m doing is swapping the numberThreads variable’s value with its own value, and, as a side effect, the Interlocked.Exchange method returns to me the value that was in the slot.” Well, not quite. It’s actually swapping the numberThreads variable’s value with a value evaluated at some point before the method call. If you rewrite the code like this, it becomes more obviously wrong:

    int tmp = numberThreads;
    Thread.Sleep(1000); // What could possibly happen during this time, I wonder?
    threadCount = Interlocked.Exchange(ref numberThreads, tmp);

    The call to Thread.Sleep is there to make it clear that numberThreads can very easily change between the initial read and the call to Interlocked.Exchange. The correct fix to the code is to use something like this:

    threadCount = Interlocked.CompareExchange(ref numberThreads, 0, 0);

    That sets numberThreads atomically to the value 0 if (and only if) its value is already 0 – in other words, it will never actually change the value, just report it. Now, I’ve laboured the explanation of why the code is wrong because it’s fairly subtle. Obvious errors in books are relatively harmless – subtle ones are much more worrying.

  • As a final example for this section, let’s look at iterator blocks. Did you know that any parameters passed to methods implemented using iterator blocks become public fields in the generated class? I certainly didn’t. Trey pointed out that this meant they could easily be changed with reflection, and that could be dangerous. (After looking with Reflector, it appears that local variables within the iterator block are also turned into public fields.) Now, leaving aside the fact that this is hugely unlikely to actually bite anyone (I’d be frankly amazed to see it as a problem in the wild) the suggested fix is very odd.

    The example Trey gives is where originally a Boolean parameter is passed into the method, and used in two places. Oh no! The value of the field can be changed between those two uses, which could lead to problems! True. The supposed fix is to wrap the Boolean value in an immutable struct ImmutableBool, and pass that in instead. Now, why would that be any better? Certainly you can’t change the value within the struct – but you can easily change the field’s value to be a completely different instance of ImmutableBool. Indeed, the breakage would involve exactly the same code, just changing the type of the value. The other train of thought which suggests that this approach would fail is that bool is already immutable, so it can’t be the mutability of the type of the field that causes problems. I’m sure there are much more useful things that Trey could have said in the two and a half pages he spent describing a broken fix to an unimportant problem.
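    If you’re curious to verify the “public fields” claim for yourself, here’s a rough sketch of my own (not code from the book) which uses reflection to dump the fields of the compiler-generated class backing an iterator block. The Count method and its parameters are purely illustrative, and the exact field names and visibility are compiler implementation details which may vary between versions:

    // Requires: using System; using System.Collections.Generic; using System.Reflection;
    static IEnumerable<int> Count(bool flag, int limit)
    {
        for (int i = 0; i < limit; i++)
        {
            yield return flag ? i : -i;
        }
    }

    static void Main()
    {
        // The runtime type of the returned sequence is the compiler-generated
        // nested class; list its instance fields along with their visibility.
        object iterator = Count(true, 3);
        FieldInfo[] fields = iterator.GetType().GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        foreach (FieldInfo field in fields)
        {
            Console.WriteLine("{0} {1}", field.IsPublic ? "public" : "non-public", field.Name);
        }
    }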

Sorry, that was getting ranty for a bit… but I hope you understand why. Before concluding this review, let’s look at one chapter which is somewhat different to the rest, and which I’ve mentioned before:

In Search of C# Canonical Forms (aka “Design and Implementation Guidelines” :)

I’d been looking forward to this part of the book. I’m always interested in seeing what other people think the most important aspects of class design are. The book doesn’t go into much detail about the more abstract side of class design (in this chapter, anyway – there’s plenty scattered through the book) but concentrates on core interfaces you might implement, etc. That’s fine. I’m still waiting for a C# book to be written which is truly on a par with Effective Java (I have the second edition waiting to be read at work…) but I wasn’t expecting it all to be here. So, was this chapter worth the wait?

Somewhat. I was very glad to see that the first point around reference types was “Default to sealed classes” – I couldn’t agree more, and the arguments were well articulated. Many other guidelines were either entirely reasonable, or at least ones I could go either way on. There were a few where I either disagreed or at least would have put things differently:

  • Implementing cloning with copy constructors: one point about cloning which wasn’t mentioned is that (to quote MSDN) “The resulting clone must be of the same type as or a compatible type to the original instance.” The suggested implementation of Clone in the book is to use copy constructors. This means that every subclass must override Clone to call its own copy constructor, otherwise the instance returned will be of the wrong type. MemberwiseClone, by contrast, always creates an instance of the same type. Yes, it means the constructor isn’t called – but frankly the example given (performing a database lookup in the constructor) is a pretty dodgy cloning scenario in the first place, in my view. If I create a clone and it doesn’t contain the same data as the original, there’s something wrong. Having said that, the caveats Trey gives around MemberwiseClone are all valid in and of themselves – we just disagree about their importance. The advice to not actually implement ICloneable in the first place is also present (and well explained).
  • Implementing IDisposable: Okay, so this is a tough topic, but I was slightly disappointed to see the recommendation that “it’s wise for any objects that implement the IDisposable interface to also implement a finalizer […]” Now admittedly on the same page there’s the statement that “In reality, it’s rare that you’ll ever need to write a finalizer” but the contradiction isn’t adequately resolved. A lot of people have trouble understanding this topic, so it would have been nice to see really crisp advice here. My 20-second version of it is: “Only implement a finalizer if you’re holding on to resources which won’t be cleaned up by their own finalizers.” That actually cuts out almost everything, unless you’ve got an IntPtr to a native handle (in which case, use SafeHandle instead). There’s a short sketch of what that rule looks like in practice just after this list.
    • As a side note, Trey repeatedly claims that “finalizers aren’t destructors” which irks me somewhat as the C# spec (the MS version, anyway) uses the word “destructor” exclusively – a destructor is the way you implement a .NET finalizer in C#. It would be fine to say “destructors in C# aren’t deterministic, unlike destructors in C++” but I think it’s worth acknowledging that the word has a valid meaning in the context of C#. Anyway…
  • Implementing equality comparisons: while this was largely okay, I was disappointed to see that there wasn’t much discussion of inheritance and how it breaks equality comparisons in a hard-to-fix way. There’s some mention of inheritance, but it doesn’t tackle the issue I think is thorniest: If I’m asking one square whether it’s equal to another square, is it enough to just check for everything I know about squares (e.g. size and position)? What if one of the squares is actually a coloured square – it has more information than a “basic” square. It’s very easy to end up with implementations which break symmetry, simply because the question isn’t well-defined. You effectively need to be asking “are these two objects equal in <this> particular aspect” – but you don’t get to specify the aspect. This is an example where I remember Effective Java (first edition) giving a really thorough explanation of the pitfalls and potential implementations. The coverage in Accelerated C# 2008 is far from bad – it just doesn’t meet the gold standard. Arguably it’s unfair to ask another book to compete at that level, when it’s trying to do so much else as well.
  • Ordering: I mentioned earlier on that the complex number class used for a boxing example failed to implement comparisons appropriately. Unfortunately it’s used as the example specifically for “how to implement IComparable and IComparable<T>” as well. To avoid going into too much detail, if you have two instances x and y such that x != y but x.Magnitude == y.Magnitude, you’ll find x.CompareTo(y) == y.CompareTo(x) (but with a non-zero result in both cases). What’s needed here is a completely different example – one with a more obvious ordering.
  • Value types and immutability: Okay, so the last bullet on the value types checklist is “Should this struct be immutable? […] Values are excellent candidates to be immutable types” but this comes after “Need to box instances of value? Implement an interface to do so […]” No! Just say no to mutable value types to start with! Mutable value types are bad, bad, bad, and should be avoided like the plague. There are a very few situations where it may be appropriate, but to my mind any advice checklist for implementing structs should make two basic points:
    • Are you sure you really wanted a struct in the first place? (They’re rarely the right choice.)
    • Please make it immutable! Pretty please with a cherry on top? Every time a struct is mutated, a cute kitten dies. Do you really want to be responsible for that?
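
Coming back to the IDisposable point above for a moment, here’s roughly what my 20-second rule looks like in practice. This is a minimal sketch of my own (the LogFile class and the file name are purely illustrative, and it’s not an example from the book): the FileStream it wraps – and the SafeFileHandle inside that – already has last-ditch cleanup of its own, so the class needs Dispose but no finalizer:

    // Requires: using System; using System.IO; using System.Text;
    public sealed class LogFile : IDisposable
    {
        // FileStream wraps a SafeFileHandle internally, which has its own
        // finalizer - so this class doesn't need one of its own.
        private readonly FileStream stream;

        public LogFile(string path)
        {
            stream = new FileStream(path, FileMode.Append);
        }

        public void Write(string message)
        {
            byte[] data = Encoding.UTF8.GetBytes(message + Environment.NewLine);
            stream.Write(data, 0, data.Length);
        }

        public void Dispose()
        {
            // No finalizer, so no GC.SuppressFinalize dance either: if the caller
            // forgets to dispose, the handle's own finalizer eventually cleans up.
            stream.Dispose();
        }
    }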

Conclusion

At the risk – nay, certainty – of repeating myself, I’m going to say that I like the book despite the (sometimes subjective) flaws pointed out above. As Shakespeare wrote in Julius Caesar, “The evil that men do lives after them; the good is oft interred with their bones.” So it is with book reviews – it’s a lot easier to give specific examples of problems than it is to report successes – but the book does succeed, for the most part. Perhaps the root of almost all my reservations is that it tries to do too much – I’m not sure whether it’s possible to go into that much detail and cater for those with little or no previous C# experience (even with Java/C++) and keep to a relatively slim volume. It was a very lofty goal, and Trey has done very well to accomplish what he has. I would be interested to read a book by him (and hey, potentially even collaborate on it) which is solely on well-designed classes and libraries.

In short, I recommend Accelerated C# 2008, with a few reservations. Hopefully you can judge for yourself whether my reservations would bother you or not. I think overall I slightly prefer C# 3.0 in a Nutshell, but the two books are fairly different.

Reaction

I sent this to Trey before publishing it, as is my custom. He responded to all my points extremely graciously. I’m not sure yet whether I can post the responses themselves – stay tuned for the possibility, at least. My one problem with reviewing books is that I end up in contact with so many other authors who I’d like to work with some day, and that number has just increased again…

20 thoughts on “Book review: Accelerated C# 2008 by Trey Nash”

  1. Essential C# 3.0 (http://www.amazon.co.uk/Essential-3-0-Framework-Microsoft-Development/dp/0321533925/ref=sr_1_1?ie=UTF8&s=books&qid=1217692061&sr=8-1) should be out soon; I found the C# 2.0 edition pretty good.

    Not sure how much of an update it will be though. The C# 2.0 book that Mark did was my fave on the language, covered most topics well.

    When it comes to the C# 3.0 books though, I’m a little hesitant to shell out because most of the stuff that has been added is just higher-level abstractions. I’m not sure if I am alone in my view, but when .NET 2.0 came out everybody wanted to read up on generics and how the runtime actually implemented them, along with some other new stuff. C# 3.0 to me seems like a relatively small investment to understand the new stuff.

    You have probably read it, but my fave .NET book is CLR Via C#. I’ve had my copy for a long time now and every now and then I will read it again.

  2. Yup, I liked Essential C# 2.0 as well, although I don’t think I’ve read it cover to cover. I’ll add the 3.0 version to my list of things to review at some point. I’ll also *have* to add “CLR via C#” – I’ve heard so many good things about that, but haven’t actually read it, sadly enough.

    As for buying a book about C# 3 – it really depends on how much detail you want to know, and whether you mind hunting around for it. These days practically anything can really be found online – the benefit of books is that they try to structure your learning into a sensible order, whilst still giving more than a single page tutorial.

    Of course, I’m somewhat biased…

  3. I can’t believe you haven’t read CLR via C# – I had that pegged down as a certainty!

    My preference now is really to read about the nuts and bolts of things, how they work rather than focusing purely on the syntax. I reckon I can figure that stuff out alone.

    I’m not sure if you are a pure .NET guy, but there are a few other books in the dev space that I have labelled “legendary”; these include Windows Internals (new edition out soon covering Vista and Win Server 2008) and Windows Via C/C++. Both excellent, and very informative.

    I’m still in shock about the CLR Via C# book, if there was one book I would tell people to read for .NET dev it would be that.

  4. I know, everyone says I should read it – and I will, sometime, I promise :)

    I’m mostly a pure .NET guy – I’m mostly blissfully ignorant of Windows internals, aside from the threading stuff I’ve read in Joe Duffy’s book.

    Don’t forget I’ve got Java (and all the associated technologies) to keep up with too…

    Jon

  5. Allow me to defend the discussion on constrained execution regions (CERs), if I may.

    I know it may seem like a deep topic, however, the reason it is mentioned in the chapter on exceptions is because it is entirely germane and required in order to create bullet proof exception neutral code.

    The problem is, in order to create exception neutral code, we must be able to have a fundamental set of operations that are guaranteed never to throw. That is so we can have a sequence of code that is guaranteed never to throw. I dig into the reasons why in my chapter on exceptions. However, in the CLR, that is impossible without CERs because certain exceptions (such as thread abort exceptions) may be delivered asynchronously, that is, the executing code does not even trigger them. Thus, we need a mechanism to be able to turn those off temporarily.

    After all, this is exactly why CERs were introduced. The reality is, especially when it comes to classes that manage unmanaged resources, CERs are essential to allowing one to implement a sort of RAII idiom in order to clean up resources reliably.

    Hope this helps explain my intentions with going so deep on such topics. ;-)

    -Trey

  6. Trey: I was okay with going deep on the topic (I certainly learned stuff there) – it was more the ordering that bothered me for that point. But your comments are very welcome, and it’s good for readers to get a balanced set of views instead of just mine :)

    Jon

  7. I’m just wondering how Microsoft could fail to hire someone like you. They have to invest in experts like you. Anyway, thanks for the very useful information you share with us. Keep going!

  8. I hope I can represent an average reader here (I’ve been in .NET / C# for less than a year, but I have some limited C++ background, read Stroustrup, etc.)
    “Accelerated C# 2008” is my first C# book and I like it very much. It’s not easy reading, and I’m still reading it. I’m periodically switching to “C# 3.0 In a Nutshell” and some other books and articles to clarify some stuff. Your book, Jon, is waiting its turn as well. It was very interesting to read your comments and the discussion here. I hope to be able to ask Trey some questions on his blog too. (It would be nice to have a more active discussion there.) I think it’s the best way to master a new language.

    “Essential C# 3.0” was mentioned above as another good C# / design book. Thank you, but what about the forthcoming new edition of “Framework Design Guidelines” by Krzysztof Cwalina and Brad Abrams?

    Jon, you said you’ve “got Java (and all the associated technologies) to keep up with”. I wonder if the new Scala language is among those associated technologies? I became interested in it after I read a remark by Bruce Eckel saying that it might be the next BIG language, gradually pushing Java out… Now I’m deep into learning it – it’s simply exciting.
    http://www.scala-lang.org/index.html
    http://www.scala-lang.org/docu/tutorials.html
    http://www.artima.com/shop/programming_in_scala

  9. Hey! I’m the reader who recommended this to you (or, actually, asked you to write a review). Thanks for doing this — and sorry it took me so long to comment. I’ve been busy, and just noticed this in the Headlines on the Visual Studio start page, of all places.

    Anyway, this was a great review, and a great resource. As I mentioned before, I read this (well, actually I’m still reading it) as a companion to the far lower level Head First C#. I’ve found it to be an excellent way to get further coverage of topics after HFC# introduces them, and I’ve been learning C# from the ground up using both books.

    One thing I think should have been emphasized a bit more in your review of this book is the level of readability and approachability. I’m not sure it makes sense to compare this to the Nutshell book (which I’ve also been reading lately) — Nutshell is absolutely a reference text, whereas the Accelerated book can be read straight through. I think it’s clear from the criticisms in your review that the Accelerated book goes into some real depth on many topics, but what’s not so clear is that it builds to that depth (er . . . digs to that depth?) from essentially nothing. The writing is also remarkably good, and manages to be engaging without the conversational informality of the Head First book. It’s been my bedtime reading for the last couple of months, and yet I can read it without falling asleep or having to take notes — which is a feat, ladies and gentlemen.

    On the other hand, as a beginner I don’t feel qualified to fully analyze a lot of the deeper points, which is why I really appreciate in-depth reviews like yours. It’s not so much that I’m going to 100% agree with your criticisms, but the things you’ve pointed out as issues now seem to me more controversial, and I’ll take care to look into them in further depth. With your Head First review, I mostly used Accelerated C# 2008 to get more detail and alternate viewpoints on the problem areas you highlighted; with this book, I’ll probably use yours and the Nutshell book.

    So — thank you. A great and extremely useful review of a book that I’ve learned a lot from and really enjoyed.

  10. Hi everybody from Russia!!! Recently I bought this book and read it. It is a wonderful book! I’ve just started programming with C#, and the book by Trey Nash has helped me so much! Thank you, Trey, for your work.

  11. Nice review.
    Regarding the Interlocked “Read” chapter, I was always under the impression that if you only use the Interlocked family of functions to modify a 32-bit number, all simple reads on that variable are guaranteed to be consistent.
    Has this changed?

  12. @Guyon: Not sure about that, but even if the read *was* consistent, it happens before the method call – so any changes made after the read but before entering Interlocked.Exchange would effectively be lost.

  13. @Vladimir: Yes, Scala is one of the things I’d like to learn about. Along with Python, F#, Erlang, Boo and Smalltalk. Hmm. I can’t see all of that happening any time soon…

    Jon

  14. Jon, don’t scare me. If you, such a well-known programmer, have trouble keeping up with the industry, how would ordinary programmers survive? :)
    BTW, proving a point from your analysis of “Accelerated C#”, I partially skipped the section about constrained execution regions and critical finalizers. It was too specialized.

  15. Sorry for posting it here, but I think this stuff is quite important for a novice C# programmer to understand, while Trey’s blog (http://www.treynash.net/2007/12/accelerated_c_2008_now_availab.html) does not allow posting messages, and posting errata at http://www.apress.com/book/errata/721 seems to go nowhere…

    On page 262 of “Accelerated C# 2008” it is said that “an event is a shortcut that saves you the time of having to write the register and unregister methods that manage a delegate chain yourself”. As I understand it, events have rather the opposite role. As the authors of the “C# 3.0 in a Nutshell” book say on page 112, “The main purpose of events is to prevent subscribers from interfering with each other.”
    Please see details at http://pro-thoughts.blogspot.com/2008/08/delegates-and-events-what-does-event.html

  16. Nice effort at a review. I’d like to offer “my view” about your writing style in this article: Avoid all the hoo-hawing. Everyone knows that this represents your personal view, that it is necessarily biased and that you don’t speak for the whole world. So much emotional baggage makes for a tiring read.

    To quote William – “The lady [gentleman, in this case] doth protest too much, methinks”

  17. @Jink: I do take your point. I’ll try to tone it down for future posts (although there’s quite a lot in the review I posted tonight, too). Maybe I should make do with a really strong disclaimer right at the top. I just want to avoid any hint of actually trying to disturb the market for my own gain with these reviews.

    Jon

  18. Hi Jon!

    re. finalizers != destructors

    I believe that the term ‘destructor’ was changed to ‘finalizer’ in the C# 2 spec for the very reason that it’s deeply misleading. I also seem to remember that using the tilde syntax was an acknowledged design mistake in C#.
