
Programming “in” a language vs programming “into” a language

I’m currently reading Steve McConnell’s Code Complete (for the first time – yes, I know that’s somewhat worrying) and there was one section which disturbed me a little. For those of you with a copy to hand, it’s in section 4.3, discussing the difference between programming in a language and programming into a language:

Programmers who program “in” a language limit their thoughts to constructs that the language directly supports. If the language tools are primitive, the programmer’s thoughts will also be primitive.

Programmers who program “into” a language first decide what thoughts they want to express, and then they determine how to express those thoughts using the tools provided by their specific language.

Now don’t get me wrong – I can see where he’s coming from, and the example he then provides (Visual Basic – keeping the forms simple and separating them from business logic) is fine, but he only seems to give one side of the coin. Here’s a different – and equally one-sided – way of expressing the same terms:

Programmers who program “in” a language understand that language’s conventions and idioms. They write code which integrates well with other libraries, and which can be easily understood and maintained by other developers who are familiar with the language. They benefit from tools which have been specifically designed to aid coding in the supported idioms.

Programmers who program “into” a language will use the same ideas regardless of their target language. If their style does not mesh well with the language, they will find themselves fighting against it every step of the way. It will be harder to find libraries supporting their way of working, and tools may well prove annoying. Other developers who come onto the project later and who have experience in the language but not the codebase will find it hard to navigate and may well accidentally break the code when changing it.

There is a happy medium to be achieved, clearly. You certainly shouldn’t restrict your thinking to techniques which are entirely idiomatic, but if you find yourself wanting to code in a radically different style to that encouraged by the language, consider changing language if possible!

If I were attacking the same problem in C# 1 and C# 3, I could easily end up with radically different solutions. Some data extraction using LINQ in a fairly functional way in C# 3 would probably be better solved in C# 1 by losing some of the functional goodness than by trying to back-port LINQ and then use it without the benefit of lambda expressions or even anonymous methods.
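
To make that concrete, here’s a minimal sketch of the kind of contrast I mean – the Person type and the data are invented purely for illustration:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

class Person
{
    public string Name;
    public int Age;
}

class DataExtraction
{
    static void Main()
    {
        List<Person> people = new List<Person>
        {
            new Person { Name = "Ada", Age = 36 },
            new Person { Name = "Tim", Age = 9 }
        };

        // C# 3: a small functional pipeline using LINQ and lambda expressions.
        List<string> adultNames = people.Where(p => p.Age >= 18)
                                        .Select(p => p.Name)
                                        .ToList();

        // The idiomatic C# 1 equivalent: lose the functional goodness and use
        // a plain loop over an ArrayList, rather than back-porting LINQ and
        // then using it without lambdas or anonymous methods.
        ArrayList adultNamesOldStyle = new ArrayList();
        foreach (Person p in people)
        {
            if (p.Age >= 18)
            {
                adultNamesOldStyle.Add(p.Name);
            }
        }

        Console.WriteLine(string.Join(", ", adultNames.ToArray()));
    }
}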

Accents and Conventions

That’s just between different versions of the same language. Between different actual languages, it can get much worse. If you’ve ever seen Java code written in a C++ style or vice versa, you’ll know what I mean. I’ve previously referred to this in terms of speaking a language with an accent – you can speak C# with a Java accent just as you can speak French with an English accent. Neither is pleasant.

At the lowest level, this is likely to be about conventions – and I’m pretty sure that when Steve writes “Invent your own coding conventions, standards, class libraries, and other augmentations” he doesn’t actually mean us to do it in a gratuitous fashion. It can be worth deviating from the “platform favoured” conventions sometimes, particularly if those differences are invisible to clients, but it should always be done with careful consideration. In a Java project I worked on a few years ago, we took the .NET naming conventions for interfaces (an I prefix) and constants (CamelCasing instead of SHOUTY_CAPS). Both of these made the codebase feel slightly odd, particularly where Java constants were used near our constants – but I personally found the benefits to be worth the costs. Importantly, the whole team discussed it before making any decisions.

Design Patterns

At a slightly higher level, many design patterns are just supported much, much better by some languages than others. The iterator pattern is a classic example. Compare the support for it from Java 6 and C# 2. On the “client” side, both languages have specific syntax: the enhanced for loop in Java and the foreach loop in C#. However, there is one important difference: if the iterator returned by GetEnumerator implements IDisposable (which the generic form demands, in fact) C# will call Dispose at the end of the loop, no matter how that occurs (reaching the end of the sequence, breaking early, an exception being thrown, etc). Java has no equivalent of this. Imagine that you want to write a class to iterate over the lines in a file. In Java, there’s just no safe way of representing it: you can make your iterator implement Closeable but then callers can’t (safely) use the enhanced for loop. You can make your code close the file handle when it reaches the end, but there’s no guarantee that will happen.
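
To spell out the C# side of that: the compiler expands a foreach loop into something like the following – a rough sketch of the expansion, not the exact compiler output:

using System.Collections.Generic;

class ForeachExpansion
{
    // Roughly what "foreach (string line in source) { Process(line); }"
    // compiles to in C# 2 and later.
    static void Consume(IEnumerable<string> source)
    {
        IEnumerator<string> iterator = source.GetEnumerator();
        try
        {
            while (iterator.MoveNext())
            {
                Process(iterator.Current);
            }
        }
        finally
        {
            // IEnumerator<T> extends IDisposable, so Dispose is called however
            // the loop exits: end of sequence, break, return or an exception.
            // Java's enhanced for loop offers no equivalent hook.
            iterator.Dispose();
        }
    }

    static void Process(string line) {}
}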

Then consider the “server” side of the iterator – the code actually providing the data. Java is like C# 1 – there’s no specific support for implementing an iterator. In C# 2 and above, iterator blocks (i.e. methods with yield statements) make life much, much easier. Writing iterators by hand can be a real pain. Reading a file line by line isn’t too bad, leaving aside the resource lifetime issue – but the complexity can balloon very quickly. Off-by-one errors are really easy to introduce.

So, if I were tackling a project which required reading text files line by line in various places, what would I do? In Java, I would take the reasonably small hit of a while loop in each place I needed it. In C# I’d write a LineReader class (if I didn’t already have one!) and use a more readable foreach loop. The contortions involved in introducing that idea into Java just wouldn’t be worth the effort.
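
For the curious, here’s a minimal sketch of such a LineReader – not necessarily how I’d write the production version, and skipping niceties like encoding options:

using System.Collections;
using System.Collections.Generic;
using System.IO;

public sealed class LineReader : IEnumerable<string>
{
    private readonly string path;

    public LineReader(string path)
    {
        this.path = path;
    }

    public IEnumerator<string> GetEnumerator()
    {
        // The iterator block opens the file only when iteration actually
        // starts, and the "using" statement closes it when the compiler-
        // generated iterator is disposed - which foreach guarantees.
        using (TextReader reader = File.OpenText(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return line;
            }
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

// Usage: foreach (string line in new LineReader("data.txt")) { ... }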

At a much higher level, we get into whole programming styles and paradigms. If your natural inclination is to write imperative code, you’re likely to create a mess (or get very frustrated) in a functional language. If the problem really does call for a functional language, find someone else to help you think in a more functional way. If the problem suits imperative programming just as well as it does functional programming, see if you can change the environment to something more familiar.

Conclusion

I’m not suggesting that Steve’s point isn’t valid – but he’s done his readers a disservice by only presenting one side of the matter. Fortunately, the rest of the book (so far) is excellent and humbling – to such a degree that this minor quibble stuck out like a sore thumb. In a book which had more problems, I would probably barely have even noticed this one.

There’s another possibility, of course – I could be completely wrong; maybe I’ve been approaching problems from a restrictive viewpoint all this time. How about you?

Book review: User Interface Design for Programmers (Joel Spolsky)

Introduction

This will be a brief review. It’s a short book, after all – a mere 134 pages for the main body of the book. That’s not a bad thing at all, mind – quite the opposite. This book is a quick read, and it’s easy to read the whole thing without skimming over anything.

I’m not good at UI design, either on the web or for rich client apps. My applications tend to be functional but not pretty – and even in “functional” terms I strongly suspect users don’t always find my choices intuitive. Fortunately, so far in my professional career I haven’t actually had to work on many front ends – but should that time come, this book will prove a good starting point for me.

The Good

If you want to know all the latest and greatest technical tricks for user interfaces, this isn’t the book for you. If you’ve taken a course in usability or HCI, this probably isn’t the book for you. It really is just introductory material – but it certainly seems to be a good way of starting to think in the right way when designing user interfaces. It tries to educate in terms of an approach to take rather than the details of what to do.

There are 18 bitesize chapters, almost all of which contain many illustrations and screenshots. The production quality of the book is fabulous – it’s full colour on glossy paper, and the screenshots are all large enough to easily illustrate the point being made. It must have cost Apress a fortune to print, but the results are well worth it, in my view. It’s not often that production is of such a high quality as to really make an impression (for me, anyway) but it really does grab attention. Oh, and I only spotted a single typo in the whole book. I didn’t find any technical errors at all, but then the material isn’t really directly technical to start with – and I’m not an expert on the topic.

The tone is conversational throughout, which may not be to everyone’s taste but is absolutely fine by me. It’s lighthearted without ever being patronising, and it’s clear that Joel is deadly serious about the subject matter itself. There are lots of examples of what works and what doesn’t – but always with clear, purposeful commentary rather than just as a UI hall of shame.

Joel describes how best to consider users, and how they’re likely to think. He focuses on the limitations of the user, in terms of how they’re unlikely to read documentation or even on-screen instructions; how requiring users to aim a mouse accurately is basically cruel; how the user’s model of your program probably isn’t the same as yours – and more. None of this is actually belittling towards the user – it’s just a case of different perspectives. He makes it crystal clear that the user has launched an application to accomplish a task, rather than for the joy of the application itself – so it’s understandable that if the user can’t accomplish their task simply (without excessive training), it’s generally the application’s fault rather than the user’s.

Although that summarises the basic message of the book, it doesn’t do justice to the close attention paid to exactly how those ideas manifest themselves in real world situations. I’m afraid it’s one of those books where just talking about which areas are covered doesn’t really give much of a feeling of the book itself. However, the writing is very similar to Joel’s blog posts – so if you enjoy those, I’d say you’re likely to enjoy the book. (I don’t agree with all of what Joel says on his blog, I should say – but whether or not I agree with his point, his writing style generally appeals to me.) 

The Bad

Okay, it’s not perfect – but my main gripe isn’t really the book’s fault. It was written in 2001, and has certainly dated when it comes to web access. I would love to see a second edition written right now, touching on the pros and cons (from a user perspective instead of a technical one) of more modern web applications. One topic which should have had more attention paid to it even back in 2001 is accessibility.

Finally, I do wish Joel would be slightly kinder towards the programmers behind the interfaces he disparages. This may seem a little odd coming from someone who has made a point of being brutally honest when it comes to low opinions of books, products, articles etc – but occasionally this book goes too far, in my view. Negativity can form constructive criticism, and when it’s directed at the user interfaces themselves, everything’s fine – but when it’s aimed at the developers it just doesn’t feel right.

Conclusion

If you’re in the target market for the book, it really is excellent. Just don’t get it expecting technical tricks or detailed discussion about each individual control.

Critical dead-weight

We’re all familiar with the idea of a technology achieving critical mass: having enough users (subscribers, customers, whatever the appropriate metric might be) to keep it alive and useful. This morning I was considering the idea of critical dead-weight: having enough users etc to keep the technology alive long past its natural lifetime.

Examples of technologies we might like to kill

  • SMTP: I suspect that completely preventing spam and other abuses (while maintaining a lot of the benefits we currently enjoy) would be difficult even with a modern protocol design, but the considerations shaping a messaging system created today would be completely different from those which shaped SMTP.
  • NNTP: I still prefer using a dedicated newsreader for newsgroups instead of the kind of web forum which seems fairly pervasive these days. The simple support for offline reading and deferred posting, the natural threading (including in the face of missing articles) and the nature of purpose-built applications all appeal to me. However, like SMTP there are various concerns which just weren’t considered in the original design.
  • HTML: In some ways HTML itself isn’t the biggest problem I see here (although making the markup language itself stricter to start with might have helped) – it’s the fact that browsers have always supported broken HTML. There are numbers which are often produced during discussions of browser (and particularly renderer) implementations to say just what proportion of browser code is dedicated to displaying invalid HTML in a reasonably pleasant way. I don’t recall the exact figures, and I suspect many are pulled out of thin air, but it’s a problem nonetheless. Folks who know more about the world of content itself are in a better position to comment on the core usefulness of HTML.
  • HTTP: Okay, this one is slightly tenuous. There are definitely bits of HTTP which could have been more sensibly defined (I seem to recall that the character encoding used when handling encoded bits of URL such as %2F etc is poorly specified, for example) but there are bigger issues at stake. The main thing is to consider whether the “single request, single response” model is really the most appropriate one for the modern web. It makes life more scalable in many ways, but even so it has various downsides. 
  • IPv4: This is one area where we already have a successor: IPv6. However, we’ve seen that the transition to IPv6 is happening at a snail’s pace, and there is already much criticism of this new standard, even before most of us have got there. I don’t profess to understand the details of the debate, but I can see why there is concern about the speed of change.
  • Old APIs (in Windows, Java etc): I personally feel that many vocal critics of Windows don’t take the time to appreciate how hard it is to maintain backwards compatibility to the level that Microsoft manages. This is not to say they do a perfect job, but I understand it’s a pretty nightmarish task to design a new OS when you’re so constrained by history. (I’ve read rumours that Windows 7 will tackle backward compatibility in a very different way, meaning that to run fully natively vendors will have to recompile. I guess this is similar to how Apple managed OS X, but I don’t know any details or even whether the rumours are accurate.) Similarly Java has hundreds or thousands of deprecated methods now – and .NET has plenty, too. At least there is a road towards planned obsolescence on both platforms, but it takes a long time to reach fruition. (How many of the deprecated Java APIs have actually been removed?)
  • Crufty bits of programming languages: Language designers aren’t perfect. It would be crazy to expect them to be able to look back 5, 10, 15 years later and say “Yes, I wouldn’t change anything in the original design.” I’ve written before about my own view of C# language design mistakes, and there are plenty in Java as well (more, in fact). Some of these can be deprecated by IDEs – for instance, Eclipse can warn you if you try to use a static member through a variable, as if it were an instance variable. However, it’s still not as nice as having a clean language to work with. Again, backward compatibility is a pain…

Where is the dead-weight?

There are two slightly different kinds of dead-weight here. The first is a communications issue: if two people currently use a certain protocol to communicate (e.g. SMTP) then in most cases both parties need to change to a particular new option before all or sometimes any of its advantages can be seen.

The other issue can be broadly termed backward compatibility. I see this as slightly different to the communications issue, even though that can cover some of the same bases (where one protocol is backwardly compatible with another, to some extent). The core problem here is “We’ve got a lot of stuff for the old version” where stuff can be code, content, even hardware. The cost of losing all of that existing stuff is usually greater (at least in the short to medium term) than the benefits of whatever new model is being proposed.

What can be done?

This is where I start running out of answers fast. Obviously having a transition plan is important – IPv6 is an example where it at least appears that the designers have thought about how to interoperate with IPv4 networks. However, it’s another example of the potential cost of doing so – just how much are you willing to compromise an ideal design for the sake of a simplified transition? Another example would be generics: Java generics were designed to allow the existing collection classes to have generics retrofitted without backward compatibility issues, and without requiring a transition to new actual classes. The .NET approach was very different – ArrayList and List<T> certainly aren’t interchangeable, for example – but this allows for (in my view) a more powerful design of generics in .NET.
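
A quick illustration of that .NET trade-off – a made-up snippet, not anything from either platform’s documentation:

using System.Collections;
using System.Collections.Generic;

class GenericsTransition
{
    static void Main()
    {
        List<int> ints = new List<int> { 1, 2, 3 };

        // ArrayList and List<T> are unrelated types, so this wouldn't compile:
        // ArrayList same = ints;

        // Crossing the boundary means copying (and boxing each int):
        ArrayList legacy = new ArrayList(ints);

        // The payoff for breaking compatibility is a stronger design:
        // ints.Add("oops");   // compile-time error
        legacy.Add("oops");    // compiles; the mistake only surfaces at run time
    }
}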

There are some problems which are completely beyond my sight at the moment. I can’t imagine SMTP being replaced in the next 5 years, for instance – which means its use is likely to grow rather than shrink (although probably not across the board, demographically speaking). Surely that means in 5 years’ time it’ll be even further away from replacement. However, I find it very hard to imagine that humankind will still be using SMTP in 200 years. It would be pretty sad for us if that were to be the case, certainly. I find myself considering the change to be inevitable and inconceivable, at the same time.

Some technologies are naturally replaceable – or can gradually begin to gather dust without that harming anyone. But should we pay more attention to the end of a technology’s life right from the beginning? How can we design away from technological lock-in? In particular, can we do so while still satisfying the business analysts who tend to like the idea of locking users in? Open formats and protocols etc are clearly part of the consideration, but I don’t think they provide the whole picture.

Transition is often painful for users, and it’s almost always painful to implement too. It’s a natural part of life, however – isn’t it time we got better at it?

Google, here I come!

This may be a somewhat unexpected announcement for many of you, but I’m delighted to announce that as of April 7th I will be an employee at Google. (If you really needed to follow that link to know who Google are, I have no idea what you’re doing reading my blog in the first place.)

This may seem an unusual move for someone who has been concentrating on C# for a while – but I view it as a once-in-a-lifetime opportunity to work with some of the smartest engineers around on hugely exciting projects used by billions of people. Strangely enough, at the moment I don’t really know how to build an application which supports billions of users. I’m looking forward to finding out.

This is likely to mean an end or at least a temporary hiatus in my professional use of C# – but that doesn’t mean my interest in it will die out. I’m still looking forward to seeing what’s in C# 4 :) I’m likely to be using Java for my day-to-day development, which is at least familiar ground, and as Josh Bloch works at Google I’ll be in good company! (Do you think he’d trade a copy of the new edition of Effective Java for a copy of C# in Depth?)

I’ll be working in the London office, but will spend the first two weeks in Mountain View. I don’t yet know what I’ll be working on, but many of the projects in London are in the mobile space, so that seems a reasonable possibility. Whatever project I end up on (and it’s likely to change reasonably frequently) it’s hard to imagine that life will be dull.

It seems fitting to thank my wife Holly at this point for supporting me in this – my daily commute will be significantly longer when I’m at Google, which means she’ll be doing even more of the childcare, not to mention coping on her own while I’m in sunny California. She’s been a complete rock and never once complained about the extra burden I’ll be putting on her.

So, I’m currently a mixture of terrified and extremely excited – and I can’t wait to fly out on Sunday…

We’ve shipped! C# in Depth is out now (ebook)

I’m hugely pleased to announce that C# in Depth is now available in finished form as an ebook. The hard copy will ship in about three weeks. Thanks to everyone who’s been involved, particularly the folks from Manning, Eric Lippert (for both the tech review and the very kind comments!) and all the peer reviewers. Oh, and Holly for putting up with my lack of spare time over the last year :)

The work isn’t over yet, of course… I’ve still got to write up the specification map on the book’s web site, and I’ll probably end up writing various articles for magazines etc for marketing purposes. Still, a very significant milestone!

I really hope to write another book at some point – but I think I’ll be taking a few months off first…

Book review: C# 3.0 in a Nutshell

Introduction

The original C# in a Nutshell was the book I cut my C# teeth on, so to speak. Basically I read it (well, the bits which weren’t just reproductions of MSDN – gone in this edition, thankfully), played around in Visual Studio, and then started to answer questions on the C# newsgroup. (That’s a great way of learning useful things, by the way – find another person’s problem which sounds like it’s one you might face in the future, then research the answer.)

Five and a half years later (Google groups suggests I cut my teeth in the C# newsgroup in August 2002) I’ve just been reading C# 3.0 in a Nutshell, by Joe and Ben Albahari (who are brothers, in case anyone’s wondering). Unsurprisingly, there’s rather a lot more in this version :) I bought the book with a fairly open mind, and as you’ll see, I was quite impressed…

For the purposes of this review, I’ll use “Nutshell” to mean “C# 3.0 in a Nutshell”. It’ll just make life a bit easier.

Scope

Nutshell covers:

  • C# 1.0 to 3.0 (i.e. it’s “from scratch”)
  • Core framework (serialization, assemblies, IO, strings, regex, reflection, threading etc)
  • LINQ to XML
  • A bit of LINQ to SQL while discussing LINQ in general

It explicitly doesn’t try to be a “one stop shop for every .NET technology”. You won’t find much about WinForms, WPF, WCF, ASP.NET etc – to which my reaction is “hoorah!”. I’ve probably said it before on this blog, but if you’re going to use any of those technologies in any depth, you really need to study that topic in isolation. One chapter in a bigger book just isn’t going to cut it.

That’s the scope of the book. The scope of my review is slightly more limited. I’ve read the C# stuff reasonably closely, and dipped into some of the framework aspects – particularly those I’m fairly knowledgeable about. The idea was to judge the accuracy and depth of coverage, which would be hard to do for topics which I’m relatively inexperienced in.

Format and style

Nutshell is a reference book which goes to some lengths to be readable in a “cover to cover” style. It’s worth mentioning the contrast here with C# in Depth, which is a “cover to cover” book which attempts to be useful as a reference too. I’d expect the index of Nutshell to be used much more than the index of C# in Depth, for example – but both books can be used in either way.

As an example of the difference in style, each section of Nutshell stands on its own: there’s little in the way of segues from one section to the next. That’s not to say that there are no segues, or indeed that there’s no flow: the order in which topics are introduced is pretty logical and sometimes there’s an explicit “having looked at X we’ll now look at Y” – but it feels to me like there’s less attempt to keep the reader going. That’s absolutely right for a reference book, and it doesn’t prevent the book from being read from cover to cover – it just doesn’t particularly encourage it.

There are lots of little callout notes, both in terms of “look here for more depth” and “be careful – here be dragons”. These are very welcome, and call attention to a lot of important points.

The layout is perfectly pleasant, in a “normal book” kind of way – it’s neither the alternating text/code/text/code style of the King/Eckel book, nor the “pictures everywhere” Head First format. In that sense it’s reasonably close to C# in Depth, although it uses comments instead of arrows for annotations. The physical format is slightly shorter and narrower than most technical books. This gives it a different feel which is hard to characterize, but definitely present.

Accuracy and Depth

The main problem I had with Head First C# was the inaccuracies (which, I have to stress, are hopefully going to be fixed to a large extent in a future printing). While there are inaccuracies in Nutshell, they are generally fewer and further between, and less important. In short, I’m certainly not worried about developers learning bad habits and incorrect truths from Nutshell. Again, I’ve sent my list of potential corrections to the authors, who have been very receptive. (It’s also worth being skeptical about some of the errata which have been submitted – I’ve taken issue with several of them.)

The level of depth is very well chosen, given the scope of the book. As examples, the threading section goes into the memory model and the string section talks a bit about surrogates. It would be nice to see a little bit more discussion about internationalisation (with reference to the Turkey test, for example) as well as more details of the differences between decimal and float/double – but these are all a matter of personal preference. By way of recommendation, I’d say that if every professional developer working in .NET knew and applied the contents of Nutshell, we’d be in a far better state as a development community and industry.
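
(For what it’s worth, the decimal/double difference can be shown in a few lines – a trivial example of my own, not one from the book:)

using System;

class FloatingPointDemo
{
    static void Main()
    {
        // double is binary floating point: 0.1 and 0.2 have no exact binary
        // representation, so the rounding error is visible in comparisons.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);        // False
        Console.WriteLine(d.ToString("R")); // 0.30000000000000004

        // decimal is decimal floating point: 0.1m is stored exactly.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);       // True
    }
}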

The coverage of C# is very good in terms of what it does – again, appropriate for a reference work. I’d like to think that C# in Depth goes into more detail of how and why the language is designed that way, because that’s a large part of the book’s raison d’être. It would be a pretty sad indictment of C# in Depth if Nutshell were a complete superset of its material, after all.

Competitive Analysis

So, why would you buy one book and not the other? Or should you buy both? Well…

  • Nutshell covers C# 1 as well as 2 and 3. The 3.0 features are clearly labelled, and there’s an overview page of what’s new for C# 3.0 – but if you know C# 1 and just want to learn what’s in 2 and 3, C# in Depth will take you on that path more smoothly. On the other hand, if you want to look up aspects of C# 1 for reference, Nutshell is exactly what you need. I wouldn’t really recommend either of them to learn C# from scratch – if you know Java to start with, then Nutshell might be okay, but frankly getting a “basics” tutorial style book is a better starting point.
  • Nutshell covers the .NET framework as well as the language. C# in Depth looks at LINQ to Objects, rushes through LINQ to SQL/XML/DataSet, and has a bit of a look at generic collections – it’s not in the same league on this front, basically.
  • Nutshell aims to be a reference book, C# in Depth aims to be a teaching book. Both work to a pretty reasonable extent at doing the reverse.

To restate this in terms of people:

  • If you’re an existing C# 1 developer (or C# 2 wanting more depth) who wants to learn C# 2 and 3 in great detail without wading through a lot of stuff you already know, get C# in Depth.
  • If you want a C# and .NET reference book, get Nutshell.
  • If you want to learn C# from scratch, buy a “tutorial” book about C# before getting either Nutshell or C# in Depth.

Clearly Nutshell and C# in Depth are in competition: there will be plenty of people who only want to buy one of them, and which one will be more appropriate for them will depend on the individual’s needs. However, I believe there are actually more developers who would benefit greatly from having both books. I’m certainly pleased to have Nutshell on my desk (and indeed it answered a colleague’s question just this morning) – and I hope the Albahari brothers will likewise gain something from reading C# in Depth.

Conclusion

C# 3.0 in a Nutshell is really good, and will benefit many developers. It doesn’t make me feel I’ve in any way wasted my time in writing C# in Depth, and the two make good companion books, even though the material is clearly overlapping. Obviously I’d like all my readers to buy C# in Depth in preference if you can only buy one – but it really does make sense to have both.

Range variables – is it just me that thinks they’re different?

Update: Ian Griffiths has a more detailed blog post about this matter.

Range variables are the variables used in query expressions. For instance, consider the expression:

from name in names
let length = name.Length
select new { name, length }

Here, name and length are range variables. In C# in Depth, I claim that range variables aren’t like any other type of variable. Eric commented that they’re not unlike iteration variables in foreach – and since then, I’ve seen both Jamie King/Bruce Eckel’s book, and now C# 3.0 in a Nutshell (which is very good, by the way), using “iteration variable” as the name for range variables. (Update as to the reason for this: they were called iteration variables until a reasonably recent version of the spec, apparently.)

Eric made the point that there are interesting restrictions on the use of iteration variables, just like there are restrictions on the use of range variables. However, that’s not enough of a similarity from my point of view. I think of them very, very differently.

When I declare an iteration variable in a foreach loop, the variable really “exists” in that method. It’s present in the IL for it as a local variable (or maybe a captured one). When I declare a range variable, it’s never a variable in that method – it’s just a parameter name in a lambda expression. It’s the shadow/promise/ghost of a variable – a variable yet to live, one which will only have a value when I start executing the query.

I suppose in that respect it’s closest to the parameter name given in a lambda expression or anonymous method – except that it’s valid for multiple parts of a query expression, carrying the value from place to place, via different “normal” parameters.
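
To see what I mean, here’s roughly what the compiler does with the query at the top of this post – a sketch of the spec’s translation, with the transparent identifier shown as t:

using System.Linq;

class RangeVariables
{
    static void Main()
    {
        string[] names = { "Jon", "Holly" };

        var query = from name in names
                    let length = name.Length
                    select new { name, length };

        // The compiler translates that into roughly this. Note that "name"
        // never exists as a local variable in this method: it's merely a
        // lambda parameter in the first Select, then a property of the
        // transparent identifier "t" in the second.
        var translated = names
            .Select(name => new { name, length = name.Length })
            .Select(t => new { t.name, t.length });
    }
}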

So, am I crazy for looking at range variables as a different type of beast, or is the rest of the world “wrong” on this one? :)

PostSharp and iterator blocks – a beautiful combination

I’ve blogged before about the issue of validating parameters in methods implemented with iterator blocks. Tonight, I’ve found a nice way of doing it (in common situations, anyway).

This morning I looked into PostSharp for the first time. PostSharp is an assembly post-processor, giving build-time AOP. I haven’t had any experience with AOP on .NET, but execution time AOP has always felt slightly risky to me. I wouldn’t like to say exactly why, and it may well be mere superstition – but I’m much happier with a tool modifying my code statically after a build than something messing around with it later.

So far, I’m very impressed with PostSharp. It’s a doddle to use and understand, and it has some very neat features. I thought I’d give it an interesting test, seeing what it would make of iterator blocks. I’d already seen its OnMethodBoundaryAspect, which is invoked when a method is called and when it exits. PostSharp works on the compiled code, so the method it sees with iterator blocks is the actual GetEnumerable (or whatever) method, rather than the state machine stuff the C# compiler generates to perform the actual iteration. In other words, the point at which PostSharp gets involved is precisely the point at which we want to validate parameters.

So, initially I had to write an aspect to validate parameter values. In order to make life easier for the developer, I’ve allowed the required parameter to be specified by name. I then find out the parameter position at weave time (post main build, pre execution) – the position as an integer is deserialized a single time when the real code is executed, so I don’t need to look it up from the list of parameters. This also means I can validate that the given parameter name is actually correct, and spot the error at weave time.

Here’s the code for the aspect:

using System;
using System.Reflection;
using PostSharp.Laos;

namespace IteratorBlocks
{
    [Serializable]
    class NullArgumentAspect : OnMethodBoundaryAspect
    {
        string name;
        int position;

        public NullArgumentAspect(string name)
        {
            this.name = name;
        }

        public override void CompileTimeInitialize(MethodBase method)
        {
            base.CompileTimeInitialize(method);
            ParameterInfo[] parameters = method.GetParameters();
            for (int index = 0; index < parameters.Length; index++)
            {
                if (parameters[index].Name == name)
                {
                    position = index;
                    return;
                }
            }
            throw new ArgumentException("No parameter with name " + name);
        }

        public override void OnEntry(MethodExecutionEventArgs eventArgs)
        {
            if (eventArgs.GetArguments()[position] == null)
            {
                throw new ArgumentNullException(name);
            }
        }
    }
}

Here’s the “production” code I’ve used to test it:

using System;
using System.Collections.Generic;

namespace IteratorBlocks
{
    class Iterators
    {
        public static IEnumerable<char> GetNormalEnumerable(string text)
        {
            if (text == null)
            {
                throw new ArgumentNullException("text");
            }
            foreach (char c in text)
            {
                yield return c;
            }
        }

        [NullArgumentAspect("text")]
        public static IEnumerable<char> GetAspectEnhancedEnumerable(string text)
        {
            foreach (char c in text)
            {
                yield return c;
            }
        }
    }
}

And here are the actual test cases:

using System;
using NUnit.Framework;

namespace IteratorBlocks
{
    [TestFixture]
    public class Tests
    {
        [Test]
        [ExpectedException(typeof(ArgumentNullException))]
        public void NormalEnumerableWithNullText()
        {
            Iterators.GetNormalEnumerable(null);
        }

        [Test]
        [ExpectedException(typeof(ArgumentNullException))]
        public void AspectEnhancedEnumerableWithNullText()
        {
            Iterators.GetAspectEnhancedEnumerable(null);
        }
    }
}

The NormalEnumerableWithNullText test fails, as I knew it would – but the AspectEnhancedEnumerableWithNullText test passes, showing that the argument validation is eager, despite the actual iteration being lazy. Hooray! Not only that, but I like the declarative nature of the validation check – it leaves the actual implementation of the iterator nice and clean.

The thing which probably impresses me most about PostSharp is how simple it is to use. This code literally worked first time – despite me being a complete novice with PostSharp, having watched the 5 minute video and that being about the limit of my investigation so far. It certainly won’t be the end of my investigation though.

So, why did none of you smart people who have no doubt known about PostSharp for ages actually tell me about it, hmm? I’m always the last to find out…

Book review: Head First C#

Important "versioning" note

This review tackles the first printing, from November 2007. Since then, the book has undergone more printings, with errata being fixed in each printing. I believe most or possibly all of the errors listed below are now fixed – although I don’t yet know whether there are more lurking. I have a recent (late 2009) printing which I intend to review when I have the time – which means it’s likely to be in 2010Q2 at the earliest, and more likely later. I don’t know how much editing has been done in terms of best/bad practice. I have left the review as I originally wrote it, as it is a fair (to my mind) representation of that first printing. (This is something to bear in mind with all reviews, mind you – check when they are written, and ideally which version they’re written about.) I’m reluctant to go as far as recommending the latest printing without actually reading it, but I’m fairly confident it’s a lot better than the first printing.

You should also bear in mind that many C# books – including Head First C#, the Essential C# book referenced in the review and my own book – are currently being revised for C# 4. If the HFC# 4 book comes out before I’ve had time to review the latest printing of the previous edition, I will probably go straight for that instead.

Resources

Introduction

This is a tough review to write. We already know I’m biased due to being in some way in competition with Andrew Stellman and Jennifer Greene (the authors), but I’m also not a huge fan of the Head First series in general. It doesn’t coincide with how I like to learn. I feel patronised by the pictures and crosswords rather than drawn into them, etc. However, I’m very aware that it’s really popular – lots of people swear by it, and I know that my own tastes are far from that of the majority. There’s also the fact that Head First C# really isn’t aimed at me at all – it’s aimed at people who don’t know C# to start with.

Funnily enough, that’s why I really, really wanted to like the book. It’s not actually competing with C# in Depth except for readers who have no idea what either book is like. I can’t imagine many people being in the situation where both books would be appropriate for them. I do want to find a good book for someone to learn C# from though, from scratch. My own book is completely unsuitable for that purpose. Essential C# by Mark Michaelis is my current favourite, I think – although I confess to not having read it all. It also doesn’t cover C# 3 – although I’m not sure whether I’d actually want to cover C# 3 for a reader who doesn’t know 1 or 2.

Anyway, I had hoped that Head First C# (HFC# from now on) would become a good recommendation – and part of this is because Andrew and Jennifer have certainly been very friendly on the blog and email. I don’t like being nasty, and I know how I’d feel reading a review like this of my book. Unfortunately, it falls a long way short of being worthy of recommendation, and I can’t really find a way of hiding that.

I "read" the whole book today. Now, it’s over 700 pages and I’ve been at work, so clearly I haven’t read every word of every page. I have mostly skimmed the code examples – vital for learning, I’ll gladly admit, but not hugely necessary for getting the gist of the book. I skimmed over the graphics section particularly quickly, being more interested in the language and core library elements than UI topics. With that in mind, let’s look at what I did and didn’t like:

The Good

This is going to be a broadly negative review, but it’s not like the book is without merit. Here’s what I liked:

  • Many of the examples were very nicely chosen. I liked the party organizer (chapter 5) and the room topology (chapter 7) ones in particular.
  • You really do get to create some nifty WinForms applications. I wouldn’t like to claim you really understand everything about them by the end of it, but I can see how they’re engaging.
  • Annotations in listings are a good thing in general, and I do like the handwritten feel of them.
  • Q&A is a good idea for a book like this. Predicting natural questions is an important part of
  • After a slightly scary chapter 1 (building an address book using a database without actually knowing any C#) it does start from the real beginning, to some extent.
  • The pictures in this book were actually not annoying, to my surprise. Some were even endearing. Even to me.
  • Apparently other people really like this book. It’s got loads of 5-star reviews on Amazon, and apparently it’s been selling fantastically well. From the posts in the book’s forum, people are really getting engaged, which can only be a good thing.

The Bad…

Formatting

I know the Head First series is all about its wacky style, etc – but I still find it can take a while to work out how you’re meant to navigate through the page. Often bits of the page do require a certain order, but that order isn’t always obvious.

My main problem with the formatting wasn’t nearly as general as that though, and I have to admit it’s a bit obsessive-compulsive. It’s the quotes in the code. They’re not straight. I’ve never seen an IDE which tries to work out "curly quotes" in code, and I hope I never do. When you’re used to seeing code in a real IDE, seeing curly quotes just looks wrong. It’s jarring and distracts from the business of learning.

Oh, and K&R bracing sucks. I know it makes the book shorter, but it makes it harder to read too. Just a personal opinion, of course :)  (Having said which, the bracing style is inconsistent through the book anyway.)

"Top-down" learning

I’m unashamedly a "bottom-up" person when it comes to learning. I like to get hold of several small building blocks, understand them to a significant degree, and then see what happens when I put them together. This book prefers the "show it all and explain some bits as we go along" approach, reasonably frequently mentioning how awful it is that most technical books try to start with console applications which don’t need lots of libraries before you even know what a library is.

I suspect this top-down way does indeed get people going much more quickly than my preferred style. (Remember, this is my preferred reading style, not just writing style. It’s no coincidence that I happen to write like I read though.) However, I believe there’s a great danger of people ending up as cargo-cult programmers. (I know I refer to that blog entry a lot. It happens to rock, and be relevant to much of why I write how/what I write.)

It’s true that top-down learning gets you doing flashy stuff more quickly than bottom-up. Whether that’s more fun or not depends on what appeals to you – I get a great sense of joy from thinking about a difficult topic and gradually understanding it, even if I have nothing to show for it beyond console apps which do little but sort numbers etc.

It feels like there ought to be a middle way, but I’ve no idea what it is yet. The "show something cool in chapter 1 and then go back over the details" approach favoured by the vast majority of technical books (including mine, to some extent) isn’t what I’m thinking of. Definitely something to think about there.

Incompleteness

I hadn’t expected this book to be in great depth, despite the quote on the back cover: "If you want to learn C# in depth and have fun doing it, this is THE book for you." I’d expected a bit more than this, however.

When I said earlier on that HFC# wasn’t competing with C# in Depth, I mentioned audiences. There’s something to be said about material as well though. Let’s look at the bulk of my book – chapters 3-11, which deal with the new language features from C# 2 and 3. Here’s how much is covered in HFC#:

  • Generics: generic collections get mentioned, but there’s no real explanation of how you’d write your own generic types. Generic methods aren’t mentioned. Type constraints, default(…), variance (or lack thereof), type inference – all absent.
  • Nullable types: not mentioned as far as I can see, including an absence of the null-coalescing operator.
  • Delegates: there’s a chapter on delegates which deals entirely with events and callbacks. No mention of their use in LINQ. No use of method group conversions, anonymous methods or support for variance. (In fact, there are some false claims around that, saying that the signature of an event handler has to match the delegate type exactly.)
  • Iterator blocks: IEnumerable<T> is mentioned, but IEnumerator<T> is never shown as far as I remember, and iterator blocks certainly aren’t shown. It mentions that you could implement IEnumerable<T> yourself, but doesn’t give any hints as to how, or what would be required.
  • Partial types: covered, albeit only mentioning classes. No sign of partial methods.
  • Static classes: covered to some extent, but without explaining that they prevent you from attempting to use the class inappropriately, or other details like the absence of constructors (not even a default one).
  • Separate getter/setter access: covered
  • Namespace aliases: not covered
  • Pragma directives: not covered
  • Fixed size buffers: not covered
  • InternalsVisibleTo: not covered (not really a language feature though)
  • Automatically implemented properties: covered, but without mentioning (AFAICR) that a hidden backing field is generated for you
  • Implicit typing: one call-out and a somewhat inaccurate Q&A
  • Object/collection initializers: covered, but without noting that you can remove the brackets from parameterless constructor calls
  • Implicitly typed arrays: not covered
  • Anonymous types: mentioned in a single annotation, but inaccurately. Used in a few query expressions.
  • Lambda expressions: not mentioned (in a book which "covers" C# 3.0. Wow.)
  • Expression trees: not mentioned
  • Changes to type inference/overloading: hard to explain a change to something which isn’t covered to start with
  • Extension methods: covered
  • Query expressions: covered to a very limited extent. No explanation of query translation. No explanation of query continuations (although grouping is always shown with continuations). No sign of multiple "from" statements or "join into".

See why I don’t think our books are really competing? Now, a lot of this really is absolutely fine. The details of generics don’t really belong in an introductory text – although some more information would have been welcome. Pragmas and fixed size buffers would have been completely out of place. Would it be too much to ask for some discussion of anonymous methods and lambda expressions though, particularly as lambda expressions really do make LINQ possible?

Maybe I’m being too harsh, looking at just C# 2 and 3 features (which is pretty much all my book does). Here are some things which would have counted as being missing had the book been published in 2002:

  • typeof(…)
  • Casting ("as" doesn’t count as a cast in my view – it’s an operator)
  • lock
  • volatile
  • Explicit interface implementation
  • The conditional operator (x ? y : z)
  • Hiding members with "new" instead of "override"
  • readonly
  • The sealed modifier on methods (not just classes)
  • ref/out/params modifiers for parameters
  • continue, goto, and break (break is mentioned for switch/case, but only there)

Maybe going into the memory model would have been a bit much (without which volatile would be pretty pointless) but not to even mention threading (and lock) feels a little worrying – especially as Application.DoEvents is abused instead. Explicit interface implementation is relatively obscure – but readonly fields? typeof(…)?

Fine, it’s an introductory text – but that means it’s a bad idea to talk about "mastering" LINQ and claiming that "by the time you’re through you’ll be a proficient C# programmer, designing and coding large-scale applications". Those quotes probably aren’t the authors’ fault, to be honest – and marketing has a way of exaggerating things, as we all know. But no-one should be in any doubt that this is far from a complete guide to C#.

Errors (please see update at the end of the post, and note at top)

This is by far my biggest gripe with the book. It’s probably coloured the whole review – things which I could have forgiven otherwise have been judged more harshly than they would have been if the entire book had been accurate. Accuracy is what I demand from a book above all else, partly because it’s not obvious to the reader when it’s absent. No-one could reasonably read the book without realising that they’re getting a top-down approach, or what the formatting is like, or that there are going to be crosswords etc. However, a reader has little to benchmark accuracy against unless the book is internally consistent.

Now, I’m unfortunate enough to have the first edition (November 2007). The book is currently undergoing its third printing, i.e. it’s had two rounds of corrections. These are listed in the errata and I’ve tried to take them into account – although there’s no way that a carefully reviewed and edited book should need that many corrections in such a short space of time. I will concede that a Head First book is likely to be much harder to get right than a "normal" book though, due to all the funky formatting.

Typos don’t worry me too much. It’s core technical errors which really bother me. It’s almost as if someone had taken my list of classic "myths" and decided to taunt me. I half expected "objects are passed by reference" to be in there – but as ref/out parameters aren’t covered at all, it’s not. Here are some of the worst/most amusing culprits though:

  • Claiming that string is a value type (in several places). Oh, and object, once.
  • Claiming that the range of sbyte is -127 to 128 (instead of -128 to 127). Same kind of mistake with short.
  • Constantly using field and property as if they were interchangeable. They’re not, they’re really, really not. Just because they’re used in a similar way doesn’t mean you can be this loose with the terminology.
  • Claiming that C# “marks objects for garbage collection”. In fact, for the first 6/7ths of the book there’s a strong implication that garbage collection is done in a deterministic way; that objects are immediately collected when the last reference is lost. We do eventually find out that it’s non-deterministic (although that explanation is also flawed) but by then it may well be too late for the reader. More on this in a minute.
  • Claiming that methods and statements always have to live in classes. Funny how structs can have behaviour too…
  • Claiming that "objects are variables" (in a heading, no less). I know from experience that trying to accurately describe the interplay between objects, variables, and their values is tricky – but even so…
  • Writing a hex dump utility using StreamReader – broken by definition, given that hex dump tools are used to show the binary contents of files, and StreamReader is meant to read text, decoding it as it goes. (See the sketch after this list for the byte-based approach.)
  • Claiming that structs always live on the stack.
  • Expanding WPF as “Windows Presentation Framework” (it’s actually Windows Presentation Foundation)
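
As promised above, a sketch of how a hex dump should read its input – raw bytes, with no text decoding anywhere (the file path comes from the command line):

using System;
using System.IO;

class HexDump
{
    static void Main(string[] args)
    {
        // File.ReadAllBytes gives us the binary contents directly:
        // no StreamReader, no encoding, no mangled "characters".
        byte[] data = File.ReadAllBytes(args[0]);
        for (int offset = 0; offset < data.Length; offset += 16)
        {
            Console.Write("{0:x8} ", offset);
            int end = Math.Min(offset + 16, data.Length);
            for (int i = offset; i < end; i++)
            {
                Console.Write(" {0:x2}", data[i]);
            }
            Console.WriteLine();
        }
    }
}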

These are all errors which have made it through not just technical review, but two rounds of post-publication editing. It’s possible that some of the errors on my list of about 60 (ignoring typos for the most part) are in the errata and I missed them (I did try to check them all) – but really, I shouldn’t have been able to find that many in the first place, even if they have been corrected. I’m worried if a C# book author believes that a char has 256 possible values, for instance. I know that the author is wrong, and to check the errata (this one has indeed been fixed) but I suspect many first edition readers will never look at the errata.

Now, there are errors and there are bad practices…

Bad practice through example

I know we don’t end up writing production code as book examples. Indeed, Eric mentioned this in chapter 1, where I’d left a few things out which I would normally consider as best practice: making a type sealed, making fields readonly etc. I left extra modifiers out for simplicity. I can understand that. I can also understand using public fields until properties have been explained but:

  • There’s no reason to use poorly named variables/parameters, including Pascal-cased local variables
  • Writing loops like for (int i=1; i <= 10; i++) instead of the more idiomatic for (int i=0; i < 10; i++). C# is 0-based in many ways, but often the authors seemed to really wish it were 1-based.
  • Continuing to use public fields even after explaining how they’re not really a good idea. (Well, sort of explaining that. There’s a frequent implication that they’re not so bad if other classes really need to be able to access your data. It’s a very long way from my preferred policy of no non-private variables whatsoever.)
  • String concatenation in loops with nary a mention of StringBuilder
  • Bizarre combination of "is" and "as", using "as" to perform the conversion instead of casting. If you’re going to use "as", do it up front and compare with null to start with… (see the sketch after this list)
  • Advising leaving out braces for single line for/if statements. The code which is left in these examples is unreadable, IMO. You have to really concentrate to see what’s in and what’s out.
  • Advising to stick with the absolutely abhorrent "convention" (aka laziness, and leaving things as VS creates them) of naming event handlers with things like button1_Click. No. Name methods with what they do, then the event hookup code will make it obvious – and it makes it clearer where you can reuse a single method for multiple events.
  • Repeatedly declaring enums within classes as if you couldn’t write them as top-level types. Oh, and messing up the naming conventions there, too.
  • Showing a mutable struct without explaining that this should always be avoided is a bad idea.
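
To illustrate the "is"/"as" point, here are both forms side by side – Shape and Circle are hypothetical types invented for the example:

class Shape {}

class Circle : Shape
{
    public void Draw() {}
}

class Conversions
{
    static void DoubleCheck(Shape shape)
    {
        // The combination criticised above: "is" performs a type check,
        // and then "as" performs a second one.
        if (shape is Circle)
        {
            Circle c = shape as Circle; // a plain cast would be clearer here
            c.Draw();
        }
    }

    static void SingleCheck(Shape shape)
    {
        // Doing the conversion up front and comparing with null: one check.
        Circle circle = shape as Circle;
        if (circle != null)
        {
            circle.Draw();
        }
    }
}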

I could go on. I’ve got pages of notes about this kind of thing (and this is only after owning the book for less than a day, don’t forget) but I think you get the message. Note: some of these things are definitely a matter of opinion, such as bracing style. Some other things are so widely regarded as a bad idea that I can’t see much defence for them.

Examples matter. I don’t expect to see production code, and I understand that sometimes for teaching purposes best practices will take second place – but where it wouldn’t hurt to use best practice, please do!

Likewise telling the truth from the start matters. It’s very hard to correct bad habits and incorrect impressions. If you state that "When we set lucky (a variable) to null, it’s no longer pointing at its object, so it gets garbage collected" then people will not only be potentially confused about whether it’s the variable or the object which gets garbage collected, but they’ll get the impression that it’s garbage collected immediately. Waiting 476 pages to correct that impression is a bad idea.

Public fields are another example. I’ve mentioned that I can see their usefulness before we’ve encountered properties – but even so, surely it would have been worth explaining immediately that we’re going to hide them as soon as possible.

Conclusion

You may have gathered by now that I’m not a fan of the book ;) I suspect a lot of this review has come off as a rant, which is a pity. That tends to happen when I get on my technical high horse, which I guess I’ve done here. The fact is, I’ve spent a lot of today feeling deeply saddened. I wouldn’t be surprised to find that this is the best-selling C# book of 2008 – which means we’ll get a lot more people on the newsgroup with some very odd ideas about how C# works. That’s the trouble – I’ve seen what happens when people are fed the "structs live on the stack" myth. I’ve seen how easily people can believe that strings are value types, and that it doesn’t really matter if you use a StreamReader for binary data and then cast chars to bytes. It causes trouble.

I’m reasonably sure the world could do with a good introductory book on C#. HFC# has convinced me that such a book could be fun and have pretty pictures. But HFC# isn’t quite it. (And no, I don’t plan on writing it either.)

Feedback

I gave an advance copy of this review to Andrew, who has replied remarkably politely and pleasantly (and quickly). He’s a true gent, giving a really thorough reply when I suspect most authors (perhaps including myself) would either have given short shrift to a review like this, or possibly ignored it completely and hoped it would go away.

He believes I was looking too much for a complete reference rather than an introductory text. I would say I wasn’t looking for completeness, just better judgement about what belongs in a C# book (I’d have preferred ref/out to be covered, for instance, and would happily have lost the section on GDI+ double buffering). I specifically don’t want an actual reference book if I’m going to recommend it to people to learn from – though I may be biased towards a reference style, as that’s what I personally tend to learn from.

It was really the errors that affected this review more than anything though, and while a very few of them could be debated (whether an implicit reference conversion to a base class counts as upcasting, for instance – I’d only include explicit upcasting) many others are undeniable and really shouldn’t have made it through the review process. It’s possible that I’ll come back to HFC# in a year’s time and be more impressed by it, but I suspect the aversion to errors will overcome any mellowing towards the style :(

Update (22nd March 2008)

Andrew continues to amaze me in terms of taking this review in his stride. He’s now looking through the errors I found, and many should be fixed in the next printing. Bear in mind that without a lot of the errors, I would have had a more positive view from the start.

In short, I’m still not quite convinced I’d recommend the book, but my opinion has certainly mellowed (anticipating the error fixing, of course).

Book review/preview: “C# Query Expressions And Supporting Features in C# 3.0” (Eckel/King)

Introduction

Let me make one thing very clear before anything else: this is a preview. Bruce Eckel has made the preview of what appears to be part of a bigger book available free from his website. The book is by Bruce Eckel and Jamie King, and the preview available (1.0 at the time of writing) covers the following topics:

  • Extension methods
  • Implicitly typed local variables
  • Automatic properties
  • Implicitly-typed arrays
  • Object initializers
  • Collection initializers
  • Anonymous types
  • Lambda expressions
  • Query expression translation

For obvious reasons, this had me slightly worried when I first looked at it – it’s clearly a reasonably direct competitor to C# in Depth. There’s not a lot of C# 3 which isn’t covered here. I can only think of these things off-hand:

  • Expression trees
  • Object initializers setting properties of embedded objects
  • New type inference rules
  • New overload resolution rules

I was surprised to see expression trees not get even a mention. I’m sure they’ll be covered elsewhere in the full book, but I personally think it’s worth introducing them at around the same time as lambda expressions. (It would be odd for me to have any other view, given their location in C# in Depth!) I don’t know whether the new rules for type inference and overload resolution will be covered elsewhere. If they’re going to be covered but the authors haven’t done the writing yet, we should all feel sympathy for them. That section (9.4) was the hardest one in the whole book for me to write. It may be possible to describe all of the rules in a way which doesn’t make both reader and writer want to tear their hair out, but I have yet to see it.
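
For the unconvinced, here’s a minimal sketch of my own showing why the two belong together: the same lambda text can become either executable code or inspectable data.

    using System;
    using System.Linq.Expressions;

    class Program
    {
        static void Main()
        {
            Func<int, int> compiled = x => x * 2;         // compiled straight to IL
            Expression<Func<int, int>> tree = x => x * 2; // built as a data structure

            Console.WriteLine(compiled(21));        // 42
            Console.WriteLine(tree);                // x => (x * 2)
            Console.WriteLine(tree.Compile()(21));  // 42, compiled on demand
        }
    }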

I don’t know exactly what the bigger book will cover, or how it will be published, or whether it will be available in preview form, etc. From here on in, when I say “the book” I mean “the preview bit”.

Format

After the TOC and introductory material, the book basically consists of 4 things:

  • Headings (occasional)
  • Explanatory text
  • Code
  • Exercises

There are no diagrams or tables as far as I can see. The main body of the book (pages 7–137) sets exercises quite frequently (there are 54 of them), and the answers, with brief explanations, make up pages 138–233.

The code is always complete, including using directives, a Main method (for non-library classes) and output. A lot of the time the code essentially forms unit tests to demonstrate the features (a technique we used in Groovy in Action, by the way). The authors have their own build system which not only runs the tests, but also allows comments to express pieces of code which shouldn’t compile. The output is also checked, i.e. that running the code produces the output in the book.

There are pros and cons to this approach. For those who haven’t read any of my book (what are you waiting for? The first chapter is free!) I personally use a tool I wrote called “Snippy” which allows me to include short but complete examples without using directives and Main declarations appearing everywhere. My comments on this book’s approach:

  • I’d be surprised to see any misleading/broken examples. (There’s one piece of code which doesn’t go quite as far as I believe it should, but that’s a different matter.)
  • It encourages unit testing.
  • It leads to longer code with repetition (Main etc).
  • The build system leads to non-standard comments, like //c! to indicate a non-compiling line and //e! to indicate where an exception should be thrown. (There’s a rough sketch of the overall format after this list.)
  • There’s a little too much text dealing with the build system – it’s distracting.
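
To give a flavour of the format, something like this – my own reconstruction of the style, not code lifted from the book:

    using System;

    static class ExtensionBasics
    {
        internal static string Shout(this string text)
        {
            return text + "!";
        }

        static void Main()
        {
            Console.WriteLine("abc".Shout());
            // int i = 42.Shout(); //c! - their marker for "must not compile"
        }
    }
    /* Output:
    abc!
    */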

The “long code” issue has been dealt with by squeezing a lot of code into short spaces – K&R bracing and very little whitespace. Personally I find this really quite tricky to read, to the extent that I ended up skipping a lot of the code examples. I’ve tried to keep all my code examples pretty short, and none of them are over a page. (That was an unstated goal at the start of the project, in fact.)

Now, how much code do you like to see in a book? That’s very much a personal decision. I happen to like quite a lot of explanatory text – so that’s how I write, too. In this book I reckon (and it’s only a complete guess) about 50% of the book is code, 35% is prose and 15% is exercises. That mix could well pose a challenge for me as a reader if I didn’t already know the topic. For other readers, however, it’s probably spot on.

What this book doesn’t have (fortunately) is lots of examples which go on for pages and pages, producing a complete application with little explanation. I’ve seen that too often – and a lot of such code simply doesn’t teach me anything. I’ve never particularly liked the “build a complete application” approach to books, partly because it doesn’t actually mean all the bases are covered (you don’t see every issue in every app), and partly because it means there’s a lot of turn-the-handle code which isn’t relevant to the topic being taught. It can be a useful technique in some situations, but I like it when irrelevant code is omitted (and just made available for download).

The other personal question is whether or not you like exercises. I certainly believe in trying out new things as you read about them, but exercises don’t really fill that need in an ideal way for me. I like to try to apply a new technique to an existing bit of code, or an existing database for instance – and obviously the author has no way of knowing that. Now, that only says something about me, not about the value of exercises. This book has been used for teaching in a university, and I suspect the exercises have been appropriate in that setting. Note for future consideration (and reader feedback): should I include exercises in any future books I might write? Should I create some for the C# in Depth web site?

Style

(Some of this might reasonably count as format as well – it’s a blurry line.)

I have a consciously “light” style. I write in the first person and try to include opinion and the occasional joke or at least lighthearted comment. (Footnote 1 in chapter 3 is my favourite, for reference.)

Eckel and King’s book is more like a textbook. The authors haven’t allowed their personalities to come through in the text at all – and it’s clearly a deliberate decision. Good or bad? Hard to say – it depends on the context. The word “textbook” is the key here, for me – I can’t remember textbooks having any personality when I was a student, so if they’re going for that market it’s spot on. In the “professional developer” market it may have a harder time. Again, personally I’m a fan of a bit of personality peeping through the text – although it has to be firmly controlled, and it’s better to err on the side of caution. I’ve read some books which seem to be all about the author’s personality, without letting the subject matter have a look-in.

I do think more headings (of varying sizes, if you see what I mean) and the occasional diagram would be helpful, though. It’s a pretty unrelenting code-text-code-text-exercise-code-text-code-text mix. The code is all just “there” with no headings, nothing to visually break things up. (It’s not actually run-on with the text – it’s clear where text stops and code starts, and there’s even a helpful vertical line down the side of the code – it’s just that there’s nothing to make you take a mental breath.) This could be due to it being a preview – it’s possible that more formatting will occur later on. If that wasn’t the plan, I’d encourage the authors to at least consider it. (Wow, see how easy it is to slip into arrogance? Must make a memo to give Joel Spolsky some notes on writing later ;)

Content

The content is pretty full-on, and very language-focused. As an example, I suspect few books on C# 3 will go into any detail about transparent identifiers in query expressions. In my book I explain them for one particular clause (“let”) and then just mention when the compiler will introduce one for other clauses. Eckel and King’s book gives full examples of the translation for all the clauses available, as far as I can tell.
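
For anyone wondering what a transparent identifier actually is, here’s a minimal sketch of my own, showing a “let” clause and roughly what the compiler turns it into:

    using System;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            string[] words = { "query", "let", "transparent" };

            // Query expression using "let":
            var query = from word in words
                        let length = word.Length
                        select word + ": " + length;

            // Roughly the translated form. The anonymous type introduced by
            // the first Select is the transparent identifier (written here as
            // "temp"; the real one can't be expressed in source code):
            var translated = words
                .Select(word => new { word, length = word.Length })
                .Select(temp => temp.word + ": " + temp.length);

            foreach (string line in translated)
            {
                Console.WriteLine(line);
            }
        }
    }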

That’s just an example – and possibly an extreme one – but this book does go into a reasonable amount of depth when it comes to the facts. There were also two items I wasn’t aware of: the option of explicitly stating that an ordering is ascending, and the ability to create “extension delegates”. They’re not huge omissions in my text (and at least I’ve now got notes for them), but the fact that I missed them and these guys didn’t is (to me) an indication of their thoroughness.
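
For reference, a quick sketch of both – with the caveat that the “extension delegates” half is my reading of the term, not necessarily theirs:

    using System;
    using System.Linq;

    static class Extras
    {
        internal static string Doubled(this string text)
        {
            return text + text;
        }

        static void Main()
        {
            // Explicitly ascending ordering - "ascending" is the default,
            // but it's legal to state it:
            string[] words = { "bb", "a", "ccc" };
            var ordered = from word in words
                          orderby word.Length ascending
                          select word;
            Console.WriteLine(string.Join(", ", ordered.ToArray()));

            // An extension method used to create a delegate, with the target
            // of the call baked in as the first argument:
            Func<string> doubler = "abc".Doubled;
            Console.WriteLine(doubler()); // abcabc
        }
    }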

Now, having dealt with the plain facts, there’s not a lot of opinion in the book – pieces of text which encourage the reader to think about why C# has changed the way it has, or the best way to take advantage of those changes. Again, this is a valid approach for a textbook – especially one used in conjunction with a course where the lecturer can talk about these things – but I suspect the non-academic market likes guidance.

The accuracy level seemed pretty high to me. Not perfect, but then I don’t expect mine is either, even with Eric’s thorough eye. In everyone’s interests, I’ve mailed the authors my specific comments and nitpicks – as the book is still at a preview stage, corrections can be made relatively easily, I expect.

Conclusion

Obviously I can only comment on the book as I’ve seen it so far – I’ve no idea whether the other chapters will be more framework-focused. However, it’s good to see another book that tries to “go deep” like mine does. While this clearly makes it competition in many ways, I think we’re aiming at different audiences. If I’m right in my assumption that this is trying to be a textbook, there may be little overlap in potential market. (I suspect the same will be true of Head First C#, which is likely to be my next review – but for the opposite reason. I suspect I’ll find that HFC# is more aimed at beginners – something that certainly couldn’t be said of this book or mine.)

Overall this is a very solid text, in many senses. It’s not the easiest book to follow due to its style, but it’s detailed and accurate. Given a choice between ease of reading and that detail and accuracy, I’d always choose the latter for anything I’d want to refer back to – and this book certainly counts as a good reference for query expressions. Obviously I’m hoping people find my style appealing and that I’m detailed and accurate too, but I can’t give that judgement myself.

As a final word – if you haven’t downloaded it yet, why not? It’s a totally free download of only just over a meg. I don’t think I even had to register anywhere to get it. Reading other work is useful for me as a writer, but there’s no need for you to trust my judgement, nor indeed would it be wise to do so. If you missed it before, I’ll even save you scrolling up for the download link.

I’d be interested to hear whether your opinions coincide with mine. If you’ve read my book and can compare and contrast, so much the better. I’ve let the authors know that this review is coming, so I suspect they’ll be checking here for feedback. (They’d be foolish not to, and I have no reason to believe they’re fools.)