C# in Depth – it’s really real

On Saturday, when I returned from Mountain View, there were two boxes waiting for me. Guess what was inside…

So yes, it really exists, it’s slightly slimmer than expected – and even Amazon have it for sale now instead of as a “pre-release”. Apparently they’ll have stock on May 3rd. Now that it’s on “normal” sale, it’s open for reviews. So, any of you who happen to have read the eBook and wish to make your feelings clear on Amazon or Barnes and Noble, please feel very free… (Really, I’d appreciate it. But please be honest!)

On a different note, it’s just under two years since I first talked to Manning about Groovy in Action. Since then we’ve had two more sons, I’ve changed jobs twice, and written/co-written two books. (Holly’s probably written about 10… I’ve lost track of just how many she’s got on the go at any one time.) Wow. It feels more like five years. Who knows what the next two years will bring?

Programming “in” a language vs programming “into” a language

I’m currently reading Steve McConnell’s Code Complete (for the first time – yes, I know that’s somewhat worrying) and there was one section which disturbed me a little. For those of you with a copy to hand, it’s in section 4.3, discussing the difference between programming in a language and programming into a language:

Programmers who program “in” a language limit their thoughts to constructs that the language directly supports. If the language tools are primitive, the programmer’s thoughts will also be primitive.

Programmers who program “into” a language first decide what thoughts they want to express, and then they determine how to express those thoughts using the tools provided by their specific language.

Now don’t get me wrong – I can see where he’s coming from, and the example he then provides (Visual Basic – keeping the forms simple and separating them from business logic) is fine, but he only seems to give one side of the coin. Here’s a different – and equally one-sided – way of expressing the same terms:

Programmers who program “in” a language understand that language’s conventions and idioms. They write code which integrates well with other libraries, and which can be easily understood and maintained by other developers who are familiar with the language. They benefit from tools which have been specifically designed to aid coding in the supported idioms.

Programmers who program “into” a language will use the same ideas regardless of their target language. If their style does not mesh well with the language, they will find themselves fighting against it every step of the way. It will be harder to find libraries supporting their way of working, and tools may well prove annoying. Other developers who come onto the project later and who have experience in the language but not the codebase will find it hard to navigate and may well accidentally break the code when changing it.

There is a happy medium to be achieved, clearly. You certainly shouldn’t restrict your thinking to techniques which are entirely idiomatic, but if you find yourself wanting to code in a radically different style to that encouraged by the language, consider changing language if possible!

If I were attacking the same problem in C# 1 and C# 3, I could easily end up with radically different solutions. Some data extraction using LINQ in a fairly functional way in C# 3 would probably be better solved in C# 1 by losing some of the functional goodness than by trying to back-port LINQ and then use it without the benefit of lambda expressions or even anonymous methods.
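To make the contrast concrete, here’s a hypothetical extraction (the data and names are invented purely for illustration): the C# 3 version uses LINQ to Objects and lambda expressions, while the C# 1-style version is just an explicit loop and sort – usually a better bet than trying to recreate LINQ without the language support.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Extraction
    {
        static void Main()
        {
            List<string> words = new List<string> { "pear", "apple", "fig", "banana" };

            // C# 3: a functional-style query using LINQ to Objects and lambda expressions.
            List<string> longWords = words.Where(word => word.Length > 3)
                                          .OrderBy(word => word)
                                          .ToList();

            // C# 1 style: an explicit loop and an explicit sort. (Genuine C# 1 would
            // use ArrayList rather than List<string>, of course - there were no generics.)
            List<string> longWordsOldStyle = new List<string>();
            foreach (string word in words)
            {
                if (word.Length > 3)
                {
                    longWordsOldStyle.Add(word);
                }
            }
            longWordsOldStyle.Sort();

            Console.WriteLine(string.Join(", ", longWords.ToArray()));
        }
    }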

Accents and Conventions

That’s just between different versions of the same language. Between different actual languages, it can get much worse. If you’ve ever seen Java code written in a C++ style or vice versa, you’ll know what I mean. I’ve previously referred to this in terms of speaking a language with an accent – you can speak C# with a Java accent just as you can speak French with an English accent. Neither is pleasant.

At the lowest level, this is likely to be about conventions – and I’m pretty sure that when Steve writes “Invent your own coding conventions, standards, class libraries, and other augmentations” he doesn’t actually mean us to do it in a gratuitous fashion. It can be worth deviating from the “platform favoured” conventions sometimes, particularly if those differences are invisible to clients, but it should always be done with careful consideration. In a Java project I worked on a few years ago, we took the .NET naming conventions for interfaces (an I prefix) and constants (CamelCasing instead of SHOUTY_CAPS). Both of these made the codebase feel slightly odd, particularly where Java constants were used near our constants – but I personally found the benefits to be worth the costs. Importantly, the whole team discussed it before making any decisions.

Design Patterns

At a slightly higher level, many design patterns are just supported much, much better by some languages than others. The iterator pattern is a classic example. Compare the support for it from Java 6 and C# 2. On the “client” side, both languages have specific syntax: the enhanced for loop in Java and the foreach loop in C#. However, there is one important difference: if the iterator returned by GetEnumerator implements IDisposable (which the generic form demands, in fact) C# will call Dispose at the end of the loop, no matter how that occurs (reaching the end of the sequence, breaking early, an exception being thrown, etc). Java has no equivalent of this. Imagine that you want to write a class to iterate over the lines in a file. In Java, there’s just no safe way of representing it: you can make your iterator implement Closeable but then callers can’t (safely) use the enhanced for loop. You can make your code close the file handle when it reaches the end, but there’s no guarantee that will happen.
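To make the C# side of that explicit, here’s roughly what the compiler does with a foreach loop over a generic sequence. This is a simplified sketch rather than the exact specified expansion, but the key point is the finally block, which disposes the iterator however the loop exits.

    using System;
    using System.Collections.Generic;

    class ForeachExpansion
    {
        static void Main()
        {
            IEnumerable<string> lines = new List<string> { "first", "second" };

            // foreach (string line in lines) { Console.WriteLine(line); }
            // compiles into something broadly equivalent to the code below.
            IEnumerator<string> iterator = lines.GetEnumerator();
            try
            {
                while (iterator.MoveNext())
                {
                    string line = iterator.Current;
                    Console.WriteLine(line);
                }
            }
            finally
            {
                // Runs whether the loop finishes normally, breaks early, or an
                // exception is thrown - this is what Java's enhanced for loop
                // has no equivalent of.
                iterator.Dispose();
            }
        }
    }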

Then consider the “server” side of the iterator – the code actually providing the data. Java is like C# 1 – there’s no specific support for implementing an iterator. In C# 2 and above, iterator blocks (i.e. methods with yield statements) make life much, much easier. Writing iterators by hand can be a real pain. Reading a file line by line isn’t too bad, leaving aside the resource lifetime issue – but the complexity can balloon very quickly. Off-by-one errors are really easy to introduce.

So, if I were tackling a project which required reading text files line by line in various places, what would I do? In Java, I would take the reasonably small hit of a while loop in each place I needed it. In C# I’d write a LineReader class (if I didn’t already have one!) and use a more readable foreach loop. The contortions involved in introducing that idea into Java just wouldn’t be worth the effort.
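For the curious, here’s a minimal sketch of what such a LineReader might look like – the class name comes from the post, but the implementation details are just one plausible way of writing it as an iterator block. The file is only opened when iteration begins, and the using statement closes it when the iterator is disposed, which foreach guarantees.

    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.IO;

    // A minimal sketch: an IEnumerable<string> over the lines of a text file.
    public sealed class LineReader : IEnumerable<string>
    {
        private readonly string path;

        public LineReader(string path)
        {
            this.path = path;
        }

        public IEnumerator<string> GetEnumerator()
        {
            // Opened lazily when iteration starts; closed when the iterator is
            // disposed, however the consuming foreach loop terminates.
            using (TextReader reader = File.OpenText(path))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    yield return line;
                }
            }
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }

Client code then just becomes foreach (string line in new LineReader("data.txt")) { ... } – with no explicit resource handling in sight.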

At a much higher level, we get into whole programming styles and paradigms. If your natural inclination is to write imperative code, you’re likely to create a mess (or get very frustrated) in a functional language. If the problem really does call for a functional language, find someone else to help you think in a more functional way. If the problem suits imperative programming just as well as it does functional programming, see if you can change the environment to something more familiar.

Conclusion

I’m not suggesting that Steve’s point isn’t valid – but he’s done his readers a disservice by only presenting one side of the matter. Fortunately, the rest of the book (so far) is excellent and humbling – to such a degree that this minor quibble stuck out like a sore thumb. In a book which had more problems, I would probably barely have even noticed this one.

There’s another possibility, of course – I could be completely wrong; maybe I’ve been approaching problems from a restrictive viewpoint all this time. How about you?

Book review: User Interface Design for Programmers (Joel Spolsky)

Introduction

This will be a brief review. It’s a short book, after all – a mere 134 pages for the main body of the book. That’s not a bad thing at all, mind – quite the opposite. This book is a quick read, and it’s easy to read the whole thing without skimming over anything.

I’m not good at UI design, either on the web or for rich client apps. My applications tend to be functional but not pretty – and even in “functional” terms I strongly suspect users don’t always find my choices intuitive. Fortunately, so far in my professional career I haven’t actually had to work on many front ends – but should that time come, this book will prove a good starting point for me.

The Good

If you want to know all the latest and greatest technical tricks for user interfaces, this isn’t the book for you. If you’ve taken a course in usability or HCI, this probably isn’t the book for you. It really is just introductory material – but it certainly seems to be a good way of starting to think in the right way when designing user interfaces. It tries to educate in terms of an approach to take rather than the details of what to do.

There are 18 bitesize chapters, almost all of which contain many illustrations and screenshots. The production quality of the book is fabulous – it’s full colour on glossy paper, and the screenshots are all large enough to easily illustrate the point being made. It must have cost Apress a fortune to print, but the results are well worth it, in my view. It’s not often that production is of such a high quality as to really make an impression (for me, anyway) but it really does grab attention. Oh, and I only spotted a single typo in the whole book. I didn’t find any technical errors at all, but then the material isn’t really directly technical to start with – and I’m not an expert on the topic.

The tone is conversational throughout, which may not be to everyone’s taste but is absolutely fine by me. It’s lighthearted without ever being patronising, and it’s clear that Joel is deadly serious about the subject matter itself. There are lots of examples of what works and what doesn’t – but always with clear, purposeful commentary rather than just as a UI hall of shame.

Joel describes how best to consider users, and how they’re likely to think. He focuses on the limitations of the user, in terms of how they’re unlikely to read documentation or even on-screen instructions; how requiring users to aim a mouse accurately is basically cruel; how the user’s model of your program probably isn’t the same as yours – and more. None of this is actually belittling towards the user – it’s just a case of different perspectives. He makes it crystal clear that the user has launched an application to accomplish a task, rather than for the joy of the application itself – so it’s understandable that if the user can’t accomplish their task simply (without excessive training), it’s generally the application’s fault rather than the user’s.

Although that summarises the basic message of the book, it doesn’t do justice to the close attention paid to exactly how those ideas manifest themselves in real world situations. I’m afraid it’s one of those books where just talking about which areas are covered doesn’t really give much of a feeling of the book itself. However, the writing is very similar to Joel’s blog posts – so if you enjoy those, I’d say you’re likely to enjoy the book. (I don’t agree with all of what Joel says on his blog, I should say – but whether or not I agree with his point, his writing style generally appeals to me.) 

The Bad

Okay, it’s not perfect – but my main gripe isn’t really the book’s fault. It was written in 2001, and has certainly dated when it comes to web access. I would love to see a second edition written right now, touching on the pros and cons (from a user perspective instead of a technical one) of more modern web applications. One topic which should have had more attention paid to it even back in 2001 is accessibility.

Finally, I do wish Joel would be slightly kinder towards the programmers behind the interfaces he disparages. This may seem a little odd coming from someone who has made a point of being brutally honest when it comes to low opinions of books, products, articles etc – but occasionally this book goes too far, in my view. Negativity can be part of constructive criticism, and when it’s directed at the user interfaces themselves, everything’s fine – but when it’s aimed at the developers it just doesn’t feel right.

Conclusion

If you’re in the target market for the book, it really is excellent. Just don’t get it expecting technical tricks or detailed discussion about each individual control.

Critical dead-weight

We’re all familiar with the idea of a technology achieving critical mass: having enough users (subscribers, customers, whatever the appropriate metric might be) to keep it alive and useful. This morning I was considering the idea of critical dead-weight: having enough users etc to keep the technology alive long past its natural lifetime.

Examples of technologies we might like to kill

  • SMTP: I suspect that completely preventing spam and other abuses (while maintaining a lot of the benefits we currently enjoy) would be difficult even with a modern protocol design, but the considerations behind a messaging system created today would be completely different to those which shaped SMTP.
  • NNTP: I still prefer using a dedicated newsreader for newsgroups instead of the kind of web forum which seems fairly pervasive these days. The simple support for offline reading and deferred posting, the natural threading (including in the face of missing articles) and the nature of purpose-built applications all appeal to me. However, like SMTP there are various concerns which just weren’t considered in the original design.
  • HTML: In some ways HTML itself isn’t the biggest problem I see here (although making the markup language itself stricter to start with might have helped) – it’s the fact that browsers have always supported broken HTML. There are numbers which are often produced during discussions of browser (and particularly renderer) implementations to say just what proportion of browser code is dedicated to displaying invalid HTML in a reasonably pleasant way. I don’t recall the exact figures, and I suspect many are pulled out of thin air, but it’s a problem nonetheless. Folks who know more about the world of content itself are in a better position to comment on the core usefulness of HTML.
  • HTTP: Okay, this one is slightly tenuous. There are definitely bits of HTTP which could have been more sensibly defined (I seem to recall that the character encoding used when handling encoded bits of URL such as %2F etc is poorly specified, for example) but there are bigger issues at stake. The main thing is to consider whether the “single request, single response” model is really the most appropriate one for the modern web. It makes life more scalable in many ways, but even so it has various downsides. 
  • IPv4: This is one area where we already have a successor: IPv6. However, we’ve seen that the transition to IPv6 is happening at a snail’s pace, and there is already much criticism of this new standard, even before most of us have got there. I don’t profess to understand the details of the debate, but I can see why there is concern about the speed of change.
  • Old APIs (in Windows, Java etc): I personally feel that many vocal critics of Windows don’t take the time to appreciate how hard it is to maintain backwards compatibility to the level that Microsoft manages. This is not to say they do a perfect job, but I understand it’s a pretty nightmarish task to design a new OS when you’re so constrained by history. (I’ve read rumours that Windows 7 will tackle backward compatibility in a very different way, meaning that to run fully natively vendors will have to recompile. I guess this is similar to how Apple managed OS X, but I don’t know any details or even whether the rumours are accurate.) Similarly Java has hundreds or thousands of deprecated methods now – and .NET has plenty, too. At least there is a road towards planned obsolescence on both platforms, but it takes a long time to reach fruition. (How many of the deprecated Java APIs have actually been removed?)
  • Crufty bits of programming languages: Language designers aren’t perfect. It would be crazy to expect them to be able to look back 5, 10, 15 years later and say “Yes, I wouldn’t change anything in the original design.” I’ve written before about my own view of C# language design mistakes, and there are plenty in Java as well (more, in fact). Some of these can be mitigated by IDEs – for instance, Eclipse can warn you if you try to use a static member through a variable, as if it were an instance member. However, it’s still not as nice as having a clean language to work with. Again, backward compatibility is a pain…

Where is the dead-weight?

There are two slightly different kinds of dead-weight here. The first is a communications issue: if two people currently use a certain protocol to communicate (e.g. SMTP) then in most cases both parties need to change to a particular new option before all (or sometimes any) of its advantages can be seen.

The other issue can be broadly termed backward compatibility. I see this as slightly different to the communications issue, even though that can cover some of the same bases (where one protocol is backwardly compatible with another, to some extent). The core problem here is “We’ve got a lot of stuff for the old version” where stuff can be code, content, even hardware. The cost of losing all of that existing stuff is usually greater (at least in the short to medium term) than the benefits of whatever new model is being proposed.

What can be done?

This is where I start running out of answers fast. Obviously having a transition plan is important – IPv6 is an example where it at least appears that the designers have thought about how to interoperate with IPv4 networks. However, it’s another example of the potential cost of doing so – just how much are you willing to compromise an ideal design for the sake of a simplified transition? Another example would be generics: Java generics were designed to allow the existing collection classes to have generics retrofitted without backward compatibility issues, and without requiring a transition to new actual classes. The .NET approach was very different – ArrayList and List<T> certainly aren’t interchangeable, for example – but this allows for (in my view) a more powerful design of generics in .NET.
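A quick toy illustration of that non-interchangeability: the generic collection is a genuinely separate type with its own runtime representation, rather than a compile-time veneer over the old one.

    using System;
    using System.Collections;
    using System.Collections.Generic;

    class GenericsContrast
    {
        static void Main()
        {
            // .NET 1.x: elements are stored as object, so value types are boxed
            // and a cast is needed on the way out; mistakes only show up at runtime.
            ArrayList oldList = new ArrayList();
            oldList.Add(42);
            int fromOld = (int) oldList[0];

            // .NET 2.0: List<int> is a distinct type, reified at execution time.
            // No cast, no boxing - and adding a string here wouldn't even compile.
            List<int> newList = new List<int>();
            newList.Add(42);
            int fromNew = newList[0];

            // The two aren't interchangeable: an API taking ArrayList won't accept
            // a List<int> - part of the price of the cleaner design.
            Console.WriteLine(fromOld + fromNew);
        }
    }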

There are some problems which are completely beyond my sight at the moment. I can’t imagine SMTP being replaced in the next 5 years, for instance – which means its use is likely to grow rather than shrink (although probably not across the board, demographically speaking). Surely that means in 5 years’ time it’ll be even further away from replacement. However, I find it very hard to imagine that humankind will still be using SMTP in 200 years. It would be pretty sad for us if that were to be the case, certainly. I find myself considering the change to be inevitable and inconceivable, at the same time.

Some technologies are naturally replaceable – or can gradually begin to gather dust without that harming anyone. But should we pay more attention to the end of a technology’s life right from the beginning? How can we design away from technological lock-in? In particular, can we do so while still satisfying the business analysts who tend to like the idea of locking users in? Open formats and protocols etc are clearly part of the consideration, but I don’t think they provide the whole picture.

Transition is often painful for users, and it’s almost always painful to implement too. It’s a natural part of life, however – isn’t it time we got better at it?

Google, here I come!

This may be a somewhat unexpected announcement for many of you, but I’m delighted to announce that as of April 7th I will be an employee at Google. (If you really needed to follow that link to know who Google are, I have no idea what you’re doing reading my blog in the first place.)

This may seem an unusual move for someone who has been concentrating on C# for a while – but I view it as a once-in-a-lifetime opportunity to work with some of the smartest engineers around on hugely exciting projects used by billions of people. Strangely enough, at the moment I don’t really know how to build an application which supports billions of users. I’m looking forward to finding out.

This is likely to mean an end or at least a temporary hiatus in my professional use of C# – but that doesn’t mean my interest in it will die out. I’m still looking forward to seeing what’s in C# 4 :) I’m likely to be using Java for my day-to-day development, which is at least familiar ground, and as Josh Bloch works at Google I’ll be in good company! (Do you think he’d trade a copy of the new edition of Effective Java for a copy of C# in Depth?)

I’ll be working in the London office, but will spend the first two weeks in Mountain View. I don’t yet know what I’ll be working on, but many of the projects in London are in the mobile space, so that seems a reasonable possibility. Whatever project I end up on (and it’s likely to change reasonably frequently) it’s hard to imagine that life will be dull.

It seems fitting to thank my wife Holly at this point for supporting me in this – my daily commute will be significantly longer when I’m at Google, which means she’ll be doing even more of the childcare, not to mention coping on her own while I’m in sunny California. She’s been a complete rock and never once complained about the extra burden I’ll be putting on her.

So, I’m currently a mixture of terrified and extremely excited – and I can’t wait to fly out on Sunday…

We’ve shipped! C# in Depth is out now (ebook)

I’m hugely pleased to announce that C# in Depth is now available in finished form as an ebook. The hard copy will ship in about three weeks. Thanks to everyone who’s been involved, particularly the folks from Manning, Eric Lippert (for both the tech review and the very kind comments!) and all the peer reviewers. Oh, and Holly for putting up with my lack of spare time over the last year :)

The work isn’t over yet, of course… I’ve still got to write up the specification map on the book’s web site, and I’ll probably end up writing various articles for magazines etc for marketing purposes. Still, a very significant milestone!

I really hope to write another book at some point – but I think I’ll be taking a few months off first…