I’ve always been aware that .NET supports multiple languages (obviously) and that Microsoft has been experimenting in this area. It’s only recently struck me just how far that experimentation goes, though.
Here’s a list – almost certainly incomplete – of .NET languages from Microsoft alone.
- C#
- VB (or VB.NET if you wish)
- C++/CLI
- F#
- IronPython
- IronRuby
- Spec#
- M (with Oslo)
- Axum
- Managed JScript
- PowerShell
- Cω
- J# (not shipping any more, I believe)
Some of these are research languages, more important for the ideas they’ve later contributed to mainstream languages than for anything else – but there’s still a lot of effort represented in the list.
In addition, there are third party languages targeting .NET, such as Boo, IronScheme and Scala. (Wikipedia lists loads of them.)
Now, think back to the time before .NET. Was Microsoft actively experimenting with languages back then? Plenty of people were trying things against the JVM, but Sun was pretty much absent from that party. .NET seems to be a "missing ingredient" that has allowed smart folk at Microsoft to let their imaginations loose in ways which they couldn’t previously. (Of course, not everyone in the language business at MS started there: Jim Hugunin was hired by Microsoft precisely because of his work on IronPython.)
I wonder how long this will continue.
Tower of Babel, or land of polyglots?
What does this mean for the average developer? Currently, if you’re writing a non-web application in .NET, you really only need to know a single language – and any of them will do. (Plus potentially SQL of course…) Compare this with web developers who have to be intimately familiar with HTML, CSS and JavaScript – and the differences between various implementations.
How long will it be before backend developers are expected to know a dynamic language, a static OO language and a functional language? Is the benefit of mixing several languages in a project worth the impedance mismatch and the increased skillset requirements? I’m not going to make any predictions on that front – I can certainly see the benefits of each of these approaches in certain situations. The languages have been designed to play well together, but there are bound to be limitations and oddities: times when you need to change how you write your F# so that it’s easily callable from C#, for example.
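To make that concrete, here’s a minimal F# sketch (the module and function names are invented purely for illustration) of the kind of adjustment I mean. An idiomatic F# function parameter surfaces to a C# caller as FSharpFunc<int, int>, which C# lambdas won’t convert to implicitly – so you might expose a System.Func-based variant purely for the benefit of C# callers:

```fsharp
module MathHelpers

open System

// Idiomatic F#: to a C# caller this compiles to a static method whose first
// parameter is FSharpFunc<int, int> - awkward to construct from C#.
let applyTwice (f: int -> int) x = f (f x)

// C#-friendly variant: taking System.Func<int, int> means a C# caller can
// simply write MathHelpers.ApplyTwiceForCSharp(n => n + 1, 5).
let ApplyTwiceForCSharp (f: Func<int, int>) (x: int) = f.Invoke(f.Invoke(x))
```

Neither version is wrong – the point is just that the most natural F# isn’t always the most consumable F#.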
Whether or not you learn multiple languages to a professional level is one thing, but becoming familiar with them is a different matter. In the course of co-authoring Functional Programming for the Real World (where "co-author" is a bit of a stretch – I’ve really played more of an editorial role, with the added bonus of picking on Tomas whenever I felt he was perhaps a little harsh towards C#) I’ve learned to appreciate many of F#’s qualities, but I don’t really know the language. If someone asked me to write a complete application in it (rather than just a toy experiment) I’d be reaching for books every other minute. I hope I’ll learn more over the course of time, but I doubt that I’ll ever be sufficiently experienced in it to put it on my CV. The same goes for IronPython, although I’m considerably more likely to need Python at work than F#. (Python is one of the three "approved" languages at Google, along with Java and C++.) None of this means that time spent in these languages is wasted: I’ll be able to apply a lot of what I’ve learned about F# to my C# coding, even if it will periodically make me pine for things like pattern matching and asynchronous workflows.
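For anyone who hasn’t met it, here’s a toy example (the types are invented purely for illustration) of the kind of pattern matching I find myself missing in C#: the compiler knows every case of the union, and warns if the match doesn’t cover them all.

```fsharp
// A discriminated union: a shape is exactly one of these cases.
type Shape =
    | Circle of float
    | Rectangle of float * float

// Pattern matching deconstructs each case; leave one out and the
// compiler warns about the incomplete match.
let area shape =
    match shape with
    | Circle radius -> System.Math.PI * radius * radius
    | Rectangle (width, height) -> width * height
```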
I think it’s pretty much a given that these days we all need to bring a wide range of technologies to bear in most jobs. While it used to be just about feasible in the .NET 1.1 days to have a pretty good grasp of all the major aspects (ASP.NET for sites and web services, ADO.NET, WinForms, Windows services, class libraries, interop), it’s just impossible these days. We learn something new when we need to – but usually against the background of a familiar language. How well would we cope if we had to learn whole new languages (to the level of being able to use them for production code) as often as we have to learn new libraries?
This worries me a little. I’m pleased to see that C# 4 is a much smaller change than the previous versions were. Admittedly I’d rather have had immutability support than dynamic, but that’s just me… and that’s the problem, too. While I worry about our ability to actually learn everything that’s becoming available, it’s all good stuff. Can there be "too much of a good thing"?
What I really don’t want to see is developers having to know multiple languages, and everyone knowing them poorly. I’m a big believer in having a thorough understanding of your language, so that even if everything else is new, you can rely on your understanding of that aspect of your code. It would be a shame if the pressure of knowing many languages turned many of us into cargo cult programmers. The utopia would be for us all to turn into language renaissance developers. I suspect the reality will be somewhere between the two.
Still, as long as I get to keep helping authors write about languages I know almost nothing about, I’m sure I’ll be happy…