All posts by jonskeet

Recent activities

It’s been a little while since I’ve blogged, and quite a lot has been going on. In fact, there are a few things I’d have blogged about already if it weren’t for “things” getting in the way.

Rather than writing a whole series of very short blog posts, I thought I’d wrap them all up here…

C# in Depth: next MEAP drop available soon – Code Contracts

Thanks to everyone who gave feedback on my writing dilemma. For the moment, the plan is to have a whole chapter about Code Contracts, but not include a chapter about Parallel Extensions. My argument for making this decision is that Code Contracts really change the feel of the code, making it almost like a language feature – and its applicability is almost ubiquitous, unlike PFX.
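
This isn’t a listing from the chapter – just a minimal sketch of my own – but it shows the sort of thing I mean about contracts reading almost like part of the language:

using System.Diagnostics.Contracts;

public static string TrimmedName(string name)
{
    // Precondition: checked at execution time by the binary rewriter
    // (or statically by the checker)
    Contract.Requires(name != null);
    // Postcondition: refers to the eventual return value
    Contract.Ensures(Contract.Result<string>() != null);
    return name.Trim();
}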

I may write a PFX chapter as a separate download, but I’m sensitive to those who (like me) appreciate slim books. I don’t want to “bulk out” the book with extra topics.

The Code Contracts chapter is in the final stages before becoming available to MEAP subscribers. (It’s been “nearly ready” for a couple of weeks, but I’ve been on holiday, amongst other things.) After that, I’m going back to the existing chapters and revising them.

Talking in Dublin – C# 4 and Parallel Extensions

Last week I gave two talks at Epicenter in Dublin. One was on C# 4, and the other on Code Contracts and Parallel Extensions. Both are now available in a slightly odd form on the Talks page of the C# in Depth web site. I no longer write “formal” PowerPoint slides, so the downloads consist of simple text bullet points, along with silly hand-drawn slides. No code yet – I want to tidy it up a bit before including it.

Podcasting with The Connected Show

I recently recorded a podcast episode with The Connected Show. I’m “on” for the final two-thirds of the show – about an hour of me blathering on about the new features of C# 4. If you can understand generic variance just by listening to me talking about it, you’re a smart cookie ;)

(Oh, and if you like it, please express your amusement on Digg / DZone / Shout / Kicks.)

Finishing up with Functional Programming for the Real World

Well, this hasn’t been taking much of my time recently (I bowed out of all the indexing etc!) but Functional Programming for the Real World is nearly ready to go. Hard copy should be available in the next couple of months… it’ll be really nice to see how it fares. Much kudos to Tomas for all his hard work – I’ve really just been helping out a little.

Starting on Groovy in Action, 2nd edition

No sooner does one book finish than another one starts. The second edition of Groovy in Action is in the works, which should prove interesting. To be honest, I haven’t played with Groovy much since the first edition of the book was finished, so it’ll be interesting to see what’s happened to the language in the meantime. I’ll be applying the same sort of spit and polish that I did in the first edition, and asking appropriately ignorant questions of the other authors.

Tech Reviewing C# 4.0 in a Nutshell

I liked C# 3.0 in a Nutshell, and I feel honoured that Joe asked me to be a tech reviewer for the next edition, which promises to be even better. There’s not a lot more I can say about it at the moment, other than it’ll be out in 2010 – and I still feel that C# in Depth is a good companion book.

MoreLINQ now at 1.0 beta

A while ago I started the MoreLINQ project, and it gained some developers with more time than I’ve got available :) Basically the idea is to add some more useful LINQ extension methods to LINQ to Objects. Thanks to Atif Aziz, the first beta version has been released. This doesn’t mean we’re “done” though – just that we think we’ve got something useful. Any suggestions for other operators would be welcome.
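
To give a flavour of the operators involved, DistinctBy is one of them – the implementation below is a quick sketch of the idea rather than necessarily the library’s actual code:

using System;
using System.Collections.Generic;

public static class MoreEnumerable
{
    // Returns the first element from the source for each distinct key.
    public static IEnumerable<TSource> DistinctBy<TSource, TKey>(
        this IEnumerable<TSource> source,
        Func<TSource, TKey> keySelector)
    {
        HashSet<TKey> seenKeys = new HashSet<TKey>();
        foreach (TSource element in source)
        {
            if (seenKeys.Add(keySelector(element)))
            {
                yield return element;
            }
        }
    }
}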

Manning Pop Quiz and discounts

While I’m plugging books etc, it’s worth mentioning the Manning Pop Quiz – multiple choice questions on a wide variety of topics. Fabulous prizes available, as well as one-day discounts:

  • Monday, Sept 7th: 50% off all print books (code: pop0907)
  • Monday, Sept 14th: 50% off all ebooks (code: pop0914)
  • Thursday, Sept 17th: $25 for C# in Depth, 2nd Edition MEAP print version (code: pop0917) + C# Pop Quiz question
  • Monday, Sept 21st: 50% off all books (code: pop0921)
  • Thursday, Sept 24th: $12 for C# in Depth, 2nd Edition MEAP ebook (code: pop0924) + another C# Pop Quiz question

Future speaking engagements

On September 16th I’m going to be speaking to Edge UG (formerly Vista Squad) in London about Code Contracts and Parallel Extensions. I’m already very much looking forward to the Stack Overflow DevDays London conference on October 28th, at which I’ll be talking about how humanity has screwed up computing.

Future potential blog posts

Some day I may get round to writing about:

  • Revisiting StaticRandom with ThreadLocal<T> (a rough sketch follows this list)
  • Volatile doesn’t mean what I thought it did
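
For the first of those, here’s the rough shape I have in mind – a minimal sketch using .NET 4’s ThreadLocal<T>; the class name and seeding strategy are just for illustration:

using System;
using System.Threading;

public static class StaticRandom
{
    private static int seed = Environment.TickCount;

    // Each thread lazily gets its own Random instance, so no locking
    // is needed on each call, and the seeds are kept distinct.
    private static readonly ThreadLocal<Random> threadRandom =
        new ThreadLocal<Random>(() => new Random(Interlocked.Increment(ref seed)));

    public static int Next()
    {
        return threadRandom.Value.Next();
    }
}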

There’s a lot more writing than coding in that list… I’d like to spend some more time on MiniBench at some point, but you know what deadlines are like.

Anyway, that’s what I’ve been up to and what I’ll be doing for a little while…

The “dream book” for C# and .NET

This morning I showed my hand a little on Twitter. I’ve had a dream for a long time about the ultimate C# book. It’s a dream based on Effective Java (my favourite Java book) and on my experiences of writing C# in Depth.

Effective Java is written by Josh Bloch, who is an absolute giant in the Java world… and that’s both the problem and the opportunity. There’s no-one of quite the equivalent stature in the .NET world. Instead, there are many very smart people, a lot of whom blog and some of whom have their own books.

There are "best practices" books, of course: Microsoft’s own Framework Design Guidelines, and Bill Wagner’s Effective C# and More Effective C# being the most obvious examples. I’m in no way trying to knock these books, but I feel we could do even better. The Framework Design Guidelines (also available free to browse on MSDN) are really about how to create a good API – which is important, but not the be-all-and-end-all for many application developers who aren’t trying to ship a reusable class library and may well have different concerns. They want to know how to use the language most effectively, as well as the core types within the framework.

Bill’s books – and many others which cover the core framework, such as CLR via C#, Accelerated C# 2008 and C# 3.0 in a Nutshell – give plenty of advice, but often I’ve felt it’s a little one-sided. Each of these books is the work of a single person (or brothers, in the case of Nutshell). Reading them, I’ve often wanted to present a different point of view – or alternatively, to give a hearty "hear, hear." I believe that a book giving guidance would benefit greatly from being more of a conversation: where the authors all agree on something, that’s great; where they differ, it would be good to hear about the pros and cons of the various approaches. The reader can then weigh up those factors as they apply to each particular real-world scenario.

Scope

So what would such a book contain? Opinions will vary of course, but I would like to see:

  • Effective ways of using language features such as lambda expressions, generic type inference (and indeed generics in general), optional parameters, named arguments and extension methods. Assume that the reader knows roughly what C# does, but give some extra details around things like iterator blocks and anonymous functions.
  • Guidance around class design (in a similar fashion to the FDG, but with more input from others in the community)
  • Core framework topics (again, assume the basics are understood):
    • Resource management (disposal etc)
    • Exceptions
    • Collections (including LINQ fundamentals)
    • Streams
    • Text (including internationalization)
    • Numeric types
    • Time-related APIs
    • Concurrency
    • Contracts
    • AppDomains
    • Security
    • Performance

I would prefer to avoid anything around the periphery of .NET (WPF, WinForms, ASP.NET, WCF) – I believe those topics are better handled in separate books.

Obstacles and format

There’s one big problem with this idea, but I think it may be a saving grace too. Many of the leading authors work for different publishers. Clearly no single publisher is going to attract all the best minds in the C# and .NET world. So how could this work in practice? Well…

Imagine a web site for the book, paid for jointly by all interested publishers. The web site would be the foremost delivery mechanism for the content, both to browse and probably to download in formats appropriate for offline reading (PDF etc). The content would obviously be edited collaboratively, but exactly how that would work is a detail to be thrashed out. If you’ve read the annotated C# or CLI specifications, they have about the right feel – opinions can be attributed in places, but not everything has a label.

Any contributing publisher could also take the material and publish it as hard copy if they so wished. Quite how this would work – with potentially multiple hard copy editions of the same content – would be interesting to see. There’s another argument against hard copy ever appearing, though: print is frozen, and I’d like to see this work evolve as new features appear and as more best practices are discovered. Publishers could monetize the web site via adverts, possibly in proportion to how much they’re kicking into the site.

I don’t know how the authors would get paid, admittedly, and that’s another problem. Would this cannibalize the sales of the books listed earlier? It wouldn’t make them redundant – certainly not for the Nutshell type of book, which teaches the basics as well as giving guidance. It would hit Effective C# harder, I suspect – and I apologise to Bill Wagner in advance; if this ever takes off and it hurts his bottom line, I’m very sorry – I think it’s in a good cause though.

Dream Team

So who would contribute to this? Part of me would like to say "anyone and everyone" in a Wikipedia kind of approach – but I think that practically, it makes sense for industry experts to take their places. (A good feedback/comments mechanism for anyone to use would be crucial, however.) Here’s a list which isn’t meant to be exhaustive, but would make me happy – please don’t take offence if your name isn’t on here but should be, and I wouldn’t expect all of these people to be interested anyway.

  • Anders Hejlsberg
  • Eric Lippert
  • Mads Torgersen
  • Don Box
  • Brad Abrams
  • Krzysztof Cwalina
  • Joe Duffy
  • Vance Morrison
  • Rico Mariani
  • Erik Meijer
  • Don Syme
  • Wes Dyer
  • Jeff Richter
  • Joe and Ben Albahari
  • Andrew Troelsen
  • Bill Wagner
  • Trey Nash
  • Mark Michaelis
  • Jon Skeet (yeah, I want to contribute if I can)

I imagine "principal" authors for specific topics (e.g. Joe Duffy for concurrency) but with all the authors dropping in comments in other places too.

Dream or reality?

I have no idea whether this will ever happen or not. I’d dearly love it to, and I’ve spoken to a few people before today who’ve been encouraging about the idea. I haven’t been putting any work into getting it off the ground – don’t worry, it’s not been delaying the second edition of C# in Depth. One day though, one day…

Am I being hopelessly naïve to even consider such a venture? Is the scope too broad? Is the content valuable but not money-making? We’ll see.

Tricky decisions… Code Contracts and Parallel Extensions in C# in Depth 2nd edition

I’d like some feedback from readers, and I suspect my blog is the simplest way to get it.

I’m currently writing chapter 15 of C# in Depth, tentatively about Code Contracts and Parallel Extensions. The problem is that I’m 15 pages in, and I haven’t finished Code Contracts yet. I suspect a typesetter moving the listings around could shorten it somewhat, but I’m still concerned. With the amount I’ve still got to write, Code Contracts is going to end up at 20 pages, and I expect Parallel Extensions may be 25. That makes for a pretty monstrous chapter for non-language features.

I’d like to present a few options:

  1. Keep going as I am, and take the hit of having a big chapter. I’m not going into huge amounts of detail anyway; the bigger point is to demonstrate how code isn’t what it used to be. We’re no longer writing a simple series of statements to be executed in order. Code Contracts changes this dramatically with the binary rewriter, and Parallel Extensions introduces parallelism while, ironically, making it easier to write asynchronous code as if it were executed sequentially.
  2. Try to whittle the material down to my original target of around 35 pages. This would mean a really cursory glance at each of the technologies – I’m unsure how useful it would be at that point.
  3. Don’t even claim to give enough information to really get people going with the new technologies, but possibly introduce extra ones as well, such as PostSharp. Build the theme of "you’re not writing C# 1 any more" in a stronger sense – zoom back to show the bigger picture while ignoring the details.
  4. Separate them into different chapters. At this point half the new chapters would be non-language features, which isn’t great for the focus of the book… but at least they’d be a more reasonable size.
  5. Ditch the chapters from the book completely, possibly writing them as separate chapters to be available as a mini-ebook companion to the book. (We could possibly include them in the ebook version.) This would make the second edition more focused again and possibly give me a bit more space when revising earlier chapters. However, it does mean there’d only be two full-size new chapters for the second edition. (There’ll be a new "wrapping up" chapter as well for a sense of closure, but I’m not generally counting that.)

Other suggestions are welcome, of course. I’m not going to claim that we’ll end up doing whatever is suggested here, but I’m sure that popular opinion will influence the final decision.

Thoughts?

Evil Code of the Day: variance and overloading

(Note that this kind of breakage was mentioned a long time ago in Eric Lippert’s blog, although not in this exact form.)

Whenever a conversion becomes available where it wasn’t before, overload resolution can change its behaviour. From C# 1 to C# 2 this happened due to delegate variance with method group conversions – now the same thing is true for generic variance for interfaces.
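
As a reminder of the C# 1 to C# 2 case, here’s a sketch of my own (not taken from Eric’s post) of how the method group conversion change could flip overload resolution in just the same way:

using System;

delegate void StringHandler(string s);
delegate void ObjectHandler(object o);

class BaseSubscriber
{
    public void Add(ObjectHandler handler)
    {
        Console.WriteLine("ObjectHandler");
    }
}

class DerivedSubscriber : BaseSubscriber
{
    public void Add(StringHandler handler)
    {
        Console.WriteLine("StringHandler");
    }
}

class Subscriptions
{
    static void HandleObject(object o) {}

    static void Main()
    {
        // C# 1: HandleObject only converts to ObjectHandler, so the base
        // overload is used. C# 2: the contravariant method group conversion
        // to StringHandler exists too, so the derived overload wins.
        new DerivedSubscriber().Add(HandleObject);
    }
}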

What does the following code print?

using System;
using System.Collections.Generic;

class Base
{
    public void Foo(IEnumerable<string> strings)
    {
        Console.WriteLine("Strings");
    }
}

class Derived : Base
{
    public void Foo(IEnumerable<object> objects)
    {
        Console.WriteLine("Objects");
    }
}

class Test
{
    static void Main()
    {
        List<string> strings = new List<string>();
        new Derived().Foo(strings);
    }
}

The correct answer is “it depends on which version of C# and .NET framework you’re using.”

If you’re using C# 4.0 and .NET 4.0, then IEnumerable<T> is covariant: there’s an implicit conversion from IEnumerable<string> to IEnumerable<object>, so the derived overload is used.

If you’re using C# 4.0 but .NET 3.5 or earlier then the compiler still knows about variance in general, but the interface in the framework doesn’t have the appropriate metadata to indicate it, so there’s no conversion available, and the base class overload is used.

If you’re using C# 3.0 or earlier then the compiler doesn’t know about generic variance at all, so again the base class overload is used.
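
The difference comes down to a single modifier: in .NET 4 the interface is declared with out on its type parameter, and that modifier is what introduces the implicit reference conversion:

// The .NET 4 declaration (in System.Collections.Generic):
public interface IEnumerable<out T> : IEnumerable
{
    IEnumerator<T> GetEnumerator();
}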

So, this is a breaking change, and a fairly subtle one at that – and unlike the method group conversion in .NET 2.0, the compiler in .NET 4.0 beta 1 doesn’t issue a warning about it. I’ll edit this post when there’s an appropriate Connect ticket about it…

In general though, I’d say it’s worth avoiding overloading a method declared in a base class unless you really have to. In particular, overloading it using the same number of parameters but more general ones seems to be a recipe for unreadable code.
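
If you’re calling into a hierarchy like this and want to shield yourself from the change, you can pin the resolution explicitly. A small sketch:

// Casting the receiver to Base means only Base.Foo is considered, so this
// prints "Strings" whichever compiler and framework version you use.
// (Foo isn't virtual in this example, so there's no polymorphism involved.)
Base b = new Derived();
b.Foo(strings);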

Non-review: The Data Access Handbook by John Goodson and Robert A. Steward

A while ago I agreed to write a review of this book (which the publisher sent me a free copy of) but I haven’t had time to read it fully yet. I’ve been skimming through the first couple of chapters though, and it’s pretty interesting. I’ll post a full review when I have more time (along with reviews of CLR via C# and a bunch of other books) but I thought it would be at least worth mentioning the book in advance.

It’s really a performance book – as far as I can tell that’s its sole purpose (and I’m lumping scalability in with performance), which is fine. It covers some generalities and then splits by client technology (ODBC, JDBC, .NET) for the middle section. The final chapters are general-purpose again.

I’m loath to say much more about it yet, having only read a small amount – but I’ll definitely be reading the rest. It’s unlikely to be particularly useful in my current job, but you never know – one day I may be talking to a regular SQL database again :)

Faking COM to fool the C# compiler

C# 4 has some great features to make programming against COM components bearable – fun and exciting, even. In particular:

  • PIA linking allows you to embed just the relevant bits of the Primary Interop Assembly into your own assembly, so the PIA isn’t actually required at execution time
  • Named arguments and optional parameters make life much simpler for APIs like Office which are full of methods with gazillions of parameters
  • "ref" removal allows you to pass an argument by value even though the parameter is a by-reference parameter (COM only, folks – don’t worry!)
  • Dynamic typing allows you to remove a load of casts by converting every parameter and return type of "object" into "dynamic" (if you’re using PIA linking)

I’m currently writing about these features for the book (don’t forget to buy it cheap on Friday) but I’m not really a COM person. I want to be able to see these compiler features at work against a really simple type. Unfortunately, these really are COM-specific features… so we’re going to have to persuade the compiler that the type really is a COM type.

I got slightly stuck on this at first, but thanks to the power of Stack Overflow, I now have a reasonably complete demo "fake" COM type. It doesn’t do a lot, and in particular it doesn’t have any events, but it’s enough to show the compiler features:

using System;
using System.Runtime.InteropServices;

// Required for linking into another assembly (C# 4)
[assembly:Guid("86ca55e4-9d4b-462b-8ec8-b62e993aeb64")]
[assembly:ImportedFromTypeLib("fake.tlb")]

namespace FakeCom
{
    [Guid("c3cb8098-0b8f-4a9a-9772-788d340d6ae0")]
    [ComImport, CoClass(typeof(FakeImpl))]
    public interface FakeComponent
    {
        object MakeMeDynamic(object arg);
        
        void Foo([Optional] ref int x,
                 [Optional] ref string y);
    }
 
    [Guid("734e6105-a20f-4748-a7de-2c83d7e91b04")]
    public class FakeImpl {}
}

We have an interface representing our COM type, and a class which the interface claims will implement it. Fortunately the compiler doesn’t actually check that, so we can get away with leaving it entirely unimplemented. It’s also worth noting that our optional parameters can be by-reference parameters (which you can’t normally do in C# 4) and we haven’t given them any default values (as those are ignored for COM anyway).

This is compiled just like any other assembly:

csc /target:library FakeCom.cs

Then we get to use it with a test program:

using FakeCom;

class Test
{
    static void Main()
    {
        // Yes, that is calling a "constructor" on an interface
        FakeComponent com = new FakeComponent();
        
        // The boring old fashioned way of calling a method
        int i = 0;
        string j = null;
        com.Foo(ref i, ref j);
        
        // Look ma, no ref!
        com.Foo(10, "Wow!");
        
        // Who cares about parameter ordering?
        com.Foo(y: "Not me", x: 0);

        // And the parameters are optional too
        com.Foo();
        
        // The line below only works when linked rather than
        // referenced, as otherwise you need a cast.
        // The compiler treats it as if it both takes and
        // returns a dynamic value.
        string value = com.MakeMeDynamic(10);
    }
}

This is compiled either in the old "deploy the PIA as well" way (after adding a cast in the last line):

csc /r:FakeCom.dll Test.cs

… or by linking the PIA instead:

csc /l:FakeCom.dll Test.cs

(The difference is just using /l instead of /r.)

When the test code is compiled as a reference, it decompiles in Reflector to this (I’ve added whitespace for clarity):

private static void Main()
{
    FakeComponent component = (FakeComponent) new FakeImpl();

    int x = 0;
    string y = null;
    component.Foo(ref x, ref y);

    int num2 = 10;
    string str3 = "Wow!";
    component.Foo(ref num2, ref str3);

    string str4 = "Not me";
    int num3 = 0;
    component.Foo(ref num3, ref str4);

    int num4 = 0;
    string str5 = null;
    component.Foo(ref num4, ref str5);

    string str2 = (string) component.MakeMeDynamic(10);
}

Note how the compiler has created local variables to pass by reference; any changes to the parameter are ignored when the method returns. (If you actually pass a variable by reference, the compiler won’t take that away, however.)

When the code is linked instead, the middle section is the same, but the construction and the line calling MakeMeDynamic are very different:

private static void Main()
{
    FakeComponent component = (FakeComponent) Activator.CreateInstance(Type.GetTypeFromCLSID
        (new Guid("734E6105-A20F-4748-A7DE-2C83D7E91B04")));

    // Middle bit as before

    if (<Main>o__SiteContainer6.<>p__Site7 == null)
    {
        <Main>o__SiteContainer6.<>p__Site7 = CallSite<Func<CallSite, object, string>>
            .Create(new CSharpConvertBinder
                       (typeof(string), 
                        CSharpConversionKind.ImplicitConversion, false));
    }
    string str2 = <Main>o__SiteContainer6.<>p__Site7.Target.Invoke
        (<Main>o__SiteContainer6.<>p__Site7, component.MakeMeDynamic(10));
}

The interface is embedded in the generated assembly, but with a slightly different set of attributes:

[ComImport, CompilerGenerated]
[Guid("C3CB8098-0B8F-4A9A-9772-788D340D6AE0"), TypeIdentifier]
public interface FakeComponent
{
    object MakeMeDynamic(object arg);
    void Foo([Optional] ref int x, [Optional] ref string y);
}

The class isn’t present at all.

I should point out that doing this has no practical benefit in real code – but the ability to mess around with a pseudo-COM type rather than having to find a real one with the exact members I want will make it a lot easier to try a few corner cases for the book.

So, not a terribly productive evening in terms of getting actual writing done, but interesting nonetheless…

Books going cheap

I’m delighted to say that Manning is having a promotional week, with a one-day discount voucher on each of the books I’ve been working on. The dates and codes are the ones listed in the "Manning Pop Quiz and discounts" section above.

This seems an appropriate time to mention that the first new content from the 2nd edition of C# in Depth became available under MEAP over the weekend. I’m looking forward to getting feedback on it.

I’ll be tweeting the relevant code each morning as well. Go nuts :)

Evil code of the day

At a glance, this code doesn’t look particularly evil. What does it do though? Compile it with the C# 4.0b1 compiler and run it…

using System;

class Base
{
    public virtual void Foo(int x, int y)
    {
        Console.WriteLine("Base: x={0}, y={1}", x, y);
    }
}

class Derived : Base
{
    public override void Foo(int y, int x)
    {
        Console.WriteLine("Derived: x={0}, y={1}", x, y);
    }
}

class PureEvil
{
    static void Main()
    {
        Derived d = new Derived();
        Base b = d;
        
        b.Foo(x: 10, y: 20);
        d.Foo(x: 10, y: 20);
    }
}

The results are:

Derived: x=20, y=10
Derived: x=10, y=20

I’m very nearly tempted to leave it there and just see what the reactions are like, but I’ll at least give you a hint as to where to look – section 21.3 of the C# 4 spec explains why this gives odd results. It does make perfect sense, but it’s hideously evil.

I feel dirty.

Bonus questions

  • What happens if you rename the parameters in Derived.Foo to yy and xx?
  • (As suggested by Mehrdad) What happens if you call it with a dynamic value?

OS Jam at Google London: C# 4 and the DLR

Last night I presented for the first time at the Google Open Source Jam at our offices in London. The room was packed, but very few of the attendees were C# developers. I know that C# isn’t the most popular language on the Open Source scene, but I was still surprised there weren’t more people using C# in their day jobs and hacking on Ruby/Python/etc at night.

All the talks at OSJam are just 5 minutes long, with 2 minutes for questions. I’m really not used to this format, and felt extremely rushed… however, it was still a lot of fun. I used a somewhat different approach to my slides than the normal “bullet points in PowerPoint” – and as it was only short, I thought I might as well effectively repeat the presentation here in digital form. (Apologies if the images are an inconvenient size for you. I tried a few different ones, and this seemed about right. Comments welcome, as I may do a similar thing in the future.)

First slide

Introductory slide. Colleagues forced me to include the askjonskeet.com link.

Second slide

.NET isn’t Open Source. You can debug through a lot of the source code for the framework if you agree to a “reference licence”, but it’s not quite the same thing.

Third slide

.NET isn’t Open Source, but the DLR is. And IronRuby. And IronPython. Yay!

And of course Mono is Open Source: the DLR and Mono play nicely together, and the Mono team is hoping to implement the new C# 4.0 features for the 2.8 release in roughly the same timeframe as Microsoft.

Fourth slide

This is what .NET 4.0 will look like. The DLR will be included in it, despite being open source. IronRuby and IronPython aren’t included, but depend heavily on the DLR. (Currently available versions allow you to use a “standalone” DLR or the one in .NET 4.0b1.)

C# doesn’t really depend on the DLR except for its handling of dynamic. C# is a statically typed language, but C# 4.0 has a new static type called dynamic which you can do just about anything with. (This got a laugh, despite being a simple and mostly accurate summary of the dynamic typing support in C# 4.0.)

Fifth slide

The fundamental point of the DLR is to handle call sites – decide what to do dynamically with little bits of code. Oh, and do it quickly. That’s what the caches are for. They’re really clever – particularly the L0 cache which compiles rules (about the context in which a particular decision is valid) into IL via dynamic methods. Awesome stuff.

I’m sure the DLR does many other snazzy things, but this feels like it’s the core part of it.
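
For the curious, here’s my own sketch of driving a call site by hand against the .NET 4 API. It isn’t how the DLR works internally – it’s the consumer’s view – but it shows the binder/site/cache split:

using System;
using System.Runtime.CompilerServices;
using Microsoft.CSharp.RuntimeBinder;

class CallSiteDemo
{
    static void Main()
    {
        // A call site for the operation "get the Length member of the operand".
        CallSite<Func<CallSite, object, object>> site =
            CallSite<Func<CallSite, object, object>>.Create(
                Binder.GetMember(
                    CSharpBinderFlags.None, "Length", typeof(CallSiteDemo),
                    new[] { CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null) }));

        // First call: the binder works out what to do, and the rule is cached.
        Console.WriteLine(site.Target(site, "hello"));     // 5
        // A call with a different operand type triggers fresh binding;
        // repeated calls with the same type are served from the cache.
        Console.WriteLine(site.Target(site, new int[10])); // 10
    }
}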

Sixth slide

At execution time, the relevant binder is used to work out what a call site should actually do. Unless, that is, the call has a target which implements the shadowy IDynamicMetaObjectProvider interface (winner of “biggest mouthful of a type name” prize, 2009) – in which case, the object is asked to handle the call. Who knows what it will do?

Seventh slide

Beautifully syntax-highlighted C# 4.0 source code showing the dynamic type in action. The method calls on lines 2 and 3 are both dynamic, even though in the latter case it’s just using a static method. Which overload will it pick? It all depends on the type of the actual value at execution time.
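
The slide was an image, so here’s a reconstruction of the kind of code it showed – the names are my invention rather than what was actually on the slide:

using System;

class Demo
{
    static void Describe(int x)    { Console.WriteLine("int overload"); }
    static void Describe(string x) { Console.WriteLine("string overload"); }

    static void Main()
    {
        dynamic value = GetValue();
        Console.WriteLine(value.Length); // bound at execution time
        Describe(value);                 // static method, but the overload
                                         // is still picked at execution time
    }

    static object GetValue() { return "hello"; }
}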

If I’d had more time, I’d have demonstrated how the C# compiler preserves the static type information it knows at compile time for the execution time binder to use. This is very cool, but would take far too long to demonstrate in this talk – especially to a bunch of non-C# developers.

Eighth slide

There were a couple of questions, but I can’t remember them offhand. Someone asked me afterwards about how all this worked on non-.NET implementations (i.e. Mono, basically). I gather the DLR itself works, but I don’t know whether C# code compiled with the MS compiler will work at the moment – it embeds references to binder types in Microsoft.CSharp.dll, and I don’t know what the story is about that being supported on Mono.

This is definitely the format I want to use for future presentations. It’s fun to write, fun to present, and I’m sure the “non-professionalism” of it makes it a lot more interesting to watch. Although it’s slower to create text-like slides (such as the first and the last one) this way, the fact that I don’t need to find clip-art or draw boxes with painful user interfaces is a definite win – especially as I’m going to try to be much more image-biased from now on. (I don’t want people reading slides while I’m talking – they should be listening, otherwise it’s just pointless.)

Dynamic type inference and surprising possibilities

There have been mutterings about the fact that I haven’t been blogging much recently. I’ve been getting down to serious work on the second edition of C# in Depth, and it’s taking a lot of my time. However, I thought I’d share a ghastly little example I’ve just come up with.

I’ve been having an email discussion with Sam Ng, Chris Burrows and Eric Lippert about how dynamic typing works. Sam mentioned that even for dynamically bound calls, type inference can fail at compile time. This can only happen for type parameters where none of the dynamic values contribute to the type inference. For example, this fails to compile:

static void Execute<T>(T item, int value) where T : struct {}

dynamic guid = Guid.NewGuid();
Execute("test", guid);

Whatever the value of guid is at execution time, this can’t possibly manage to infer a valid type argument for T. The only type which can be inferred is string, and that’s not a value type. Fair enough… but what about this one?

static void Execute<T>(T first, T second, T third) where T : struct {}

dynamic guid = Guid.NewGuid();
Execute(10, 0, guid);
Execute(10, false, guid);
Execute("hello", "hello", guid);

I expected the first call to compile (but fail at execution time) and the second and third calls to fail at compile time. After all, T couldn’t be both an int and a bool, could it? And then I remembered implicit conversions… what if the value of guid isn’t actually a Guid, but some struct which has an implicit conversion from int, bool and string? In other words, what if the full code actually looked like this:

using System;

public struct Foo
{
    public static implicit operator Foo(int x)
    {
        return new Foo();
    }

    public static implicit operator Foo(bool x)
    {
        return new Foo();
    }

    public static implicit operator Foo(string x)
    {
        return new Foo();
    }
}

class Test
{
    static void Execute<T>(T first, T second, T third) where T : struct {}

    static void Main()
    {
        dynamic foo = new Foo();
        Execute(10, 0, foo);
        Execute(10, false, foo);
        Execute("hello", "hello", foo);
    }
}

Then T=Foo is a perfectly valid inference. So yes, it all compiles – and the C# binders even get it all right at execution time. So much for any intuition I might have about dynamic typing and inference…

No doubt I’ll have similar posts about new C# 4 features occasionally… but they’re more likely to be explanations of misunderstandings than deep insights into a correct view of the language. Those end up in the book instead :)