Category Archives: C#

MVP no more

It’s with some sadness that I have to announce that as of the start of October, I’m no longer a Microsoft MVP.

As renewal time came round again, I asked my employer whether it was okay for me to renew, and was advised not to do so. As a result, much as I’ve enjoyed being an MVP, I’ve asked not to be considered for renewal this year.

This doesn’t mean I’m turning my back on that side of software development, of course. I’m still going to be an active member of the C# community. I’m still writing the second edition of C# in Depth. I’m still going to post on Stack Overflow. I’m still going to blog here about whatever interesting and wacky topics crop up.

I just won’t be doing so as an MVP.

Thanks to all the friends I’ve made in the MVP community and at Microsoft over the last six years – I wish you all the best.

Keep in touch.

An object lesson in blogging and accuracy; was: Efficient “vote counting” with LINQ to Objects – and the value of nothing

Well, this is embarrassing.

Yesterday evening, I excitedly wrote a blog post about an interesting little idea for making a particular type of LINQ query (basically vote counting) efficient. It was an idea that had occurred to me a few months back, but I hadn’t got round to blogging about it.

The basic idea was to take a completely empty struct, and use that as the element type in the results of a grouping query – as the struct was empty, it would take no space, therefore "huge" arrays could be created for no cost beyond the fixed array overhead, etc. I carefully checked that the type used for grouping did in fact implement ICollection<T> so that the Count method would be efficient; I wrote sample code which made sure my queries were valid… but I failed to check that the empty struct really took up no memory.
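For reference, the core of the idea looked something like this – a minimal sketch rather than the original post’s code, with made-up vote data:

using System;
using System.Linq;

// A completely empty struct – which I'd assumed would take up no space.
struct Nothing {}

class VoteCounting
{
    static void Main()
    {
        string[] votes = { "Alice", "Bob", "Alice", "Alice", "Bob" };

        // Group by candidate, projecting each vote to the "free" empty struct,
        // then count the elements in each group.
        var counts = votes.GroupBy(vote => vote, vote => new Nothing())
                          .Select(g => new { Candidate = g.Key, Votes = g.Count() });

        foreach (var result in counts)
        {
            Console.WriteLine("{0}: {1}", result.Candidate, result.Votes);
        }
    }
}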

Fortunately, I have smart readers, a number of whom pointed out my mistake in very kind terms.

Ben Voigt gave the reason for the size being 1 in a comment:

The object identity rules require a unique address for each instance… identity can be shared with super- or sub- class objects (Empty Base Optimization) but the total size of the instance has to be at least 1.

This makes perfect sense – it’s just a shame I didn’t realise it before.
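Verifying it is easy enough, too – here’s the check I should have performed (compile with /unsafe, as sizeof on a custom struct needs an unsafe context):

using System;

struct Empty {}

class SizeCheck
{
    unsafe static void Main()
    {
        // Prints 1, not 0 – each instance needs its own address.
        Console.WriteLine(sizeof(Empty));
    }
}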

Live and learn, I guess – but apologies for the poorly researched post. I’ll attempt to be more careful next time.

API design: choosing between non-ideal options

So, UnconstrainedMelody is coming on quite nicely. It now has quite a few useful options for flags enums, "normal enums" and delegates. However, two conflicting limitations of C# leave me with a few API options to choose between. (Related answers on Stack Overflow have suggested alternative approaches, basically.)

Currently, most of the enums code is in two classes: Flags and Enums. Both are non-generic: the methods within them are generic methods, so they have type parameters (and constraints). The main benefit of this is type inference, which only applies to generic methods – and I definitely want it for extension methods and anywhere else it makes sense.

The drawback is that properties can’t be generic, so my API is entirely expressed in terms of methods – which can be a pain. The way to work around this is to put the properties in a generic type. That adds confusion and guesswork: which call lives where?

To recap, the options are:

// Option 1 (current): all methods in a nongeneric class:
// Some calls which are logically properties end up
// as methods…
IList<Foo> foos = Enums.GetValues<Foo>();
// Type inference for extension methods
// Note that we couldn’t have a Description property
// as we don’t have extension properties
string firstDescription = foos[0].GetDescription();
        
// Option 2: Use just a generic type:
// Now we can use a property…
IList<Foo> foos = Enums<Foo>.Values;
// But we can’t use type inference
string firstDescription = Enums<Foo>.GetDescription(foos[0]);
        
// Option 3: Use a mixture (Enums and Enums<T>):
IList<Foo> foos = Enums<Foo>.Values;
// All looks good…
string firstDescription = foos[0].GetDescription();
// … but the user has to know when to use which class

All of these are somewhat annoying. If we only put extension methods into the nongeneric class, then I guess users would never need to really think about that – they’d pretty much always be calling the methods via the extension method syntactic sugar anyway. It still feels like a pretty arbitrary split though.

Any thoughts? Which is more important – conceptual complexity, or the idiomatic client code you end up with once that complexity has been mastered? Is it reasonable to make design decisions like this around what is essentially a single piece of syntactic sugar (extension methods)?

(By the way, if anyone ever wanted justification for extension properties, I think this is a good example… Description feels like it really should be a property.)
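For concreteness, here’s roughly what the GetDescription extension method could look like – my sketch assuming a DescriptionAttribute-based lookup, not the actual UnconstrainedMelody code:

using System;
using System.ComponentModel;

public static class EnumExtensions
{
    // Sketch only: find [Description] on the enum field, falling back to the name.
    // A real implementation would also verify that T is actually an enum type.
    public static string GetDescription<T>(this T value) where T : struct
    {
        var field = typeof(T).GetField(value.ToString());
        var attribute = (DescriptionAttribute) Attribute.GetCustomAttribute(
            field, typeof(DescriptionAttribute));
        return attribute == null ? value.ToString() : attribute.Description;
    }
}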

Generic constraints for enums and delegates

As most readers probably know, C# prohibits generic type constraints from referring to System.Object, System.Enum, System.Array, System.Delegate and System.ValueType. In other words, this method declaration is illegal:

public static T[] GetValues<T>() where T : struct, System.Enum
{
    return (T[]) Enum.GetValues(typeof(T));
}

This is a pity, as such a method could be useful. (In fact there are better things we can do… such as returning a read-only collection. That way we don’t have to create a new array each time the method is called.) As far as I can tell, there is no reason why this should be prohibited. Eric Lippert has stated that he believes the CLR doesn’t support this – but I think he’s wrong.

I can’t remember the last time I had cause to believe Eric to be wrong about something, and I’m somewhat nervous of even mentioning it, but section 10.1.7 of the CLI spec (ECMA-335) partition II (p40) specifically gives examples of type parameter constraints involving System.Delegate and System.Enum. It introduces the table with "The following table shows the valid combinations of type and special constraints for a representative set of types." It was only due to reading this table that I realised that the value type constraint in the method above is required (a constructor constraint would do equally well) – otherwise System.Enum itself would satisfy the constraint, which would be a Bad Thing.

It’s possible (but unlikely) that the CLR doesn’t fully implement this part of the CLI spec. I’m hoping that Eric’s just wrong on this occasion, and that actually there’s nothing to stop the C# language from allowing such constraints in the future. (It would be nice to get keyword support, such that a constraint of "T : enum" would be equivalent to the above, but hey…)

The good news is that ilasm/ildasm have no problem with this. The better news is that if you add a reference to a library which uses those constraints, the C# compiler applies them sensibly, as far as I can tell…
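As a concrete example, suppose a library exposed the GetValues method shown earlier (with its IL-authored constraint) from a class called EnumUtil – the names here are hypothetical. Consuming it from C# works exactly as you’d hope:

// EnumUtil is a made-up library class carrying the IL-level
// "where T : struct, System.Enum" constraint on GetValues<T>.
DayOfWeek[] days = EnumUtil.GetValues<DayOfWeek>();   // Compiles fine
// string[] oops = EnumUtil.GetValues<string>();      // Constraint violation: compile-time error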

Introducing UnconstrainedMelody

(Okay, the name will almost surely have to change. But I like the idea of it removing the constraints of C# around which constraints are valid… and yet still being in the key of C#. Better suggestions welcome.)

I have a plan – I want to write a utility library which does useful things for enums and delegates (and arrays if I can think of anything sensible to do with them). It will be written in C#, with methods like this:

public static T[] GetValues<T>() where T : struct, IEnumConstraint
{
    return (T[]) Enum.GetValues(typeof(T));
}

(IEnumConstraint has to be an interface of course, as otherwise the constraint would be invalid.)
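The placeholder itself would be nothing more than an empty marker interface – a sketch:

namespace UnconstrainedMelody
{
    // Empty marker interface: it exists purely so the constraint can be
    // replaced with System.Enum in the post-build rewriting step below.
    public interface IEnumConstraint {}
}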

As a post-build step, I will:

  • Run ildasm on the resulting binary
  • Replace every constraint using IEnumConstraint with System.Enum
  • Run ilasm to build the binary again

If anyone has a simple binary rewriter (I’ve looked at PostSharp and CCI; both look way more complicated than the above) which would do this, that would be great. Otherwise ildasm/ilasm will be fine. It’s not like consumers will need to perform this step.
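For what it’s worth, the find-and-replace step could be as simple as this sketch – file names made up, and the IL constraint text approximated from memory:

using System.IO;

class ConstraintRewriter
{
    static void Main()
    {
        // Assumes "ildasm /out=um.il UnconstrainedMelody.dll" has already run.
        string il = File.ReadAllText("um.il");
        il = il.Replace("class UnconstrainedMelody.IEnumConstraint",
                        "class [mscorlib]System.Enum");
        File.WriteAllText("um.il", il);
        // "ilasm /dll um.il" then rebuilds the binary with the real constraint.
    }
}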

As soon as the name is finalized I’ll add a project on Google Code. Once the infrastructure is in place, adding utility methods should be very straightforward. Suggestions for utility methods would be useful, or just join the project when it’s up and running.

Am I being silly? Have I overlooked something?

A couple of hours later…

Okay, I decided not to wait for a better name. The first cut – which does basically nothing but validate the idea, and the fact that I can still unit test it – is in. The UnconstrainedMelody Google Code project is live!

Recent activities

It’s been a little while since I’ve blogged, and quite a lot has been going on. In fact, there are a few things I’d have blogged about already if it weren’t for “things” getting in the way.

Rather than writing a whole series of very short blog posts, I thought I’d wrap them all up here…

C# in Depth: next MEAP drop available soon – Code Contracts

Thanks to everyone who gave feedback on my writing dilemma. For the moment, the plan is to have a whole chapter about Code Contracts, but not include a chapter about Parallel Extensions. My reasoning is that Code Contracts really changes the feel of the code, making it almost like a language feature – and it’s applicable almost everywhere, unlike PFX.
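For anyone who hasn’t tried Code Contracts, here’s a tiny sketch of mine (not from the chapter) showing the sort of thing I mean – preconditions and postconditions expressed in ordinary code, enforced by the binary rewriter:

using System.Diagnostics.Contracts;

public static class TextUtil
{
    public static int CountOccurrences(string text, char target)
    {
        Contract.Requires(text != null);
        Contract.Ensures(Contract.Result<int>() >= 0);

        int count = 0;
        foreach (char c in text)
        {
            if (c == target)
            {
                count++;
            }
        }
        return count;
    }
}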

I may write a PFX chapter as a separate download, but I’m sensitive to those who (like me) appreciate slim books. I don’t want to “bulk out” the book with extra topics.

The Code Contracts chapter is in the final stages before becoming available to MEAP subscribers. (It’s been “nearly ready” for a couple of weeks, but I’ve been on holiday, amongst other things.) After that, I’m going back to the existing chapters and revising them.

Talking in Dublin – C# 4 and Parallel Extensions

Last week I gave two talks in Dublin at Epicenter. One was on C# 4, and the other on Code Contracts and Parallel Extensions. Both are now available in a slightly odd form on the Talks page of the C# in Depth web site. I no longer write “formal” PowerPoint slides, so the downloads are for simple bullet points of text, along with silly hand-drawn slides. No code yet – I want to tidy it up a bit before including it.

Podcasting with The Connected Show

I recently recorded a podcast episode with The Connected Show. I’m on for roughly the final two thirds of the show – about an hour of me blathering on about the new features of C# 4. If you can understand generic variance just by listening to me talking about it, you’re a smart cookie ;)

(Oh, and if you like it, please express your amusement on Digg / DZone / Shout / Kicks.)

Finishing up with Functional Programming for the Real World

Well, this hasn’t been taking much of my time recently (I bowed out of all the indexing etc!) but Functional Programming for the Real World is nearly ready to go. Hard copy should be available in the next couple of months… it’ll be really nice to see how it fares. Much kudos to Tomas for all his hard work – I’ve really just been helping out a little.

Starting on Groovy in Action, 2nd edition

No sooner does one book finish than another one starts. The second edition of Groovy in Action is in the works, which should prove interesting. To be honest, I haven’t played with Groovy much since the first edition of the book was finished, so it’ll be interesting to see what’s happened to the language in the meantime. I’ll be applying the same sort of spit and polish that I did in the first edition, and asking appropriately ignorant questions of the other authors.

Tech Reviewing C# 4.0 in a Nutshell

I liked C# 3.0 in a Nutshell, and I feel honoured that Joe asked me to be a tech reviewer for the next edition, which promises to be even better. There’s not a lot more I can say about it at the moment, other than it’ll be out in 2010 – and I still feel that C# in Depth is a good companion book.

MoreLINQ now at 1.0 beta

A while ago I started the MoreLINQ project, and it gained some developers with more time than I’ve got available :) Basically the idea is to add some more useful LINQ extension methods to LINQ to Objects. Thanks to Atif Aziz, the first beta version has been released. This doesn’t mean we’re “done” though – just that we think we’ve got something useful. Any suggestions for other operators would be welcome.

Manning Pop Quiz and discounts

While I’m plugging books etc, it’s worth mentioning the Manning Pop Quiz – multiple choice questions on a wide variety of topics. Fabulous prizes available, as well as one-day discounts:

  • Monday, Sept 7th: 50% off all print books (code: pop0907)
  • Monday, Sept 14th: 50% off all ebooks (code: pop0914)
  • Thursday, Sept 17th: $25 for C# in Depth, 2nd Edition MEAP print version (code: pop0917) + C# Pop Quiz question
  • Monday, Sept 21st: 50% off all books (code: pop0921)
  • Thursday, Sept 24th: $12 for C# in Depth, 2nd Edition MEAP ebook (code: pop0924) + another C# Pop Quiz question

Future speaking engagements

On September 16th I’m going to be speaking to Edge UG (formerly Vista Squad) in London about Code Contracts and Parallel Extensions. I’m already very much looking forward to the Stack Overflow DevDays London conference on October 28th, at which I’ll be talking about how humanity has screwed up computing.

Future potential blog posts

Some day I may get round to writing about:

  • Revisiting StaticRandom with ThreadLocal<T>
  • Volatile doesn’t mean what I thought it did

There’s a lot more writing than coding in that list… I’d like to spend some more time on MiniBench at some point, but you know what deadlines are like.

Anyway, that’s what I’ve been up to and what I’ll be doing for a little while…

The “dream book” for C# and .NET

This morning I showed my hand a little on Twitter. I’ve had a dream for a long time about the ultimate C# book. It’s a dream based on Effective Java, which is my favourite Java book, along with my experiences of writing C# in Depth.

Effective Java is written by Josh Bloch, who is an absolute giant in the Java world… and that’s both the problem and the opportunity. There’s no-one of quite the equivalent stature in the .NET world. Instead, there are many very smart people, a lot of whom blog and some of whom have their own books.

There are "best practices" books, of course: Microsoft’s own Framework Design Guidelines, and Bill Wagner’s Effective C# and More Effective C# being the most obvious examples. I’m in no way trying to knock these books, but I feel we could do even better. The Framework Design Guidelines (also available free to browse on MSDN) are really about how to create a good API – which is important, but not the be-all-and-end-all for many application developers who aren’t trying to ship a reusable class library and may well have different concerns. They want to know how to use the language most effectively, as well as the core types within the framework.

Bill’s books – and many others which cover the core framework, such as CLR via C#, Accelerated C# 2008 and C# 3.0 in a Nutshell – give plenty of advice, but often I’ve felt it’s a little one-sided. Each of these books is the work of a single person (or brothers, in the case of Nutshell). Reading them, I’ve often wanted to present a different point of view – or alternatively, to give a hearty "hear, hear". I believe that a book giving guidance would benefit greatly from being more of a conversation: where the authors all agree on something, that’s great; where they differ, it would be good to hear about the pros and cons of the various approaches. The reader can then weigh up those factors as they apply to each particular real-world scenario.

Scope

So what would such a book contain? Opinions will vary of course, but I would like to see:

  • Effective ways of using language features such as lambda expressions, generic type inference (and indeed generics in general), optional parameters, named arguments and extension methods. Assume that the reader knows roughly what C# does, but give some extra details around things like iterator blocks and anonymous functions.
  • Guidance around class design (in a similar fashion to the FDG, but with more input from others in the community)
  • Core framework topics (again, assume the basics are understood):
    • Resource management (disposal etc)
    • Exceptions
    • Collections (including LINQ fundamentals)
    • Streams
    • Text (including internationalization)
    • Numeric types
    • Time-related APIs
    • Concurrency
    • Contracts
    • AppDomains
    • Security
    • Performance

I would prefer to avoid anything around the periphery of .NET (WPF, WinForms, ASP.NET, WCF) – I believe those are better handled in different topics.

Obstacles and format

There’s one big problem with this idea, but I think it may be a saving grace too. Many of the leading authors work for different publishers. Clearly no single publisher is going to attract all the best minds in the C# and .NET world. So how could this work in practice? Well…

Imagine a web site for the book, paid for jointly by all interested publishers. The web site would be the foremost delivery mechanism for the content, both to browse and probably to download in formats appropriate for offline reading (PDF etc). The content would be edited in a collaborative style obviously, but exactly how that would work is a detail to be thrashed out. If you’ve read the annotated C# or CLI specifications, they have about the right feel – opinions can be attributed in places, but not everything has a label.

Any contributing publisher could also take the material and publish it as hard copy if they so wished. Quite how this would work – with potentially multiple hard copy editions of the same content – would be interesting to see. There’s another reason against hard copy ever appearing though, which is that it would be frozen in time. I’d like to see this work evolve as new features appear and as more best practices are discovered. Publishers could monetize the web site via adverts, possibly according to how much they’re kicking into the site.

I don’t know how the authors would get paid, admittedly, and that’s another problem. Would this cannibalize the sales of the books listed earlier? It wouldn’t make them redundant – certainly not for the Nutshell type of book, which teaches the basics as well as giving guidance. It would hit Effective C# harder, I suspect – and I apologise to Bill Wagner in advance; if this ever takes off and it hurts his bottom line, I’m very sorry – I think it’s in a good cause though.

Dream Team

So who would contribute to this? Part of me would like to say "anyone and everyone" in a Wikipedia kind of approach – but I think that practically, it makes sense for industry experts to take their places. (A good feedback/comments mechanism for anyone to use would be crucial, however.) Here’s a list which isn’t meant to be exhaustive, but would make me happy – please don’t take offence if your name isn’t on here but should be, and I wouldn’t expect all of these people to be interested anyway.

  • Anders Hejlsberg
  • Eric Lippert
  • Mads Torgersen
  • Don Box
  • Brad Abrams
  • Krzysztof Cwalina
  • Joe Duffy
  • Vance Morrison
  • Rico Mariani
  • Erik Meijer
  • Don Syme
  • Wes Dyer
  • Jeff Richter
  • Joe and Ben Albahari
  • Andrew Troelsen
  • Bill Wagner
  • Trey Nash
  • Mark Michaelis
  • Jon Skeet (yeah, I want to contribute if I can)

I imagine "principal" authors for specific topics (e.g. Joe Duffy for concurrency) but with all the authors dropping in comments in other places too.

Dream or reality?

I have no idea whether this will ever happen or not. I’d dearly love it to, and I’ve spoken to a few people before today who’ve been encouraging about the idea. I haven’t been putting any work into getting it off the ground – don’t worry, it’s not been delaying the second edition of C# in Depth. One day though, one day…

Am I being hopelessly naïve to even consider such a venture? Is the scope too broad? Is the content valuable but not money-making? We’ll see.

Tricky decisions… Code Contracts and Parallel Extensions in C# in Depth 2nd edition

I’d like some feedback from readers, and I suspect my blog is the simplest way to get it.

I’m currently writing chapter 15 of C# in Depth, tentatively about Code Contracts and Parallel Extensions. The problem is that I’m 15 pages in, and I haven’t finished Code Contracts yet. I suspect a typesetter moving the listings around could shorten it somewhat, but I’m still concerned. With the amount I’ve still got to write, Code Contracts is going to end up at 20 pages, and I expect Parallel Extensions may be 25. That makes for a pretty monstrous chapter for non-language features.

I’d like to present a few options:

  1. Keep going as I am, and take the hit of having a big chapter. I’m not going into huge amounts of detail anyway, but the bigger point is to demonstrate how code isn’t what it used to be. We’re no longer writing a simple series of statements to be executed in order. Code Contracts changes this dramatically with the binary rewriter, and Parallel Extensions changes the execution model – ironically making it easier to write asynchronous code as if it were executed sequentially.
  2. Try to whittle the material down to my original target of around 35 pages. This means it’ll be a really cursory glance at each of the technologies – I’m unsure of how useful it would be at all at that point.
  3. Don’t even claim to give enough information to really get people going with the new technologies, but possibly introduce extra ones as well, such as PostSharp. Build the theme of "you’re not writing C# 1 any more" in a stronger sense – zoom back to show the bigger picture while ignoring the details.
  4. Separate them into different chapters. At this point half the new chapters would be non-language features, which isn’t great for the focus of the book… but at least they’d be a more reasonable size.
  5. Ditch the chapters from the book completely, possibly writing them as separate chapters to be available as a mini-ebook companion to the book. (We could possibly include them in the ebook version.) This would make the second edition more focused again and possibly give me a bit more space when revising earlier chapters. However, it does mean there’d only be two full-size new chapters for the second edition. (There’ll be a new "wrapping up" chapter as well for a sense of closure, but I’m not generally counting that.)

Other suggestions are welcome, of course. I’m not going to claim that we’ll end up doing whatever is suggested here, but I’m sure that popular opinion will influence the final decision.

Thoughts?

Evil Code of the Day: variance and overloading

(Note that this kind of breakage was mentioned a long time ago in Eric Lippert’s blog, although not in this exact form.)

Whenever a conversion becomes available where it wasn’t before, overload resolution can change its behaviour. From C# 1 to C# 2 this happened due to delegate variance with method group conversions (there’s a sketch of that at the end of this post) – now the same thing is true with generic variance for interfaces.

What does the following code print?

using System;
using System.Collections.Generic;

class Base
{
    public void Foo(IEnumerable<string> strings)
    {
        Console.WriteLine("Strings");
    }
}

class Derived : Base
{
    public void Foo(IEnumerable<object> objects)
    {
        Console.WriteLine("Objects");
    }
}

class Test
{
    static void Main()
    {
        List<string> strings = new List<string>();
        new Derived().Foo(strings);
    }
}

The correct answer is “it depends on which version of C# and .NET framework you’re using.”

If you’re using C# 4.0 and .NET 4.0, then IEnumerable<T> is covariant: there’s an implicit conversion from IEnumerable<string> to IEnumerable<object>, so the derived overload is used.

If you’re using C# 4.0 but .NET 3.5 or earlier then the compiler still knows about variance in general, but the interface in the framework doesn’t have the appropriate metadata to indicate it, so there’s no conversion available, and the base class overload is used.

If you’re using C# 3.0 or earlier then the compiler doesn’t know about generic variance at all, so again the base class overload is used.
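(If you want to check what a particular framework version will do, the variance metadata is easy to inspect – a quick diagnostic sketch:)

using System;
using System.Collections.Generic;
using System.Reflection;

class VarianceCheck
{
    static void Main()
    {
        GenericParameterAttributes attributes = typeof(IEnumerable<>)
            .GetGenericArguments()[0]
            .GenericParameterAttributes;
        // Prints "Covariant" under .NET 4.0, and "None" under earlier versions.
        Console.WriteLine(attributes & GenericParameterAttributes.VarianceMask);
    }
}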

So, this is a breaking change, and a fairly subtle one at that – and unlike the method group conversion in .NET 2.0, the compiler in .NET 4.0 beta 1 doesn’t issue a warning about it. I’ll edit this post when there’s an appropriate Connect ticket about it…

In general though, I’d say it’s worth avoiding overloading a method declared in a base class unless you really have to. In particular, overloading it using the same number of parameters but more general ones seems to be a recipe for unreadable code.
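For comparison with the C# 1 to C# 2 change mentioned at the start, here’s my sketch of the delegate equivalent – the same shape of breakage, triggered by method group conversion variance instead:

using System;

delegate void ObjectAction(object o);
delegate void StringAction(string s);

class Base
{
    public void Foo(ObjectAction action)
    {
        Console.WriteLine("ObjectAction");
    }
}

class Derived : Base
{
    public void Foo(StringAction action)
    {
        Console.WriteLine("StringAction");
    }
}

class Test
{
    static void HandleObject(object o) {}

    static void Main()
    {
        // With C# 2's contravariant method group conversions, HandleObject
        // converts to StringAction as well as ObjectAction – so the derived
        // overload becomes applicable and wins, printing "StringAction".
        new Derived().Foo(HandleObject);
    }
}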

Faking COM to fool the C# compiler

C# 4 has some great features to make programming against COM components bearable – fun and exciting, even. In particular:

  • PIA linking allows you to embed just the relevant bits of the Primary Interop Assembly into your own assembly, so the PIA isn’t actually required at execution time
  • Named arguments and optional parameters make life much simpler for APIs like Office which are full of methods with gazillions of parameters
  • "ref" removal allows you to pass an argument by value even though the parameter is a by-reference parameter (COM only, folks – don’t worry!)
  • Dynamic typing allows you to remove a load of casts by converting every parameter and return type of "object" into "dynamic" (if you’re using PIA linking)

I’m currently writing about these features for the book (don’t forget to buy it cheap on Friday), but I’m not really a COM person. I want to be able to see these compiler features at work against a really simple type. Unfortunately, these really are COM-specific features… so we’re going to have to persuade the compiler that the type really is a COM type.

I got slightly stuck on this at first, but thanks to the power of Stack Overflow, I now have a reasonably complete demo "fake" COM type. It doesn’t do a lot, and in particular it doesn’t have any events, but it’s enough to show the compiler features:

using System;
using System.Runtime.InteropServices;

// Required for linking into another assembly (C# 4)
[assembly:Guid("86ca55e4-9d4b-462b-8ec8-b62e993aeb64")]
[assembly:ImportedFromTypeLib("fake.tlb")]

namespace FakeCom
{
    [Guid("c3cb8098-0b8f-4a9a-9772-788d340d6ae0")]
    [ComImport, CoClass(typeof(FakeImpl))]
    public interface FakeComponent
    {
        object MakeMeDynamic(object arg);
        
        void Foo([Optional] ref int x,
                 [Optional] ref string y);
    }
 
    [Guid("734e6105-a20f-4748-a7de-2c83d7e91b04")]
    public class FakeImpl {}
}

We have an interface representing our COM type, and a class which the interface claims will implement it. Fortunately the compiler doesn’t actually check that, so we can get away with leaving it entirely unimplemented. It’s also worth noting that our optional parameters can be by-reference parameters (which you can’t normally do in C# 4) and we haven’t given them any default values (as those are ignored for COM anyway).

This is compiled just like any other assembly:

csc /target:library FakeCom.cs

Then we get to use it with a test program:

using FakeCom;

class Test
{
    static void Main()
    {
        // Yes, that is calling a "constructor" on an interface
        FakeComponent com = new FakeComponent();
        
        // The boring old fashioned way of calling a method
        int i = 0;
        string j = null;
        com.Foo(ref i, ref j);
        
        // Look ma, no ref!
        com.Foo(10, "Wow!");
        
        // Who cares about parameter ordering?
        com.Foo(y: "Not me", x: 0);

        // And the parameters are optional too
        com.Foo();
        
        // The line below only works when linked rather than
        // referenced, as otherwise you need a cast.
        // The compiler treats it as if it both takes and
        // returns a dynamic value.
        string value = com.MakeMeDynamic(10);
    }
}

This is compiled either in the old "deploy the PIA as well" way (after adding a cast in the last line):

csc /r:FakeCom.dll Test.cs

… or by linking the PIA instead:

csc /l:FakeCom.dll Test.cs

(The difference is just using /l instead of /r.)

When the test code is compiled against the ordinary reference (/r), it decompiles in Reflector to this (I’ve added whitespace for clarity):

private static void Main()
{
    FakeComponent component = (FakeComponent) new FakeImpl();

    int x = 0;
    string y = null;
    component.Foo(ref x, ref y);

    int num2 = 10;
    string str3 = "Wow!";
    component.Foo(ref num2, ref str3);

    string str4 = "Not me";
    int num3 = 0;
    component.Foo(ref num3, ref str4);

    int num4 = 0;
    string str5 = null;
    component.Foo(ref num4, ref str5);

    string str2 = (string) component.MakeMeDynamic(10);
}

Note how the compiler has created local variables to pass by reference; any changes to the parameter are ignored when the method returns. (If you actually pass a variable by reference, the compiler won’t take that away, however.)

When the code is linked instead, the middle section is the same, but the construction and the line calling MakeMeDynamic are very different:

private static void Main()
{
    FakeComponent component = (FakeComponent) Activator.CreateInstance(
        Type.GetTypeFromCLSID(new Guid("734E6105-A20F-4748-A7DE-2C83D7E91B04")));

    // Middle bit as before

    if (<Main>o__SiteContainer6.<>p__Site7 == null)
    {
        <Main>o__SiteContainer6.<>p__Site7 = CallSite<Func<CallSite, object, string>>
            .Create(new CSharpConvertBinder
                       (typeof(string), 
                        CSharpConversionKind.ImplicitConversion, false));
    }
    string str2 = <Main>o__SiteContainer6.<>p__Site7.Target.Invoke
        (<Main>o__SiteContainer6.<>p__Site7, component.MakeMeDynamic(10));
}

The interface is embedded in the generated assembly, but with a slightly different set of attributes:

[ComImport, CompilerGenerated]
[Guid("C3CB8098-0B8F-4A9A-9772-788D340D6AE0"), TypeIdentifier]
public interface FakeComponent
{
    object MakeMeDynamic(object arg);
    void Foo([Optional] ref int x, [Optional] ref string y);
}

The class isn’t present at all.

I should point out that doing this has no practical benefit in real code – but the ability to mess around with a pseudo-COM type rather than having to find a real one with the exact members I want will make it a lot easier to try a few corner cases for the book.

So, not a terribly productive evening in terms of getting actual writing done, but interesting nonetheless…

Evil code of the day

At a glance, this code doesn’t look particularly evil. What does it do though? Compile it with the C# 4.0b1 compiler and run it…

using System;

class Base
{
    public virtual void Foo(int x, int y)
    {
        Console.WriteLine("Base: x={0}, y={1}", x, y);
    }
}

class Derived : Base
{
    public override void Foo(int y, int x)
    {
        Console.WriteLine("Derived: x={0}, y={1}", x, y);
    }
}

class PureEvil
{
    static void Main()
    {
        Derived d = new Derived();
        Base b = d;
        
        b.Foo(x: 10, y: 20);
        d.Foo(x: 10, y: 20);
    }
}

The results are:

Derived: x=20, y=10
Derived: x=10, y=20

I’m very nearly tempted to leave it there and just see what the reactions are like, but I’ll at least give you a hint as to where to look – section 21.3 of the C# 4 spec explains why this gives odd results. It does make perfect sense, but it’s hideously evil.

I feel dirty.

Bonus questions

  • What happens if you rename the parameters in Derived.Foo to yy and xx?
  • (As suggested by Mehrdad) What happens if you call it with a dynamic value?