Category Archives: Speaking engagements

Presentation preparation

As most of you know, I occasionally talk about C# at conferences, user groups or basically anywhere that people won’t attack me. A while ago I rejected PowerPoint in favour of a rather less formal approach: hand-drawn slides. Quite a few people have now asked me about how they’re prepared – even occasionally making the assumption that my awful handwriting is actually a real font – so I figured it’s worth a blog post.

My process is both primitive and complex…

1. Draw the slides

I use plain A4 printer paper and a black flipchart marker to draw the slides. Where I need one bit overlaid on top of another (e.g. to cross something out), I’ll draw the two sections separately, to merge later. Given that I’m fiddling around anyway, it doesn’t matter if there are extra bits on the paper – so if I make a small mistake, I’ll often keep going on the same sheet.

I rarely make a textual plan of my presentations beforehand – I think of roughly where I want to go, and then just draw. Often some slides will be reordered or deleted for the final presentation. I find a good fat pen in my hand helps the creative process.

2. Scan the paper

I have a relatively ancient but still trusty flatbed scanner (a Xerox 2400, if you’re interested) which I use to scan everything in. A document scanner would be more convenient – scanning 25 slides individually is very tedious – but I don’t really have the space at the moment.

I scan in greyscale at 400dpi, into Paintshop Pro v8.10. It may be 7 years old, but it does the job perfectly adequately. I then re-orient them to landscape mode and save them as BMP files. Yes, it’s as daft as it sounds, but necessary for the next step. These files are pretty big – about 15MB – and large in terms of canvas size too… but here’s an idea of what they look like (but as a PNG, scaled down significantly):

Slide before alterations
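As a quick sanity check on that file size: A4 is roughly 8.27×11.69 inches, so at 400dpi with one byte per pixel (8-bit greyscale), the arithmetic works out like this:

```shell
# Uncompressed size of an A4 greyscale scan at 400dpi
# (8.27 x 11.69 inches, one byte per pixel; integer maths)
w=$((827 * 400 / 100))         # ~3308 pixels wide
h=$((1169 * 400 / 100))        # ~4676 pixels tall
echo "$((w * h / 1000000)) MB" # prints "15 MB"
```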

At this point I’m done with Paintshop Pro, so I close it down…

3. Convert into SVG

I used to edit the slides as PNG files in Paintshop Pro, but this had various issues:

  • Colouring was a pain
  • Resizing didn’t work terribly well
  • Cutting and pasting shapes wasn’t as easy as it might have been

For line drawing like mine, I suspect SVG is the ideal format. I found InkScape, a free tool which works pretty well… but I need to convert the images first. InkScape does actually have a conversion tool built in, but I’ve never quite got it to work properly – whereas using potrace directly (I believe InkScape uses potrace internally) means I can batch the conversions and control them as far as I need to.

When I discovered potrace, I was amazed by it. It really does exactly what it says on the tin – it converts bitmaps into SVGs. Unfortunately it’s limited in terms of its input formats – hence the BMPs – but that’s fine.

For the record, here’s the potrace command I use:

potrace -b svg -r 400 -t 8 -O 0.4 -k 0.7 %*

I can’t remember what all the arguments mean offhand, but you can find out if you want to :)
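Having looked them up since: according to the potrace documentation, the flags break down as below. (The `%*` above is Windows batch syntax for "all the arguments to this batch file"; the sketch below just uses a shell glob instead.)

```shell
# The same conversion, with each potrace flag annotated
# (meanings taken from the potrace man page):
#
#   -b svg   backend: produce SVG output
#   -r 400   resolution in dpi, matching the 400dpi scan
#   -t 8     "turdsize": suppress speckles up to 8 pixels in area
#   -O 0.4   curve optimisation tolerance
#   -k 0.7   black/white cutoff threshold for the greyscale input
potrace -b svg -r 400 -t 8 -O 0.4 -k 0.7 *.bmp
```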

4. Edit in InkScape

So, now I have black and white SVGs which I can edit in InkScape. I generally perform the following tweaks:

  • Remove any crud left from my dirty scanner – often there are extra shapes which need deleting
  • Resize everything and rotate shapes – usually to straighten them
  • Apply colour; I usually stick to red, black, blue and green with occasional other colours where necessary. A bit of colour’s great, but we’re not going for works of art here… and many colours look rubbish on some projectors.

Here’s the result of editing the above picture – assuming your browser can handle SVG:

Slide as an SVG

5. Convert to PNG

So now we’re done, right? Well, no… because I haven’t found any way of presenting nicely from SVGs. Aside from anything else, they take a long time to render, and I really need to be able to flick from slide to slide whenever I want to. However, I’ve found that PNGs work well as a display format. I usually use ImageMagick to convert from SVG to PNG… although very occasionally it crashes, and I have to use InkScape to export to PNG, which it’s perfectly capable of doing. (I use ImageMagick mostly so that I can do it easily in a batch.)
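In shell terms, the batch amounts to something like this – the directory names are made up, and the exact flags may need adjusting for your ImageMagick version (`convert` becomes `magick` in ImageMagick 7):

```shell
# Convert every SVG in svg/ into a roughly-1024x768 PNG in png/
mkdir -p png
for f in svg/*.svg; do
  name=$(basename "$f" .svg)
  convert "$f" -resize 1024x768 "png/$name.png"
done
```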

I convert to a 1024×768 (-ish) PNG, so that it won’t be too time-consuming to render, and I don’t need to worry about scaling it. (Most projectors I’ve used have been that size.) Here’s the PNG for the same image again – shrunk to 600×436 just to avoid taking up too much room:

Final converted slide

Even if you couldn’t see the SVG before, you should now be able to spot that not only is the image coloured in (and with a lot better contrast than the original) but I’ve tweaked the man to be falling instead of flying. (The aim of the slide is to indicate that C# should push you into the pit of success.)

6. Present!

At this point, I have three directories: one of the original BMPs, one of the SVGs, and one of the PNGs. I could then create a PowerPoint presentation consisting of all of the images… but I’ve found that PowerPoint takes an awfully long time to render images, and it would also be a bit of a pain to add them in the first place.

Instead, I use FastStone Image Viewer to present a slideshow of the images. I’ve tweaked a few options to make the images appear nice and quickly against a white background. It’s easiest if the image files are in alphabetical order, so I tend to go for filenames such as "010 Introduction.png" – always initially leaving 10 between each number, so that I can easily slot in an extra slide or two if necessary.
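The naming scheme is easy to sketch: zero-padded numbers in steps of 10, so that alphabetical order matches presentation order. (The slide titles here are invented.)

```shell
# Print slide filenames numbered in steps of 10, zero-padded
# so that alphabetical ordering equals slide ordering
i=10
for title in "Introduction" "Pit of success" "Summary"; do
  printf '%03d %s.png\n' "$i" "$title"
  i=$((i + 10))
done
# prints:
# 010 Introduction.png
# 020 Pit of success.png
# 030 Summary.png
```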

So that’s my process… four pieces of free software and one very old commercial one, a fair amount of time, and a slightly odd result.

Looking to the future…

Obviously this isn’t an ideal process. I now have a Wacom Bamboo Fun tablet, and the hope is that eventually I’ll be able to draw with it directly into InkScape, cutting out huge amounts of the process. However, so far I’m completely inept with it. If you think my drawings are bad and my handwriting is ridiculous when I’m using a pen and paper, you should see it with the tablet… but maybe that will change. I’d like to think so.

I suspect my style may change over time too, although I’m not sure in what way. Different presentations already get different treatment – for some talks I work entirely in Visual Studio, occasionally breaking to talk without coding, but without any slides… for others I’ll have slides but no code at all. If you ever happen to see me present though, please give me feedback afterwards – good or bad. It’s an area I’d love to improve on, and feedback is the best way of that happening.

Reflecting on presentation skills: The Guathon, August 13th 2010

(I apologise for the unstructured nature of this post. I honestly don’t know how to structure it. I’ve thought of a few ways of breaking it up by heading, and none of them really work. Particular apologies to Simon Stewart, who has requested more brevity in my blog. Just for Simon, the executive summary is: Scott Guthrie is a really good speaker. I want to be more like him.)

Yesterday I had the good fortune (well, good friends – thanks Phil!) to attend the Guathon in London. This was a free, day-long event with Scott Guthrie and Mike Ormond, talking about MVC 2 and 3, Visual Studio 2010, Windows Phone 7 and more. This was my first encounter with Scott – and indeed the first time I’d seen him present. (I value videos of presentations, but rarely find time to actually watch them, more’s the pity.)

Obviously I was interested in hearing about the technologies they were talking about, but I confess I was more interested in watching how Scott went about presenting. (I’ve seen Mike present before – but clearly Scott was the "big name" here. No offence meant to Mike whatsoever, who did a great job talking about Windows Phone 7.) Scott is a legend in the industry, and as I’m very interested in improving my public speaking skills, I felt I had at least as much to learn in that area as anything else.

I was really impressed. In some ways, Scott didn’t present in a way I’d expected him to… but what he did was so much better. Not having seen him before, I’d sort of expected an utterly polished sort of talk – almost like a Steve Jobs presentation. I was hoping to get some insight into what sort of polish I could add to my presentations: where does it make sense to have photos, where do simpler visuals work, where are words important? How do you present against an enormous screen without losing the audience’s focus? Do jokes enhance a presentation or detract from it?

In retrospect, this was hopelessly naïve. I think Scott’s secret sauce is actually pretty simple: he knows what he’s talking about, and talks about it honestly and openly. He’s completely authentic, obviously passionate about what he does, good humoured (we had a few bits of mild Google/Bing banter), and interested in the audience.

At almost every turn, Scott asked the audience how many of us had used a certain feature, or developed in a certain way. This was then reflected in the level at which he pitched the next section, as well as giving a few opportunities for jokes. There were questions throughout – particularly in Mike’s talk, actually – to the extent that I’d say a good quarter to a third of the time was spent answering the audience. This was a very good thing, in my view – I can’t remember finding any of the questions irrelevant or obvious (I should state for the record that I probably asked more questions than anyone else; apologies if other attendees found my questions to be boring). Questions from the audience are always a good reality check, because they’re clearly addressing real concerns rather than the ones in the speaker’s imagination. But the best thing about the questions was Scott’s way of answering, which could broadly be divided into three types of answer:

  • A known answer: "Yes, you can do X – and you can do Y as well. But you can’t do Z."
  • An unknown answer which was easily testable: "Hmm. I’m not sure. Let’s try it. Ah yes, the code does X." (There were fewer of these, just due to the nature of the questions.)
  • An answer which was unknown but needed further investigation: "Send me a mail and I’ll get back to you about it."

The last one is most interesting – because I have absolutely no doubt that Scott will get back to anyone who sent him a mail. (I’ve sent him two.) Now don’t forget that Scott is a Corporate Vice President (Dev Div). He’s clearly a busy man… but his openness and passion make an enormously positive impression, suggesting that he’s the kind of guy who doesn’t think of himself as being above such questions. Assuming this is what he says at all his presentations (and I suspect it is), I dread to think how much time he spends every day answering emails… but I also suspect that it’s of enormous benefit to the products for which he’s responsible, by keeping the executive level in touch with grass-roots developers.

So, what have I learned from the whole experience, in terms of presentation skills?

  • You can definitely give awesome presentations without fancy graphics. Content is king.
  • There’s no substitute for knowing your stuff, and being honest about when you don’t know the answer.
  • Interaction with the audience is beneficial to everyone.
  • Sitting down and just writing out code – particularly with audience participation to make the demo "belong" to them – is absolutely fine.
  • Scott’s an incredibly nice guy, and it shines through very clearly. I really hope to see him again soon.
  • If you speak clearly, speed doesn’t matter too much: Scott talks really fast, but is very easy to listen to.
  • If you lose a vital file in the middle of a presentation, check the recycle bin. It’s the virtual equivalent of checking down the back of the sofa.
  • Don’t worry if you have more material than you have time to present, particularly if that’s due to audience questions.

Whether I’ll be able to apply this myself remains to be seen… although I’ve already been acutely aware of how much more comfortable I am when presenting on "home topics" (e.g. C# language features) than areas where I have a lot less expertise (e.g. Reactive Extensions).

Degrees of reality in sample code

Yesterday I tweeted a link to an article about overloading that I’d just finished. In that article, all my examples look a bit like this:

using System;

class Test
{
    static void Foo(int x, int y = 5)
    {
        Console.WriteLine("Foo(int x, int y = 5)");
    }
    
    static void Foo(double x)
    {
        Console.WriteLine("Foo(double x)");
    }

    static void Main()
    {
        Foo(10);
    }
}

Each example is followed by an explanation of the output.

Fairly soon afterwards, I received an email from a reader who disagreed with my choices for sample code. Here are a few extracts from the email exchange. Please read them carefully – they really form the context of the rest of this post.

This is really not proper. When a method can do more than one thing, you might offer what are called ‘convenience overloads’, which make it easier for the consuming developer. When you start swaying away so much that you have wildly different arguments, then it’s probably time to refactor and consider creating a second method. With your example with "Foo", it’s hard to tell which is the case.

My point is, the ‘convenience overloads’ should all directly or indirectly call the one REAL method. I’m not a fan of "test", "foo", and "bar", because they rarely make the point clearer, and often make it more confusing. So let me use something more realistic. This nonsensical example, but hopefully is clear:

...

The point here was to make you aware of the oversight. I do what I can to try to stop bad ideas from propagating, particularly now that you're writing books. When developers read your book and consider it an "authority" on the topic, they take your example as if it's a model for what they should do. I just hope your more mindful of that in your code samples in the future.

...

Specific to this overload issue, this has come up many times for me. Developers will write 3 overloads that do wildly different things or worse, will have 98% of the same code repeated. We try to catch this in a code review, but sometimes we will get pushback because they read it in a book (hence, my comments).

...

I assume your audience is regular developer, right? In other words, the .NET Framework developers at Microsoft perhaps aren't the ones reading your books, but it's thousands of App Developer I and App Developer II that do business development? I just mean that there are far, far more "regular developers" than seasoned, expert developers who will be able to discern the difference and know what is proper. You are DEFINING what is proper in your book, you become an authority on the matter!

Anyhow, all my point was it to realize how far your influence goes once you become an author. Even the simplest, throwaway example can be seen as a best-practice beacon.

Now, this gave me pause for thought. Indeed, I went back and edited the overloading article - not to change the examples, but to make the article's scope clearer. It's describing the mechanics of overloading, rather than suggesting when it is and isn't appropriate to use overloading at all.

I don't think I'm actually wrong here, but I wanted to explore it a little more in this post, and get feedback. First I'd like to suggest a few categorizations - these aren't the only possible ones, of course, but I think they divide the spectrum reasonably. Here I'll give examples in another area: overriding and polymorphism. I'll just describe the options first, and then we can talk about the pros and cons afterwards.

Totally abstract - no code being presented at all

Sometimes we talk about code without actually giving any examples at all. In order to override a member, it has to be declared as `virtual` in a base class, and then the overriding member uses the `override` modifier. When the virtual member is called, it is dispatched to the most specific implementation which overrides it, even if the caller is unaware of the existence of the implementation class.

Working but pointless code

This is the level my overloading article worked at. Here, you write code whose sole purpose is to demonstrate the mechanics of the feature you're describing. So in this case we might have:

using System;

public class C1
{
    public virtual void M()
    {
        Console.WriteLine("C1.M");
    }
}

public class C2 : C1
{
    public override void M()
    {
        Console.WriteLine("C2.M");
    }
}

public class C3
{
    static void Main()
    {
        C1 c = new C2();
        c.M();
    }
}

Now this is a reasonably extreme example; as a matter of personal preference I tend to use class names like "Test" or "Program" as the entry point, perhaps "BaseClass" and "DerivedClass" where "C1" and "C2" are used here, and "Foo" instead of "M" for the method name. Obviously "Foo" has no more real meaning than "M" as a name - I just get uncomfortable for some reason around single character identifiers other than for local variables. Arguably "M" is better as it stands for "method" and I could use "P" for a property etc. Whatever we choose, we're talking about metasyntactic variables really.

Complete programs indicative of design in a non-business context

This is the level at which I would probably choose to demonstrate overriding. It's certainly the one I've used for talking about generic variance. Here, the goal is to give the audience a flavour of the purpose of the feature as well as demonstrating the mechanics, but to stay in the simplistic realm of non-business examples. To adapt one of my normal examples - where I'd actually use an interface instead of an abstract class - we might end up with an example like this:

using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract double Area { get; }
}

public class Square : Shape
{
    private readonly double side;
    
    public Square(double side)
    {
        this.side = side;
    }
    
    public override double Area { get { return side * side; } }
}

public class Circle : Shape
{
    private readonly double radius;
    
    public Circle(double radius)
    {
        this.radius = radius;
    }
    
    public override double Area { get { return Math.PI * radius * radius; } }
}

public class ShapeDemo
{
    static void Main()
    {
        List<Shape> shapes = new List<Shape>
        {
            new Square(10),
            new Circle(5)
        };
        foreach (Shape shape in shapes)
        {
            Console.WriteLine(shape.Area);
        }
    }
}

Now these are pretty tame shapes - they don't even have a location. If I were really going to demonstrate an abstract class I might try to work out something I could do in the base class to make it sensibly a non-interface... but at least we're demonstrating the property being overridden.

Business-like partial example

Here we'll use classes which sound like they could be in a real business application... but we won't fill in all the useful logic, or worry about any properties that aren't needed for the demonstration.

using System;
using System.Collections.Generic;

public abstract class Employee
{
    private readonly DateTime joinDate;
    private readonly decimal salary;
    
    // Most employees don't get bonuses any more
    public virtual int BonusPercentage { get { return 0; } }
    
    public decimal Salary { get { return salary; } }
    public DateTime JoinDate { get { return joinDate; } }
    
    public int YearsOfService
    {
        // TODO: Real calculation
        get { return DateTime.Now.Year - joinDate.Year; }
    }
    
    public Employee(decimal salary, DateTime joinDate)
    {
        this.salary = salary;
        this.joinDate = joinDate;
    }
}

public abstract class Manager : Employee
{
    // Managers always get a 15% bonus
    public override int BonusPercentage { get { return 15; } }
}

public abstract class PreIpoContract : Employee
{
    // The old style contracts were really generous
    public override int BonusPercentage
    {
        get { return YearsOfService * 2; }
    }
}

Now this particular code sample won't even compile: we haven't provided the necessary constructors in the derived classes. Note how the employees don't have names, and there are no relationships between employees and their managers, either.

Obviously we could have filled in all the rest of the code, ending up with a complete solution to an imaginary business need. Other examples at this level may well include customers and orders. One interesting thing to note here: admittedly I've only been working in the industry for 16 years, and only 12 years full time, but I don't think I've ever written a Customer or Order class as part of my job.

Full application example

No, I'm not going to provide an example of this. Usually this is the sort of thing which a book might work up to over the course of the complete text, and you'll end up with a wiki, or an e-commerce site, or an indexed library of books with complete web site around it. If you think I'm going to spend days or even weeks coding something like that just for this blog post, you'll be disappointed :)

Anyway, the idea of this is that it does something genuinely useful, and you can easily lift whole sections of it into other projects - or at least the design of it.

Which approach is best?

I'm sure you know what's coming here: it depends. In particular, I believe it depends on:

Your readership

Are they likely to copy and paste your example into production code without further thought? Arguably in that case the first option might be the best: they may not understand it, but at least it means your code won't be injuring a project.

Simply put, didactic code is not production code. The parables in the Bible aren't meant to be gripping stories with compelling characterization: they're meant to make a point. Scales aren't meant to sound like wonderful music: they're meant to help you improve your abilities to make a nice sound when you're playing real music.

The point you're trying to put across

If I'm trying to explain the mechanics of a feature, I find the second option to be useful. The reader doesn't need to try to take in the context of what the code is trying to accomplish, because it's explicitly not trying to do anything of any use. It's just demonstrating how the language or platform behaves in a particular scenario.

If, on the other hand, you're trying to explain a design principle, then the third or fourth options are useful. The third option can also be useful for the mechanics of a feature which is particularly abstract - like generic variance, as I mentioned earlier. That goes somewhere between "complete guide to where this feature should be used" and "no guidance whatsoever" - a sort of "here's a hint at the kind of situation where it could be useful."

If you're trying to explore a technology for fun, I find the third option works very well for that situation too. For example, while looking at Reactive Extensions, I've written programs to:

  • Group lines in a file by length
  • Give the results of a UK general election
  • Simulate the 1998 Brazil vs Norway world cup football match
  • Implement drag and drop using event filtering

None of these is likely to be directly useful in a real business app - but they were more appealing than solely demonstrating a sequence of numbers being generated (although with an appropriate marble diagram generator, that can be quite fun too).

The technology you're demonstrating

This is clearly related to the previous point, but I think it bears a certain amount of separation. I believe that language topics are fairly easily demonstrated with the second and third options. Library topics often deserve a slightly higher level of abstraction - and if you're going to try to demonstrate that a whole platform is worth investing time and energy in, it's useful to have something pretty real-world to show off.

Your time and skills

You know what? I suck at the fourth and fifth options here. I can't remember ever writing a complete, independent system as a software engineer - and nothing I've worked on has been a line-of-business application anyway. The closest I've come is writing standalone tools which certainly have been useful, but often take shortcuts in terms of design which I wouldn't countenance in other applications. (And yes, I'm sure there's some discussion to be had around that as well, but it's not the point of this article.)

You may think my employee example above was lousy - and I'd agree with you. It's not really a great fit for inheritance, in my view - and the bonus calculation is certainly a dubious way of forcing in some polymorphism. But it was the best I could come up with in the time available to me. This wasn't some attempt to make it appear less worthy than the other options; I really am that bad at coming up with business-like examples. Other authors (by which I mean anyone writing at all, not just book authors) may well have found much better examples, either by spending more time on them, being more experienced with line-of-business apps, or having a better imagination. Or all three.

I'm not too proud to admit the things I suck at :) If I spent many extra hours coming up with examples for everything I write about, I would get a lot less written. I'm doing this in notional "spare time" after all. So even if you would prefer the fourth option over the third, would you rather have that but see less of my (ahem) "wisdom"? Personally I think everyone's better off with me braindumping using examples in forms which I'm better at.

How to read examples

Most of this post has been from the point of view of an author. Briefly, I'd like to suggest what this might mean for readers. The onus is on the author to make this clear, of course, but I think it's worth trying to be actively better readers ourselves.

  • Understand what the author is trying to achieve. Don't assume that every example will fit nicely in your application. Example code often doesn't come with any argument validation or error handling - and very rarely does it have an appropriate set of unit tests. If you're reading about how something works, don't assume that the examples are in any way realistic. They may well be simplified to demonstrate the behaviour as clearly as possible without the extra "fluff" of useful functionality.
  • Think about what may be missing, particularly if the context is an evangelical one. If someone is trying to sell you on a particular technology, then of course they'll try to show it in its best possible light. Where are the pitfalls? Where does it not stack up?
  • Don't assume authority means anything. I was quite happy to take Jeffrey Richter to task on boxing for example. Jeffrey Richter is a fabulous author and clearly a smart cookie, but that doesn't mean he's right about everything... and I really, really don't like the idea of anyone appealing to my supposed abilities to justify some bad decision. Judge any argument on its merits... find out what people think and why they think it, but then see how well their reasoning actually hangs together.

Conclusion

This was always going to be a somewhat biased look at this topic, because I hold a certain viewpoint which is clearly contrary to the one held by the chap who emailed me. That's why I included a reasonable chunk of his emails - to give at least some representation to the alternatives. This post has effectively been a longwinded justification of the form my examples have taken... but does it ring true?

I can't guarantee to change my writing style drastically on this front - at least not quickly - but I would very much appreciate your thoughts on this. I'm reluctant to exaggerate, but I think it may be even more important than working out whether "Jedi" was meant to be plural or singular - and I certainly received a lot of feedback on that topic.

Epicenter 2010: quick plug and concessionary tickets

Just a quick update to mention that I’m speaking at Epicenter 2010 in Dublin on Wednesday, on Noda Time and C# Corner Cases. There are concessionary tickets available, so if you’re on the right landmass, please do come along. Don’t be put off by the fact that I’m speaking – there are some genuinely good speakers too. (Stephen Colebourne will be talking about Joda Time and JSR-310, in a session which I’m personally sad to miss – I’ll be talking about C# at the same time.)

While I’m busy plugging events, I’m also extremely excited about NDC 2010 next week in Oslo. Neal Gafter and Eric Lippert will be doing a C# Puzzler session, Mads Torgersen will be talking about C# 4, I’ll be presenting a wish-list for C# 5, and then all four of us will be doing a Q&A session. Should be heaps of fun. (I’ll also be presenting C# 4’s variance features, and Noda Time again.)

As ever, I’m somewhat late in putting the final touches to all of these talks, so if you’ve got any suggestions for my C# 5 wish-list or any particularly evil corner cases which have caught you out, add them as comments and I’ll try to squeeze ’em in.

Book review: “Confessions of a public speaker” by Scott Berkun

Resources

Introduction

A couple of weeks ago I was giving a presentation on Reactive Extensions at VBUG 4Thought spring conference, and there was an O’Reilly stand. I picked up CLR via C# 3rd edition (I now have all three editions, which is kinda crazy) and I happened to spot this book too.

I’ve been doing a reasonable amount of public speaking recently, with more to come in the near future (and local preaching roughly once a month), so I figured it would probably be a good idea to find out how to actually do it properly instead of bumbling along in the way I’ve been doing so far. This looked as good a starting point as any.

It’s been a while since I’ve had a lot of time for reading, unfortunately – C# in Depth is still sucking my time – but this is a quick read, and I finished it on the plane today. I should point out that I’m currently flying to Seattle for meetings in the Google Kirkland office. The book itself is in the overhead locker, so obviously I could reach it down – but I’d rather not. Surely a book like this should at least largely be judged by the impression it makes on the reader; if I couldn’t find enough to talk about when I only finished it a few hours ago, that would be a bad sign. It does mean that I’m not going to be as concrete in my notes as I would usually be – but that’s probably reasonably appropriate for a non-technical book anyway.

Content

The book covers various different topics, from preparation to delivery and evaluation. The book is clearly divided into chapters – but a lot of the time the topics seem to leak into chapters other than the ones you might expect them to crop up in. If this were a technical book, I would view that as a bad thing – but here, it just worked. In some ways the book mirrors an actual presentation in terms of fluidity, narration and imagery. Sometimes this is explicitly mentioned, but often in retrospect and never in a self-aggrandising manner.

Although steps for designing your overall talk are examined, there’s little guidance on how to design slides themselves: that’s delegated to other books. (I’m reasonably happy with my slide style at the moment, and as it’s somewhat uncommon, it may well not benefit much from “conventional wisdom” – there are plenty of bigger picture things I want to concentrate on improving first, anyway.)

There are suggestions for audience activity – from moving people to the front to make an underpopulated venue feel more friendly, to trying to make the audience actively use what they’ve been told for a better learning experience. While I’d like to think I’m a pretty friendly speaker, I could definitely improve here.

While there are some mentions of financial matters, there’s no discussion of getting an agent involved, or what kind of events are likely to be the most lucrative and so on. There is the recommendation that you either need to be famous or an expert to make money – which sounds like very good advice to me. I have no particular desire to go into this for money (and I think I have to speak for free under my current contract with Google) so this was fine by me.

Anecdotes abound: they’re part of the coverage of pretty much every topic. At the end there’s a whole section of gaffes made by Scott and other speakers, as a sort of “you think you’ve had it bad?” form of encouragement. There’s never a sense of the stories being inserted with a crowbar, fortunately – that’s a trait which usually annoys me intensely in sermons.

Evaluation

As you can probably tell already, I liked the book a lot. Scott is a good writer, and I strongly suspect he’s a great presenter too – I’ll be looking out for his name at conferences I’m going to, in the hope of hearing him first hand.

The real trick is actually applying the book to my own speaking though. It would be hard to miss one central point where I fail badly: practising. I simply don’t go through the whole “talking in the living room” thing. For a couple of talks I’ve gone through a dry-run at Google first as a small-scale tech talk, but usually I just put the slides (and code) together, make sure I know roughly what I’m going to say on each topic, and wing it. Assuming the book is accurate, this puts me firmly in the same camp as most speakers – which is somewhat reassuring, but doesn’t actually make my talks any better.

So, I’m going to try it. I’m expecting it to be hugely painful, but I’ll give it a go. I feel I somehow owe Scott that much – even though he makes it very clear that he expects most readers not to bother. Possibly putting it as a sort of challenge to exceed expectations is a deliberate ploy. More seriously, he convincingly makes the point that I owe the audience that much. We’ll see how it goes.

There are plenty of other aspects of the book which I expect to put to good use – particularly in terms of approaching the relevant topic to start with, writing down a big list of possible points, and whittling it down. I’m not going to promise to write a follow-up post trying to work out what’s helped and what hasn’t… I know perfectly well that I’d be unlikely to get round to writing it.

Conclusion

If you speak in public (which includes “internal” talks at work) I can heartily recommend this book as an entertaining and potentially incredibly helpful read.

We’ll see what happens next…

Speaking of which…

I’m delighted to announce that I’m going to be speaking at the Norwegian Developer Conference 2010 in Oslo in June. Rune Grothaug announced this with the very modest claim that my talk (combined with a Q&A with Mads Torgersen afterwards) could "alter the future of C# altogether". Well, I don’t know about that – but I’m very much looking forward to it nonetheless.

As I’m doing quite a bit of this public speaking lark at the moment, I thought it might be worth keeping an up-to-date list of my speaking engagements – and what better way than to have a Google Calendar for the job? You can browse the embedded version below, or subscribe to the ical feed from your own calendaring system.

I’ll try to keep this up-to-date, but you should be aware that some events may well be tentative – it’s probably best to check on the event’s web site, which will usually be linked in the description for the event.  Also note that I don’t always know which days I’ll be at an event – in order to keep a reasonable home life, I’ll often just be popping in for a day or two within a longer conference.

OMG Ponies!!! (Aka Humanity: Epic Fail)

(Meta note: I tried to fix the layout for this, I really did. But my CSS skills are even worse than Tony’s. If anyone wants to send me a complete sample of how I should have laid this out, I’ll fix it up. Otherwise, this is as good as you’re going to get :)

Last week at Stack Overflow DevDays, London I presented a talk on how humanity had made life difficult for software developers. There’s now a video of it on Vimeo – the audio is fairly poor at the very start, but it improves pretty soon. At the very end my video recorder ran out of battery, so you’ve just got my slides (and audio) for that portion. Anyway, here’s my slide deck and what I meant to say. (A couple of times I forgot exactly which slide was coming next, unfortunately.)

Click on any thumbnail for a larger view.

Good afternoon. This talk will be a little different from the others we’ve heard today… Joel mentioned on the podcast a long time ago that I’d talk about something "fun and esoteric" – and while I personally find C# 4 fun, I’m not sure that anyone could really call it esoteric. So instead, I thought I’d rant for half an hour about how mankind has made our life so difficult.

By way of introduction, I’m Jon Skeet. You may know me from questions such as Jon Skeet Facts, Why does Jon Skeet never sleep? and a few C# questions here and there. This is Tony the Pony. He’s a developer, but I’m afraid he’s not a very good one.

(Tony whispers) Tony wants to make it clear that he’s not just a developer. He has another job, as a magician. Are you any better at magic than development then? (Tony whispers) Oh, I see. He’s not very good at magic either – his repertoire is extremely limited. Basically he’s a one trick pony.

Anyway, when it comes to software, Tony gets things done, but he’s not terribly smart. He comes unstuck with some of the most fundamental data types we have to work with. It’s really not his fault though – humanity has let him down by making things just way too complicated.

You see, the problem is that developers are already meant to be thinking about difficult things… coming up with a better widget to frobjugate the scarf handle, or whatever business problem they’re thinking about. They’ve really got enough to deal with – the simple things ought to be simple.

Unfortunately, time and time again we come up against problems with core elements of software engineering. Any resemblance between this slide and the coding horror logo is truly coincidental, by the way. Tasks which initially sound straightforward become insanely complicated. My aim in this talk is to distribute the blame amongst three groups of people.

First, let’s blame users – or mankind as a whole. Users always have an idea that what they want is easy, even if they can’t really articulate exactly what they do want. Even if they can give you requirements, chances are those will conflict – often in subtle ways – with requirements of others. A lot of the time, we wouldn’t even think of these problems as "requirements" – they’re just things that everyone expects to work in "the obvious way". The trouble is that humanity has come up with all kinds of entirely different "obvious ways" of doing things. Mankind’s model of the universe is a surprisingly complicated one.

Next, I want to blame architects. I’m using the word "architect" in a very woolly sense here. I’m trying to describe the people who come up with operating systems, protocols, libraries, standards: things we build our software on top of. These are the people who have carefully considered the complicated model used by real people, stroked their beards, and designed something almost exactly as complicated, but not quite compatible with the original.

Finally, I’m going to blame us – common or garden developers. We have four problems: first, we don’t understand the complex model designed by mankind. Second, we don’t understand the complex model designed by the architects. Third, we don’t understand the applications we’re trying to build. Fourth, even when we get the first three bits right individually, we still screw up when we try to put them together.

For the rest of this talk, I’m going to give three examples of how things go wrong. First, let’s talk about numbers.

You would think we would know how numbers work by now. We’ve all been doing maths since primary school. You’d also think that computers knew how to handle numbers by now – that’s basically what they’re built on. How is it that we can search billions of web pages in milliseconds, but we can’t get simple arithmetic right? How many times are we going to see Stack Overflow questions along the lines of "Is double multiplication broken in .NET?"

I blame evolution.

We have evolved with 8 fingers and 2 thumbs – a total of 10 digits. This was clearly a mistake. It has led to great suffering for developers. Life would have been a lot simpler if we’d only had eight digits.

Admittedly this gives us three bits, which isn’t quite ideal – but having 16 digits (fourteen fingers and two thumbs) or 4 digits (two fingers and two thumbs) could be tricky. At least with eight digits, we’d be able to fit in with binary reasonably naturally. Now just so you don’t think I’m being completely impractical, there’s another solution – we could have just counted up to eight and ignored our thumbs. Indeed, we could even have used thumbs as parity bits. But no, mankind decided to count to ten, and that’s where all the problems started.

Now, Tony – here’s a little puzzle for you. I want you to take a look at this piece of Java code (turn Tony to face screen). (Tony whispers) What do you mean you don’t know Java? All right, here’s the C# code instead…

Is that better? (Tony nods enthusiastically) So, Tony, I want you to tell me the value of d after this line has executed. (Tony whispers)

Tony thinks it’s 0.3. Poor Tony. Why on earth would you think that? Oh dear. Sorry, no it’s not.

No, you were certainly close, but the exact value is:

0.299999 – Well, I’m not going to read it all out, but that’s the exact value. And it is an exact value – the compiler has approximated the 0.3 in the source code to the nearest number which can be exactly represented by a double. It’s not the computer’s fault that we have this bizarre expectation that a number in our source code will be accurately represented internally.
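You can check that exact value for yourself. Here’s a sketch in Java (the language Tony turned down): the BigDecimal(double) constructor keeps the exact binary value, rather than the “nice” decimal approximation that toString gives you.

```java
import java.math.BigDecimal;

public class ExactDouble {
    public static void main(String[] args) {
        double d = 0.3;
        // BigDecimal's double constructor preserves the exact value of the
        // nearest double to 0.3 - no rounding to a friendly decimal string.
        System.out.println(new BigDecimal(d));
        // 0.299999999999999988897769753748434595763683319091796875
    }
}
```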

Let’s take a look at two more numbers… 5 and a half in both cases. Now it doesn’t look like these are really different – but they are. Indeed, if I were representing these two numbers in a program, I’d quite possibly use different types for them. The first value is discrete – there’s a single jump from £5.50 to £5.51, and those are exact amounts of money… whereas when we measure the mass of something, we always really mean “to two decimal places” or something similar. Nothing weighs exactly five and a half kilograms. They’re fundamentally different concepts, they just happen to have the same value. What do you do with them? Well, continuous numbers are often best represented as float/double, whereas discrete decimal numbers are usually best represented using a decimal-based type.
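In Java terms (with hypothetical variable names, just for illustration), that distinction might look like this – a decimal-based type for the discrete money, binary floating point for the continuous measurement:

```java
import java.math.BigDecimal;

public class DiscreteVsContinuous {
    public static void main(String[] args) {
        // Discrete, exact amount of money: use a decimal-based type,
        // constructed from the decimal string so no precision is lost.
        BigDecimal price = new BigDecimal("5.50");
        System.out.println(price.add(new BigDecimal("0.01"))); // 5.51, exactly

        // Continuous measurement: double is fine, because the value
        // was only ever "about five and a half" to start with.
        double massKg = 5.5;
        System.out.println(massKg * 2);
    }
}
```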

Now I’ve ignored an awful lot of things about numbers which can also trip us up – signed and unsigned, overflow, not-a-number values, infinities, normalised and denormal numbers, parsing and formatting, all kinds of stuff. But we should move on. Next stop, text.

Okay, so numbers aren’t as simple as we’d like them to be. Text ought to be easy though, right? I mean, my five year old son can read and write – how hard can it be? One bit of trivia – when I originally copied this out by hand, I missed out "ipsum." Note to self: if you’re going to copy out "lorem ipsum", those are the two words you really, really need to get right. Fail.

Of course, I’m sure pretty much everyone here knows that text is actually a pain in the neck. Again, I will blame humanity. Here we have two sets of people using completely different characters, speaking different languages, and quite possibly reading in different directions. Apologies if the characters on the right accidentally spell a rude word, by the way – I just picked a few random Kanji characters from the Unicode charts. (As pointed out in the comments, these aren’t actually Kanji characters anyway. They’re Katakana characters. Doh!) Cultural diversity has screwed over computing, basically.

However, let’s take the fact that we’ve got lots of characters as a given. Unicode sorts all that out, right? Let’s see. Time for a coding exercise – Tony, I’d like you to write some code to reverse a string. (Tony whispers) No, I’m not going to start up Visual Studio for you. (Tony whispers) You’ve magically written it on the next slide? Okay, let’s take a look.

Well, this looks quite promising. We’re taking a string, converting it into a character array, reversing that array, and then building a new string. I’m impressed, Tony – you’ve avoided pointless string concatenation and everything. (Tony is happy.) Unfortunately…

… it’s broken. I’m just going to give one example of how it’s broken – there are lots of others along the same lines. Let’s reverse one of my favourite musicals…

Here’s one way of representing Les Miserables as a Unicode string. Instead of using one code point for the “e acute”, I’ve used a combining character to represent the accent, and then an unaccented ASCII e. Display this in a GUI, and it looks fine… but when we apply Tony’s reversing code…

… the combining character ends up after the e, so we get an “s acute” instead. Sorry Tony. The Unicode designers with their fancy schemes have failed you.

EDIT: In fact, not only have the Unicode designers made things difficult, but so have implementers. You see, I couldn’t remember whether combining characters came before or after base characters, so I wrote a little Windows Forms app to check. That app displayed "Les Misu0301erables" as "Les Misérables". Then, based on the comments below, I checked with the standard – and the Unicode combining marks FAQ indicates pretty clearly that the base character comes before the combining character. Further failure points to both me and someone in Microsoft, unless I’m missing something. Thanks to McDowell for pointing this out in the comments. If I ever give this presentation again, I’ll be sure to point it out. WPF gets it right, by the way. Update: this can be fixed in Windows Forms by setting the UseCompatibleTextRendering property to false (or setting the default to false). Apparently the default is set to false when you create a new WinForms project in VS2008. Shame I tend to write "quick check" programs in a plain text editor…

Of course the basic point about reversal still holds, but with the correct starting string you’d end up with an acute over the r, not the s.
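With the base character correctly before the combining character, the bug reproduces in Java too. A minimal sketch – StringBuilder.reverse has the same character-level behaviour as Tony’s array-based version, so the accent ends up attached to the wrong letter:

```java
public class NaiveReverse {
    public static void main(String[] args) {
        // U+0301 is COMBINING ACUTE ACCENT, following its base character 'e'.
        String title = "Les Mise\u0301rables";
        String reversed = new StringBuilder(title).reverse().toString();
        // After reversal the combining accent follows 'r', so the string
        // displays with an acute over the r instead of the e.
        System.out.println(reversed.contains("r\u0301")); // true
    }
}
```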

It’s not like the problems are solely in the realm of non-ASCII characters though. I present to you…

A line break. Or rather, one of the representations of a line break. As if the natural cultural diversity of humanity hasn’t caused enough problems, software decided to get involved and have line break diversity. Heck, we’re not even just limited to CR, LF and CRLF – Unicode has its own special line terminator character as well, just for kicks.

To prove this isn’t just a problem for toy examples, here’s something that really bit me, back about 9 or 10 years ago. Here’s some code which tries to do a case-insensitive comparison for the text "MAIL" in Java. Can anyone spot the problem?

It fails in Turkey. This is reasonably well known now – there’s a page about the “Turkey test” encouraging you to try your applications in a Turkish locale – but at the time it was a mystery to me. If you’re not familiar with this, the problem is that if you upper-case an “i” in Turkish, you end up with an “I” with a dot on it. This code went into production, and we had a customer in Turkey whose server was behaving oddly. As you can imagine, if you’re not aware of that potential problem, it can take a heck of a long time to find that kind of bug.
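Here’s a minimal Java reproduction (the locale-sensitive toUpperCase overload is the culprit; the fix is to pass an explicit invariant locale for protocol text):

```java
import java.util.Locale;

public class TurkeyTest {
    public static void main(String[] args) {
        String command = "mail";
        // Fine in English...
        System.out.println(command.toUpperCase(Locale.ENGLISH)); // MAIL
        // ...but in Turkish, 'i' upper-cases to dotted capital 'İ' (U+0130).
        String turkish = command.toUpperCase(Locale.forLanguageTag("tr"));
        System.out.println(turkish.equals("MAIL")); // false: it's "MAİL"
        // For protocol/keyword comparisons, use the invariant root locale:
        System.out.println(command.toUpperCase(Locale.ROOT)); // MAIL
    }
}
```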

Here’s some code from a newsgroup post. It’s somewhat inefficient code to collapse multiple spaces down to a single one. Leaving aside the inefficiency, it looks like it should work. This was before we had String.Contains, so it’s using IndexOf to check whether we’ve got a double space. While we can find two spaces in a row, we’ll replace any occurrence of two spaces with a single space. We’re assigning the result of string.Replace back to the same variable, so that’s avoided one common problem… so how could this fail?

This string will cause that code to go into a tight loop, due to this evil character here. It’s a "zero-width non-joiner" – basically a hint that the two characters either side of it shouldn’t be squashed up too closely together. IndexOf ignores it, but Replace doesn’t. Ouch.

Now I’m not showing these examples to claim I’m some sort of Unicode expert – I’m really, really not. These are just corner cases I happen to have run into. Just like with numbers, I’ve left out a whole bunch of problems like bidi, encodings, translation, culture-sensitive parsing and the like.

Given the vast array of writing systems the world has come up with – and variations within those systems – any attempt to model text is going to be complicated. The problems come from the inherent complexity, some additional complexity introduced by things like surrogate pairs, and developers simply not having the time to become experts on text processing.

So, we fail at both numbers and text. How about time?

I’m biased when it comes to time-related problems. For the last year or so I’ve been working on Google’s implementation of ActiveSync, mostly focusing on the calendar side of things. That means I’ve been exposed to more time-based code than most developers… but it’s still a reasonably common area, as you can tell from the number of related questions on Stack Overflow.

To make things slightly simpler, let’s ignore relativity. Let’s pretend that time is linear – after all, most systems are meant to be modelling the human concept of time, which definitely doesn’t include relativity.

Likewise, let’s ignore leap seconds. This isn’t always a good idea, and there are some wrinkles around library support. For example, Java explicitly says that java.util.Date and Calendar may or may not account for leap seconds depending on the host support. So, it’s good to know how predictable that makes our software… I’ve tried reading various explanations of leap seconds, and always ended up with a headache. For the purposes of this talk, I’m going to assert that they don’t exist.

Okay, so let’s start with something simple. Tony, what’s the time on this slide? (Tony whispers) Tony doesn’t want to answer. Anyone? (Audience responds.) Yes, about 5 past 3 on October 28th. So what’s the difference between now and the time on this slide? (Audience response.) No, it’s actually nearly twelve hours… this clock is showing 5 past 3 in the morning. Tony’s answer was actually the right one, in many ways… this slide has a hopeless amount of ambiguity. It’s not as bad as it might be, admittedly. Imagine if it said October 11th… Jeff and Joel would be nearly a month out of sync with the rest of us. And then even if we get the date and the time right, it’s still ambiguous… because of time zones.

Ah, time zones. My favourite source of WTFs. I could rant for hours about them – but I’ll try not to. I’d just like to point out a few of the idiosyncrasies I’ve encountered. Let’s start off with the time zones on this slide. Notice anything strange? (Audience or whisper from Tony) Yes, CST is there three times. Once for Central Standard Time in the US – which is UTC-6. It’s also Central Standard Time in Australia – where it’s UTC+9.30. It’s also Central Summer Time in Australia, where it’s UTC+10.30. I think it takes a special kind of incompetence to use the same acronym in the same place for different offsets.
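The ambiguity is easy to demonstrate. Using the java.time API (which post-dates this talk, but makes the point neatly), the two zones most commonly abbreviated "CST" have wildly different standard offsets:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class CstAmbiguity {
    public static void main(String[] args) {
        // Both of these zones have used the abbreviation "CST".
        ZoneOffset us = ZoneId.of("America/Chicago")
                .getRules().getStandardOffset(Instant.now());
        ZoneOffset au = ZoneId.of("Australia/Adelaide")
                .getRules().getStandardOffset(Instant.now());
        System.out.println(us); // -06:00
        System.out.println(au); // +09:30
    }
}
```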

Then let’s consider time zones changing. One of the problems I face is having to encode or decode a time zone representation from a single pattern – something like "It’s UTC-3 or -2, and daylight savings are applied from the third Sunday in March to the first Sunday in November". That’s all very well until the system changes. Some countries give plenty of warning of this… but on October 7th this year, Argentina announced that it wasn’t going to use daylight saving time any more… 11 days before its next transition. The reason? Their dams are 90% full. I only heard about this due to one of my unit tests failing. For various complicated reasons, a unit test which expected to recognise the time zone for Godthab actually thought it was Buenos Aires. So due to rainfall thousands of miles away, my unit test had moved Greenland into Argentina. Fail.

If you want more time zone incidents, talk to me afterwards. It’s a whole world of pain. I suggest we move away from time zones entirely. In fact, I suggest we adopt a much simpler system of time. I’m proud to present my proposal for coffee time. This is a system which determines the current time based on the answer to the question: "Is it time for coffee?" This is what the clock looks like:

This clock is correct all over the world, is very cheap to produce, and is guaranteed to be accurate forever. Batteries not required.

So where are we?

The real world has failed us. It has concentrated on local simplicity, leading to global complexity. It’s easy to organise a meeting if everyone is in the same time zone – but once you get different continents involved, invariably people get confused. It’s easy to get writing to work uniformly left to right or uniformly right to left – but if you’ve got a mixture, it becomes really hard to keep track of. The diversity which makes humanity such an interesting species is the curse of computing.

When computer systems have tried to model this complexity, they’ve failed horribly. Exhibit A: java.util.Calendar, with its incomprehensible set of precedence rules. Exhibit B: .NET’s date and time API, which until relatively recently didn’t let you represent any time zone other than UTC or the one local to the system.

Developers have, collectively, failed to understand both the models and the real world. We only need one exhibit this time: the questions on Stack Overflow. Developers asking questions around double, or Unicode, or dates and times aren’t stupid. They’ve just been concentrating on other topics. They’ve made an assumption that the core building blocks of their trade would be simple, and it turns out they’re not.

This has all been pretty negative, for which I apologise. I’m not going to claim to have a complete solution to all of this – but I do want to give a small ray of hope. All this complexity can be managed to some extent, if you do three things.

First, try not to take on more complexity than you need. If you can absolutely guarantee that you won’t need to translate your app, it’ll make your life a lot easier. If you don’t need to deal with different time zones, you can rejoice. Of course, if you write a lot of code under a set of assumptions which then changes, you’re in trouble… but quite often you can take the "You ain’t gonna need it" approach.

Next, learn just enough about the problem space so that you know more than your application’s requirements. You don’t need to know everything about Unicode – but you need to be aware of which corner cases might affect your application. You don’t need to know everything about denormal number representation, but you may well need to know how rounding should be applied in your reports. If your knowledge is just a bit bigger than the code you need to write, you should be able to be reasonably comfortable.
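Rounding in reports is a good example of "just enough" knowledge. A sketch in Java: if you feed BigDecimal a double, you round the binary approximation rather than the number you wrote, so "half up" can quietly round down:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ReportRounding {
    public static void main(String[] args) {
        // The nearest double to 2.675 is slightly *less* than 2.675...
        System.out.println(new BigDecimal(2.675)
                .setScale(2, RoundingMode.HALF_UP)); // 2.67 - no half to round up

        // ...so construct from the decimal string when the input is decimal.
        System.out.println(new BigDecimal("2.675")
                .setScale(2, RoundingMode.HALF_UP)); // 2.68
    }
}
```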

Pick the right platforms and libraries. Yes, there are some crummy frameworks around. There are also some good ones. What’s the canonical answer to almost any question about java.util.Calendar? Use Joda Time instead. There are similar libraries like ICU – written by genuine experts in these thorny areas. The difference a good library can make is absolutely enormous.

None of this will make you a good developer. Tony’s still likely to mis-spell his "main" method through force of habit. You’re still going to get off-by-one errors. You’re still going to forget to close database connections. But if you can at least get a handle on some of the complexity of software engineering, it’s a start.

Thanks for listening.

Recent activities

It’s been a little while since I’ve blogged, and quite a lot has been going on. In fact, there are a few things I’d have blogged about already if it weren’t for “things” getting in the way.

Rather than writing a whole series of very short blog posts, I thought I’d wrap them all up here…

C# in Depth: next MEAP drop available soon – Code Contracts

Thanks to everyone who gave feedback on my writing dilemma. For the moment, the plan is to have a whole chapter about Code Contracts, but not include a chapter about Parallel Extensions. My argument for making this decision is that Code Contracts really change the feel of the code, making it almost like a language feature – and its applicability is almost ubiquitous, unlike PFX.

I may write a PFX chapter as a separate download, but I’m sensitive to those who (like me) appreciate slim books. I don’t want to “bulk out” the book with extra topics.

The Code Contracts chapter is in the final stages before becoming available to MEAP subscribers. (It’s been “nearly ready” for a couple of weeks, but I’ve been on holiday, amongst other things.) After that, I’m going back to the existing chapters and revising them.

Talking in Dublin – C# 4 and Parallel Extensions

Last week I gave two talks in Dublin at Epicenter. One was on C# 4, and the other on Code Contracts and Parallel Extensions. Both are now available in a slightly odd form on the Talks page of the C# in Depth web site. I no longer write “formal” PowerPoint slides, so the downloads are for simple bullet points of text, along with silly hand-drawn slides. No code yet – I want to tidy it up a bit before including it.

Podcasting with The Connected Show

I recently recorded a podcast episode with The Connected Show. I’m “on” for the final two thirds of the show – about an hour of me blathering on about the new features of C# 4. If you can understand generic variance just by listening to me talking about it, you’re a smart cookie ;)

(Oh, and if you like it, please express your amusement on Digg / DZone / Shout / Kicks.)

Finishing up with Functional Programming for the Real World

Well, this hasn’t been taking much of my time recently (I bowed out of all the indexing etc!) but Functional Programming for the Real World is nearly ready to go. Hard copy should be available in the next couple of months… it’ll be really nice to see how it fares. Much kudos to Tomas for all his hard work – I’ve really just been helping out a little.

Starting on Groovy in Action, 2nd edition

No sooner does one book finish than another one starts. The second edition of Groovy in Action is in the works, which should prove interesting. To be honest, I haven’t played with Groovy much since the first edition of the book was finished, so it’ll be interesting to see what’s happened to the language in the meantime. I’ll be applying the same sort of spit and polish that I did in the first edition, and asking appropriately ignorant questions of the other authors.

Tech Reviewing C# 4.0 in a Nutshell

I liked C# 3.0 in a Nutshell, and I feel honoured that Joe asked me to be a tech reviewer for the next edition, which promises to be even better. There’s not a lot more I can say about it at the moment, other than it’ll be out in 2010 – and I still feel that C# in Depth is a good companion book.

MoreLINQ now at 1.0 beta

A while ago I started the MoreLINQ project, and it gained some developers with more time than I’ve got available :) Basically the idea is to add some more useful LINQ extension methods to LINQ to Objects. Thanks to Atif Aziz, the first beta version has been released. This doesn’t mean we’re “done” though – just that we think we’ve got something useful. Any suggestions for other operators would be welcome.

Manning Pop Quiz and discounts

While I’m plugging books etc, it’s worth mentioning the Manning Pop Quiz – multiple choice questions on a wide variety of topics. Fabulous prizes available, as well as one-day discounts:

  • Monday, Sept 7th: 50% off all print books (code: pop0907)
  • Monday, Sept 14th: 50% off all ebooks (code: pop0914)
  • Thursday, Sept 17th: $25 for C# in Depth, 2nd Edition MEAP print version (code: pop0917) + C# Pop Quiz question
  • Monday, Sept 21st: 50% off all books (code: pop0921)
  • Thursday, Sept 24th: $12 for C# in Depth, 2nd Edition MEAP ebook (code: pop0924) + another C# Pop Quiz question

Future speaking engagements

On September 16th I’m going to be speaking to Edge UG (formerly Vista Squad) in London about Code Contracts and Parallel Extensions. I’m already very much looking forward to the Stack Overflow DevDays London conference on October 28th, at which I’ll be talking about how humanity has screwed up computing.

Future potential blog posts

Some day I may get round to writing about:

  • Revisiting StaticRandom with ThreadLocal<T>
  • Volatile doesn’t mean what I thought it did

There’s a lot more writing than coding in that list… I’d like to spend some more time on MiniBench at some point, but you know what deadlines are like.

Anyway, that’s what I’ve been up to and what I’ll be doing for a little while…

OS Jam at Google London: C# 4 and the DLR

Last night I presented for the first time at the Google Open Source Jam at our offices in London. The room was packed, but only a very few attendees were C# developers. I know that C# isn’t the most popular language on the Open Source scene, but I was still surprised there weren’t more people using C# for their jobs and hacking on Ruby/Python/etc at night.

All the talks at OSJam are just 5 minutes long, with 2 minutes for questions. I’m really not used to this format, and felt extremely rushed… however, it was still a lot of fun. I used a somewhat different approach to my slides than the normal “bullet points in PowerPoint” – and as it was only short, I thought I might as well effectively repeat the presentation here in digital form. (Apologies if the images are an inconvenient size for you. I tried a few different ones, and this seemed about right. Comments welcome, as I may do a similar thing in the future.)

First slide

Introductory slide. Colleagues forced me to include the askjonskeet.com link.

Second slide

.NET isn’t Open Source. You can debug through a lot of the source code for the framework if you agree to a “reference licence”, but it’s not quite the same thing.

Third slide

.NET isn’t Open Source, but the DLR is. And IronRuby. And IronPython. Yay!

And of course Mono is Open Source: the DLR and Mono play nicely together, and the Mono team is hoping to implement the new C# 4.0 features for the 2.8 release in roughly the same timeframe as Microsoft.

Fourth slide

This is what .NET 4.0 will look like. The DLR will be included in it, despite being open source. IronRuby and IronPython aren’t included, but depend heavily on the DLR. (Currently available versions allow you to use a “standalone” DLR or the one in .NET 4.0b1.)

C# doesn’t really depend on the DLR except for its handling of dynamic. C# is a statically typed language, but C# 4.0 has a new static type called dynamic which you can do just about anything with. (This got a laugh, despite being a simple and mostly accurate summary of the dynamic typing support in C# 4.0.)

Fifth slide

The fundamental point of the DLR is to handle call sites – decide what to do dynamically with little bits of code. Oh, and do it quickly. That’s what the caches are for. They’re really clever – particularly the L0 cache which compiles rules (about the context in which a particular decision is valid) into IL via dynamic methods. Awesome stuff.

I’m sure the DLR does many other snazzy things, but this feels like it’s the core part of it.

Sixth slide

At execution time, the relevant binder is used to work out what a call site should actually do. Unless, that is, the call has a target which implements the shadowy IDynamicMetaObjectProvider interface (winner of “biggest mouthful of a type name” prize, 2009) – in which case, the object is asked to handle the call. Who knows what it will do?

Seventh slide

Beautifully syntax-highlighted C# 4.0 source code showing the dynamic type in action. The method calls on lines 2 and 3 are both dynamic, even though in the latter case it’s just using a static method. Which overload will it pick? It all depends on the type of the actual value at execution time.

If I’d had more time, I’d have demonstrated how the C# compiler preserves the static type information it knows at compile time for the execution time binder to use. This is very cool, but would take far too long to demonstrate in this talk – especially to a bunch of non-C# developers.
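For readers who weren't there, the overload resolution point looks something like this – my own reconstruction of the idea on the slide, not the actual slide code:

```csharp
using System;

class OverloadDemo
{
    public static string Describe(int x)    { return "int: " + x; }
    public static string Describe(string x) { return "string: " + x; }

    static void Main()
    {
        object[] values = { 42, "hello" };
        foreach (dynamic value in values)
        {
            // Overload resolution is deferred to execution time,
            // so the overload picked depends on the actual value's type.
            Console.WriteLine(Describe(value));
        }
        // Output:
        // int: 42
        // string: hello
    }
}
```

With `object` instead of `dynamic` this wouldn't compile at all – there's no Describe(object) overload. With `dynamic`, the binder re-runs overload resolution at execution time using the runtime type of each value.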

Eighth slide

There were a couple of questions, but I can’t remember them offhand. Someone asked me afterwards about how all this worked on non-.NET implementations (i.e. Mono, basically). I gather the DLR itself works, but I don’t know whether C# code compiled in the MS compiler will work at the moment – it embeds references to binder types in Microsoft.CSharp.dll, and I don’t know what the story is about that being supported on Mono.

This is definitely the format I want to use for future presentations. It’s fun to write, fun to present, and I’m sure the “non-professionalism” of it makes it a lot more interesting to watch. Although it’s slower to create text-like slides (such as the first and the last one) this way, the fact that I don’t need to find clip-art or draw boxes with painful user interfaces is a definite win – especially as I’m going to try to be much more image-biased from now on. (I don’t want people reading slides while I’m talking – they should be listening, otherwise it’s just pointless.)

List of talks now up on C# in Depth site

I think I’ve now done enough public speaking to make it worth having a page with resources etc, so I’ve added it to the C# in Depth site. Where feasible, it will include slides, code and videos (and anything else suitable, really) from various events.

I haven’t got any more booked at the moment, although I’ve had a couple of invitations I really ought to sort out at some point. A couple of points of interest:

  • I’ve included “before, during and after” code for the “LINQ to Objects in 60 minutes” presentation at DDD. The “before” code is just unit tests and shells of signatures etc. The “during” code is what I was able to get done during the event. The “after” code is a bit more complete – basically I implemented everything I already had unit tests for.
  • I’ve included a brief summary of each of the Copenhagen videos. The last two talks are the ones with the most “new” material for C# in Depth readers, although there are a few other nuggets scattered around the place. (Fun tip: if you download the wmv files, you can watch at 1.5x speed. It makes it look like I can type really fast!)

The video from DDD isn’t up yet, but when it is I’ll edit the page.