Why boxing doesn’t keep me awake at nights

I’m currently reading the (generally excellent) CLR via C#, and I’ve recently hit the section on boxing. Why is it that authors feel they have to scaremonger about the effects boxing can have on performance?

Here’s a piece of code from the book:

using System;

public sealed class Program {
   public static void Main() {
      Int32 v = 5;   // Create an unboxed value type variable.

#if INEFFICIENT
      // When compiling the following line, v is boxed
      // three times, wasting time and memory
      Console.WriteLine("{0}, {1}, {2}", v, v, v);
#else
      // The lines below have the same result, execute
      // much faster, and use less memory
      Object o = v;

      // No boxing occurs to compile the following line.
      Console.WriteLine("{0}, {1}, {2}", o, o, o);
#endif
   }
}

In the text afterwards, he reiterates the point:

This second version executes much faster and allocates less memory from the heap.

This seemed like an overstatement to me, so I thought I’d try it out. Here’s my test application:

using System;
using System.Diagnostics;

public class Test
{
    const int Iterations = 10000000;
   
    public static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i=0; i < Iterations; i++)
        {
#if CONSOLE_WITH_BOXING
            Console.WriteLine("{0} {1} {2}", i, i, i);
#elif CONSOLE_NO_BOXING
            object o = i;
            Console.WriteLine("{0} {1} {2}", o, o, o);
#elif CONSOLE_STRINGS
            string s = i.ToString();
            Console.WriteLine("{0} {1} {2}", s, s, s);
#elif FORMAT_WITH_BOXING
            string.Format("{0} {1} {2}", i, i, i);
#elif FORMAT_NO_BOXING
            object o = i;
            string.Format("{0} {1} {2}", o, o, o);
#elif FORMAT_STRINGS
            string s = i.ToString();
            string.Format("{0} {1} {2}", s, s, s);
#elif CONCAT_WITH_BOXING
            string.Concat(i, " ", i, " ", i);
#elif CONCAT_NO_BOXING
            object o = i;
            string.Concat(o, " ", o, " ", o);
#elif CONCAT_STRINGS
            string s = i.ToString();
            string.Concat(s, " ", s, " ", s);
#endif           
        }
        sw.Stop();
        Console.Error.WriteLine("{0}ms", sw.ElapsedMilliseconds);
    }
}

I compiled the code with one symbol defined each time, with optimisations on and without debug information, and ran it from the command line with output redirected to nul (i.e. no disk or actual console activity). Here are the results:

Symbol               Results (ms)            Average (ms)
CONSOLE_WITH_BOXING  33054, 33898, 33381     33444
CONSOLE_NO_BOXING    34638, 32423, 33294     33451
CONSOLE_STRINGS      29259, 29071, 26683     28337
FORMAT_WITH_BOXING   17143, 18100, 16389     17210
FORMAT_NO_BOXING     15814, 15936, 15222     15657
FORMAT_STRINGS        9178,  9077,  8742      8999
CONCAT_WITH_BOXING   12056, 14304, 11329     12563
CONCAT_NO_BOXING     11949, 13145, 11628     12240
CONCAT_STRINGS        5833,  6263,  5713      5936

So, what do we learn from this? Well, a number of things:

  • As ever, microbenchmarks like this are pretty variable. I tried to do this on a “quiet” machine, but as you can see the results varied quite a lot. (Over two seconds between best and worst for a particular configuration at times!)
  • The difference due to boxing with the original code in the book is basically inside the “noise”
  • The dominant factor in the statement’s cost is writing to the console, even when it’s not actually writing to anything real
  • The next most important factor is whether we convert to string once or three times
  • The next most important factor is whether we use String.Format or Concat
  • The least important factor is boxing

Now I don’t want anyone to misunderstand me – I agree that boxing is less efficient than not boxing, where there’s a choice. Sometimes (as here, in my view) the “more efficient” code is slightly less readable – and the efficiency benefit is often negligible compared with other factors. Exactly the same thing happened in Accelerated C# 2008, where a call to Math.Pow(x, 2) was the dominant factor in a program again designed to show the efficiency of avoiding boxing.

The performance scare of boxing is akin to that of exceptions, although I suppose it’s more likely that boxing could cause a real performance concern in an otherwise-well-designed program. It used to be a much more common issue, of course, before generics gave us collections which don’t require boxing/unboxing to add/fetch data.
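
As a quick illustration of that (my own sketch, not from the book): the pre-generics collections store everything as Object, so adding an int to an ArrayList boxes it and retrieving it unboxes it, whereas List<int> does neither.

using System;
using System.Collections;
using System.Collections.Generic;

public sealed class CollectionBoxing
{
    public static void Main()
    {
        // Pre-generics: ArrayList stores object references, so each Add
        // boxes the int, and the cast back to int unboxes it.
        ArrayList oldStyle = new ArrayList();
        oldStyle.Add(42);                     // box
        int first = (int) oldStyle[0];        // unbox

        // Generics: List<int> stores the values directly - no boxing at all.
        List<int> newStyle = new List<int>();
        newStyle.Add(42);
        int second = newStyle[0];

        Console.WriteLine("{0} {1}", first, second);
    }
}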

In short: yes, boxing has a cost. But please look at it in context, and if you’re going to start making claims about how much faster code will run when it avoids boxing, at least provide an example where it actually contributes significantly to the overall execution cost.

26 thoughts on “Why boxing doesn’t keep me awake at nights”

  1. A comment from Tom Kirby-Green which for some reason I can’t approve in its original form:

    “Yeah, I’ve encountered an over-developed fear of boxing performance amongst .NET developers. Lots of books seem to toll the bell about boxing but then don’t give sufficient coverage to things like: how do you achieve deterministic clean-up (i.e. RAII) in C#, or the power and beauty of ‘yield return’ etc. I tend to think authors like boxing as a topic because it’s an excuse to show some IL and thus gain some ‘hardcore’ kudos.”

  2. I think you are making a mountain out of a molehill here. If you were to take the text of most books literally then you would find a lot that is exaggerated, and in this case it has some justice.

    Books are largely subjective anyway and my own opinion is that CLR Via C# probably has the best description of generics to date. That said, I’m sure if I picked the book up now and scanned for slightly over-zealous use of language when describing a feature, I would find some. And I would find the same in the other books on my shelf, from the ones deemed to be rubbish (actually I don’t own any of these any more) to the ones that have legendary status.

    Let it go…

  3. @Granville: The problem is that people read books and assume they are being accurate. People *will* go to extraordinary lengths to avoid boxing without stopping to think about whether it really is going to hurt their performance. After all, the book tells them it will execute “much faster” and who are they to question someone of Jeff Richter’s stature?

    I’ve seen people claim that a couple of hundred exceptions thrown in an *hour* of a web server’s lifetime will kill performance. Myths around performance build up very easily and are hard to quash.

    This is why authors have a duty of care *not* to exaggerate, and to be as accurate as is humanly possible.

    (And yes, I suspect I’ve exaggerated in my book too. I know I’ve made some mistakes. Where I have, I make full apology and will certainly correct it if it’s pointed out to me.)

  4. I believe the best approach is, instead of saying that X is inefficient, to state X’s overhead in approximate figures. Then people can easily work out whether this inefficiency actually matters in any given situation. Of course the exact figure will vary according to CPU, etc., but the order of magnitude generally does not. I did this when writing on such topics as Reflection and multithreading. For example, a dynamic method invocation typically has an overhead of a few microseconds; an uncontended lock, around 100ns; generating an RSA keypair, 100ms. You can see from this that there’s SIX orders of magnitude difference between the last two activities, and yet I’ve seen them both described as “slow”.

    Providing actual figures is always more useful than saying merely that something is “inefficient” or answering the question “What is its overhead?” with the non-answer “It depends”.
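
    As a rough illustration, a Stopwatch loop along these lines gives ballpark per-operation figures rather than just a label of “slow” (only a sketch – exact numbers vary by machine, and the JIT may skew things):

    using System;
    using System.Diagnostics;

    public class OverheadFigures
    {
        const int Iterations = 100000000;

        public static void Main()
        {
            object padlock = new object();
            object box = null;
            int value = 42;

            // Boxing: assign an int to an object reference repeatedly.
            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++)
            {
                box = value;
            }
            sw.Stop();
            Console.WriteLine("Boxing: ~{0:0.#}ns per operation",
                              sw.Elapsed.TotalMilliseconds * 1000000 / Iterations);

            // Taking an uncontended lock.
            sw = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++)
            {
                lock (padlock) { }
            }
            sw.Stop();
            Console.WriteLine("Uncontended lock: ~{0:0.#}ns per operation",
                              sw.Elapsed.TotalMilliseconds * 1000000 / Iterations);

            GC.KeepAlive(box);
        }
    }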

  5. @Joe: Absolutely – that sounds ideal to me, and your example of two radically different operations being described as just “slow” is a cracking one :)

  6. I think you’re being far too literal here. Of course console output or string formatting will completely dominate any possible overhead from boxing three variables. I’m pretty sure Jeff Richter knows that, too. Since you’re usually a very reasonable chap I must say I’m surprised you wasted your time on these fairly silly benchmarks that merely prove the obvious.

    When I read the book the thought that boxing would slow down console output never even occurred to me. Richter writes many pages on boxing and unboxing, and IMO it’s perfectly clear in the context of these pages that Console.WriteLine is merely an example of a method call that performs implicit boxing.

    I’m not sure if programmers who would actually take such a small quote out of context and base their performance assumptions on it actually exist in reality, but if they do I would blame them for this misunderstanding, not the author.

  7. @Chris: You could certainly blame the reader for naivety, but I think there’s a duty for the author to pick examples which actually make their point accurately.

    Even if it’s easy to see what was meant, there can be little doubt that the book is *just plain wrong* when it claims that “The second version executes much faster.”

    Further, slips like this reinforce the micro-optimisation mentality which can be so damaging to readability. Suppose the book had made the point like this: “The first version boxes three times, which is wasteful – but probably irrelevant compared with the string formatting and console output. Be aware of performance, but keep a sense of context.” That would have been great! But no, instead a frankly alarmist tone was taken. I probably wouldn’t have gone to the trouble of benchmarking it if I hadn’t seen the same tone and attitude expressed in numerous forums. People really *don’t* take context into account nearly enough. If, by writing this blog post, just a handful of developers adopt a more contextual view of performance, I’ll be very happy. (Obviously I’ll never know, but there we go…)

    I agree that Jeff Richter is bound to be aware of the dominant factors here – so the question remains as to why he picked an example which didn’t actually demonstrate his point.

    I do want to stress how good the rest of the book is, btw. This is a relatively isolated incident *in this book* but it’s a symptom of a wider problem, and it happened to be a particularly simple claim to disprove. I started off just with the two options presented in the book – but then became interested in exactly where the rest of the time was going, hence the other options.

  8. So your benchmarks were aimed at people who might find this page through Google when they are searching for “C# boxing performance” or the like? That makes much more sense, I didn’t realize your intention here.

    Regardless of the general confusion about .NET performance issues, I still don’t think there’s any great danger of readers misunderstanding “CLR via C#” in particular, since it’s aimed at an advanced audience that would likely understand the context and intention of the example. But perhaps Richter would agree to insert a disclaimer like the one you suggest in the next edition — I presume you’ll contact him, as usual?

  9. Well, my post is aimed at a number of things – people searching for data, people interested in books and accuracy, and also as a place I can direct people if they start referring to books with examples of boxing as if they’re gospel truth.

    And yes, I’ll certainly be contacting Jeff before my review (which will be a while anyway). The errata sheet is going to be relatively bare though, if it keeps going in its current form…

  10. I totally agree that authors have a responsibility to be accurate – totally agree – BUT maybe it is the reader who ought to change their stance then?

    Throughout, I have always viewed books as merely an aid; I don’t take them word for word. I like to verify things myself (much like you have done here). I think the reason I take this stance, particularly with programming books, is that many contradict one another.

    I’ve always felt that the burden is on the reader. Sure, you can appreciate what someone else is telling you, but it’s best to come to your own conclusions.

    That’s my two pence on the subject ;-)

  11. I think there’s room for movement on both sides. Unfortunately in this case, there are plenty of books which *don’t* contradict each other but all overstate the importance of boxing when it comes to performance (without enough caveats about context, profiling, micro-optimisation etc).

    Readers should be alert and ready to check things which sound wrong – but there are quite a few things which don’t actually sound wrong but *are* wrong. (For instance, a writer describing some C# behaviour as if it matched Java when it didn’t – that would sound right to a reader familiar with Java, so they may readily believe it, even if reality is different.)

    In terms of efficiency, I think it makes a lot more sense for the author to get it right though – that’s only one person (or the author + tech reviewers) checking, rather than potentially thousands of readers. Obviously it’s nigh-on impossible to be perfect, but we can aim high :)

  12. > I think there’s room for movement on both sides.

    Again, I agree. The problem here is that if you took the advice of an author and it was hideously (or even slightly) wrong, and when prompted to explain why you did that you simply stated “Well, the author said that was the best way to go”, that wouldn’t be much of a defence. The onus is surely on the reader to verify that what they are reading is correct.

    One should always read several books on the same subject, if only to gain a more rounded view of the topic in question. An opinion formulated from a single book is not really a valid opinion unless you already have a vast amount of knowledge of one of the other subject areas it touches upon.

    I know that you have now reviewed several books on C# 3.0 (have you read the updated Anders one? It’s pretty good; I find the annotations a nice touch) and I would suggest that your readers read as many of them as they can lay their hands on, if only to gain that rounded perspective.

    > Obviously it’s nigh-on impossible to be perfect, but we can aim high :)

    Indeed.

  13. @Granville: Which book do you mean by “the updated Anders one”? I’m hoping to get hold of the Annotated C# 3.0 spec when it comes out, but I haven’t got it yet.

  14. It’s called The C# Programming Language, Third Edition. It was released on October 10th according to Safari. Amazon (US) – http://www.amazon.com/Programming-Language-Microsoft-NET-Development/dp/0321562992

    The good thing about this edition is that it has annotations in it, like the Framework Design Guidelines book. For the most part it seems the same as the other editions, bar the new stuff and annotations.

    ‘Annotated C# 3.0 spec’ – are we on about the same book here?

  15. I’ve just realised that it’s not the 10th of October yet ;-( Sorry, shows how out of touch I am with days/dates at the moment.

    So, Safari has it available right now (electronic, of course) but Amazon (US, print) says it’s not out until the 20th of this month.

  16. Yup, that’s the book I’ll be getting when it’s available in the US. I think they’ve changed the title though – it used to include the word “Annotated” right there.

    I already have the 2.0 edition, signed by Anders, Scott and Peter :)

  17. @Chris, Granville

    Of course you cannot take everything you read in a book as 100% gospel; even good authors sometimes make mistakes and are sometimes just plain wrong.

    But at the same time, part of the reason that developers read books is that there is way more out there to learn than they could ever possibly learn on their own through experimentation. The software world is expanding far too quickly, nowhere more so than in the .NET space. So developers read books by people who are supposedly already experienced in a particular topic as an efficient way of assimilating new information. If the author is wrong, or is exaggerating, then the reader is assimilating WRONG information, which undermines the point of reading the book in the first place. Sure, the reader could go out and verify everything he reads for himself, but then what is the point of reading the book?

    That is why it is so important for authors to make every effort to be as accurate as possible; like it or not, what they say is often the first and possibly the last impression that a developer will get on a particular subject, and if they care at all about being positive contributors to the development community, they need to do their best to make sure that impression is the right one.

  18. @David – I look at books as being a catalyst for further investigation.

    Maybe my approach is like this because the books I tend to read are not as clear cut as most programming books are. This is just the way I feel about them in general.

  19. Ummm, I think the _NO_BOXING version is misnamed. Surely this line:

    object o = i;

    involves boxing?

    Of course the new C++0x variadic template type args will put an end to this discussion for a while (because all the ToString calls inside Format can be statically bound to the correct version in the C++0x version and even inlined). This will give the first version the performance of the last one. Well, maybe only the FORMAT_STRINGS version, but still over 3x faster.

  20. Ben: Absolutely. It should have been called “SINGLE_BOX” or something like that. I noticed it after I’d already done everything else, I’m afraid :(
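
    Just to spell out what the difference actually is – one box per iteration rather than three:

    int i = 5;

    // Passing the int directly: each argument of the params object[] array
    // needs its own box, so i is boxed three times.
    Console.WriteLine("{0} {1} {2}", i, i, i);

    // Boxing once up front, then passing the same reference three times.
    object o = i;
    Console.WriteLine("{0} {1} {2}", o, o, o);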

  21. @Granville, I personally do not have time to read many books. I tend to spend time skimming topics online, and then interrogating the ones I find of interest further. If I do manage to find time to read an educational text, I certainly wouldn’t then seek to read another 5 on exactly the same subject just to see if they all correlate. When one’s time is at such a premium it is simply not possible to validate every morsel of information. When I attend a seminar or purchase an educational text I am paying for a service that I expect to deliver accurate information.

    I do concur that readers should try to take things with a pinch of salt when reading around a subject. When dealing with a topic as subjective as performance, the author should provide some quantitative information to back up his or her claims. This will aid the reader by providing a context against which an opinion can be formed.

  22. Hello all,

    I wrote a performance test to see what kind of results I would get. On my machine I find the overhead of boxing to be negligible (on average 20ms when performing 1 million boxing operations).

    I thought I’d share my code listing and open myself up to any comments… I’d also like to know whether any compiler optimisations are being applied that might be skewing my results.

    Let me know what you guys think. :)

    Thanks,

    Matt
    —————

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class BoxingTest
    {
        //define how many boxing operations we will perform
        const int LOOPS = 1000000;
        //an object to box to
        static object testObject;
        //a double to assign to without boxing
        static double testDouble;

        //a random generator
        static Random random = new Random();

        /// <summary>
        /// Gets a list of random numbers, so we can use random numbers but move their
        /// overhead to a non-critical execution time.
        /// </summary>
        private static List<double> getRandomNumbers()
        {
            //create a list to store the results in
            List<double> result = new List<double>();

            //create enough random numbers for the box test
            for (int loop = 0; loop < LOOPS; loop++)
            {
                //record the numbers
                result.Add(random.NextDouble());
            }

            //return our random numbers
            return result;
        }

        /// <summary>
        /// Performs a test to determine the overhead caused by boxing.
        /// </summary>
        static double boxTest()
        {
            //get a list of random numbers
            var randomNumbers = getRandomNumbers();
            //record the start time
            var start = DateTime.Now;

            //go through all the random numbers
            foreach (var randomNumber in randomNumbers)
            {
                //perform an assignment without boxing
                testDouble = randomNumber;
            }

            //record the duration of the "non boxing" example
            var nonBoxDuration = DateTime.Now - start;

            //reset the start time
            start = DateTime.Now;

            //go through all the random numbers
            foreach (var randomNumber in randomNumbers)
            {
                //perform an assignment with boxing
                testObject = (object) randomNumber;
            }

            //record the duration of the boxing example
            var boxDuration = DateTime.Now - start;

            //return the difference in milliseconds
            return boxDuration.TotalMilliseconds - nonBoxDuration.TotalMilliseconds;
        }

        /// <summary>
        /// Performs the box test multiple times and returns a list of the results in milliseconds.
        /// </summary>
        private static List<double> getBoxTestResults()
        {
            //make a list to record the results
            List<double> results = new List<double>();

            //perform the box test 100 times and record the result
            for (int loop = 0; loop < 100; loop++)
            {
                //add the box test result to our list of results
                results.Add(boxTest());
            }

            //return the results
            return results;
        }

        /// <summary>
        /// The Main entry point of the program.
        /// </summary>
        static void Main(string[] args)
        {
            //get the box test results
            var boxTestResults = getBoxTestResults();
            //get the maximum difference (ms)
            var max = boxTestResults.Max();
            //get the minimum difference (ms)
            var min = boxTestResults.Min();
            //get the average difference (ms)
            var average = boxTestResults.Average();
            //output the results
            Console.WriteLine("Max: {0}, Min: {1}, Avg: {2}", max, min, average);
        }
    }

  23. Apparently, though, the C# language designers felt that boxing was a big enough issue to introduce generics. Maybe boxing doesn’t keep you awake at night (anymore) but it used to, at least a little.

    Boxing is important, and many developers hit a roadblock because they don’t understand it.

    String concatenations instead of string builders still keep me up at night too.
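
    For example (just a sketch): repeated concatenation copies the whole string so far on every iteration, whereas StringBuilder appends into a buffer:

    // Repeated concatenation: each += allocates a new string and copies
    // everything built so far, so n appends do roughly O(n^2) copying.
    string slow = "";
    for (int i = 0; i < 10000; i++)
    {
        slow += i;
    }

    // StringBuilder appends into a growing buffer, so the same work is roughly O(n).
    var builder = new System.Text.StringBuilder();
    for (int i = 0; i < 10000; i++)
    {
        builder.Append(i);
    }
    string fast = builder.ToString();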

  24. @Anon: Generics has *far* bigger benefits in terms of expressiveness than the benefit of avoiding boxing. (Note that Java introduced generics without the benefits that C# has in terms of performance.)

    I doubt that many developers *really* hit a “roadblock” due to boxing. They may have hit a conceptual roadblock, but rarely a performance one. Yes, it hurts performance – but rarely enough to actually stop you making progress.

  25. Where’s the upvote button? ;-)

    This is the kind of benchmarks that I like to read: actually comparing different factors that might have an influence. This can definitely serve as a reference for future discussions.
