The Open-Closed Principle, in review

Background

I’ve been to a few talks on SOLID before. Most of the principles seem pretty reasonable to me – but I’ve never "got" the open-closed principle (OCP from here on). At CodeMash this year, I mentioned this to the wonderful Cori Drew, who said that she’d been at a user group talk where she felt it was explained well. She mailed me a link to the user group video, which I finally managed to get round to watching last week. (The OCP part is at around 1 hour 20.)

Unfortunately I still wasn’t satisfied, so I thought I’d try to hit up the relevant literature. Obviously there are umpteen guides to OCP, but I decided to start with Wikipedia, and go from there. I mentioned my continuing disappointment on Twitter, and the conversation got lively. Uncle Bob Martin (one of the two "canonical sources" for OCP) wrote a follow-up blog post, and I decided it would be worth writing one of my own, too, which you’re now reading.

I should say up-front that in some senses this blog post isn’t so much about the details of the open-closed principle, as about the importance of careful choice of terminology at all levels. As we’ll see later, when it comes to the "true" meaning of OCP, I’m pretty much with Uncle Bob: it’s motherhood and apple pie. But I believe that meaning is much more clearly stated in various other principles, and that OCP as the expression of an idea is doing more harm than good.

Reading material

So what is it? (Part 1 – high level)

This is where it gets interesting. You see, there appear to be several different interpretations of the principle – some only subtly distinct, others seemingly almost unrelated. Even without looking anything up, I knew an expanded version of the name:

Modules should be open for extension and closed for modification.

The version quoted in Wikipedia and in Uncle Bob’s paper actually uses "Software entities (classes, modules, functions, etc.)" instead of modules, but I’m not sure that really helps. Now I’m not naïve enough to expect everything in a principle to be clear just from the title, but I do expect some light to be shed. In this case, unfortunately I’m none the wiser. "Open" and "closed" sound permissive and restrictive respectively, but without very concrete ideas about what "extension" and "modification" mean, it’s hard to tell much more.

Fair enough – so we read on to the next level. Unfortunately I don’t have Bertrand Meyer’s "Object-Oriented Software Construction" book (which I take to be the original), but Uncle Bob’s paper is freely available. Wikipedia’s summary of Meyer’s version is:

The idea was that once completed, the implementation of a class could only be modified to correct errors; new or changed features would require that a different class be created. That class could reuse coding from the original class through inheritance. The derived subclass might or might not have the same interface as the original class.

Meyer’s definition advocates implementation inheritance. Implementation can be reused through inheritance but interface specifications need not be. The existing implementation is closed to modifications, and new implementations need not implement the existing interface.

And Uncle Bob’s high level description is:

Modules that conform to the open-closed principle have two primary attributes.

  1. They are "Open For Extension". This means that the behavior of the module can be extended. That we can make the module behave in new and different ways as the requirements of the application change, or to meet the needs of new applications.
  2. They are "Closed for Modification". The source code of such a module is inviolate. No one is allowed to make source code changes to it.

I immediately took a dislike to both of these descriptions. Both of them specifically say that the source code can’t be changed, and the description of Meyer’s approach to "make a change by extending a class" feels like a ghastly abuse of inheritance to me… and goes firmly against my (continued) belief in Josh Bloch’s advice of "design for inheritance or prohibit it" – where in the majority of cases, designing a class for inheritance involves an awful lot of work for little gain. Designing an interface (or pure abstract class) still involves work, but with fewer restrictions and risks.

Craig Larman’s article uses the term "closed" in a much more reasonable way, to my mind:

Also, the phrase "closed with respect to X" means that clients are not affected if X changes.

When I say "more reasonable way" I mean in terms of how I want to write code… not in terms of the use of the word "closed". This is simply not how the word "closed" is used elsewhere in my experience. In the rare cases where "closed" is used with "to", it’s usually in terms of what’s not allowed in: "This bar is closed to under 18s" for example. Indeed, that’s how I read "closed to modification" and that appears to be backed up by the two quotes which say that once a class is complete, the source code cannot be changed.

Likewise the meaning of "open for extension" seems unusual to me. I’d argue that the intuitive meaning is "can be extended" – where the use of the term "extended" certainly nods towards inheritance, even if that’s not the intended meaning. If the idea is "we can make the module behave differently" – as Uncle Bob’s description suggests – then "open for extension" is a very odd way of expressing that idea. I’d even argue that in the example given later, it’s not the "open" module that behaves differently – it’s the combination of the module and its collaborators, acting as a unified program, which behaves differently after some aspects are modified.

So what is it? (Part 2 – more detail)

Reading on through the rest of Uncle Bob’s paper, the ideas become much more familiar. There’s a reasonable example of a function which is asked to draw a collection of shapes: the bad code is aware of all the types of shape available, and handles each one separately. The good code uses an abstraction where each shape (Circle, Square) knows how to draw itself and inherits from a common base class (Shape). Great stuff… but what’s that got to do with what was described above? How are the concepts of "open" and "closed" clarified?

The answer is that they’re not. The word "open" doesn’t occur anywhere in the rest of the text, other than as part of the term "open-closed principle" or as a label for "open client". While it’s perhaps rather easier to see this in hindsight, I suspect that any time a section which is meant to clarify a concept doesn’t use some of the key words used to describe the concept in a nutshell, that description should be treated as suspect.

The word "closed" appears more often, but only in terms of "closed against" which is never actually defined. (Is "closed against" the same as "closed for"?) Without Craig Larman’s explanation, sentences like this make little sense to me:

The function DrawAllShapes does not conform to the open-closed principle because it cannot be closed against new kinds of shapes.

Even Craig’s explanation feels somewhat at odds with Uncle Bob’s usage, as it talks about clients being affected. This is another of the issues I have with the original two descriptions: they talk about a single module being open/closed, whereas we’re dealing with abstractions where there are naturally at least two pieces of code involved (and usually three). Craig’s description of changes in one module not affecting clients is describing a relationship – which is a far more useful way of approaching things. Even thinking about the shape example, I’m getting increasingly confused about exactly what’s open and what’s closed. It feels to me like it’s neither the concrete shape classes nor the shape-drawing code which is open or closed – it’s the interface between the two; the abstract Shape class. After all, these statements seem reasonable:

  • The Shape class is open for extension: there can be many different concrete subclasses, and code which only depends on the Shape class doesn’t need to know about them in order to use them when they are presented as shapes.
  • The Shape class is closed for modification: no existing functions can be removed (as they may be relied on by existing clients) and no new pure virtual functions can be added (as they will not be implemented by existing subclasses).
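The two bullets above can be sketched in Java (the paper’s example is in C/C++; names here are illustrative, and `draw` returns a description rather than rendering, purely so the sketch is self-checking). The abstract `Shape` class is the stable interface sitting between the subclasses and the client code:

```java
import java.util.List;

abstract class Shape {
    // The one abstract ("pure virtual") member: removing it would break
    // existing clients; adding another would break existing subclasses.
    abstract String draw();
}

class Circle extends Shape { String draw() { return "circle"; } }
class Square extends Shape { String draw() { return "square"; } }

class Client {
    // Depends only on the Shape abstraction: any number of new subclasses
    // can be added without this method needing to know about them.
    static String drawAll(List<Shape> shapes) {
        StringBuilder sb = new StringBuilder();
        for (Shape s : shapes) sb.append(s.draw()).append(';');
        return sb.toString();
    }
}
```

Adding a `Triangle` subclass requires no change to either `Shape` or `Client` – which is exactly the relationship the bullets describe.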

It’s still not how I’d choose to express it, but at least it feels like it makes sense in very concrete terms. It doesn’t work well with how Uncle Bob uses the term "closed" though, so I still think I may be on a different page when it comes to that meaning. (Uncle Bob does also make the point that any significant program isn’t going to adhere to the principle completely in every part of the code – but in order to judge where it’s appropriate to be closed, I do really need to understand what being closed means.)

Just to make it crystal clear, other than the use of the word "closed," the low-level description of what’s good and what’s bad, and why, is absolutely fine. I really have no problems with it. As I said at the start, the idea being expressed makes perfect sense. It just doesn’t work (for me) when expressed in the terms used at a higher level.

Protected Variation

By contrast, let’s look at a closely related idea which I hadn’t actually heard about before I started all this research: protected variation. This name was apparently coined by Alistair Cockburn, and Craig Larman either quotes or paraphrases this as:

Identify points of predicted variation and create a stable interface around them.

Now that’s a description I can immediately identify with. Every single word of it makes sense to me, even without reading any more of Craig’s article. (I have read the rest, obviously, and I’d encourage others to do so.) This goes back to Josh Bloch’s "design for inheritance or prohibit it" motto: identifying points of predicted variation is hard, and it’s necessary in order to create a stable interface which is neither too constrictive for implementations nor too woolly for clients. With class inheritance there’s the additional concern of interactions within a class hierarchy when a virtual method is called.

So in Uncle Bob’s Shape example, there’s a single point of predicted variation: how the shape is drawn. PV suggests the converse as well – that alongside points of predicted variation, there may be points which will not vary. That’s inherent in the API to some extent – every shape must be capable of drawing itself with no further information (the Draw method has no parameters) – but it could also be extended to non-virtual aspects. For example, we could decide that every shape has one (and only one) colour, which will not change during its lifetime. That can be implemented in the Shape class itself – with no predicted variation, there’s no need of polymorphism.
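A minimal sketch of that split in Java (hypothetical names; `draw` returns a string so the sketch is self-checking): the predicted variation is an abstract method, while the non-varying colour lives in the base class with no polymorphism at all.

```java
abstract class Shape {
    // No predicted variation here: fixed for the shape's lifetime.
    private final String colour;

    protected Shape(String colour) { this.colour = colour; }

    // final = "non-virtual": subclasses cannot vary this aspect.
    public final String getColour() { return colour; }

    // The single point of predicted variation: each shape draws itself.
    public abstract String draw();
}

class Circle extends Shape {
    Circle(String colour) { super(colour); }
    public String draw() { return getColour() + " circle"; }
}
```

The design choice here is the interesting part: polymorphism is spent only where variation was predicted, and everywhere else the base class stays plain.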

Of course, the costs of incorrectly predicting variation can be high: if you predict more variation than is actually warranted, you waste effort on over-engineering. If you predict less variation than is required, you usually end up either having to change quite a lot of code (if it’s all under your control) or having to come up with an "extended" interface. There’s the other aspect of shirking responsibility on this predicted variation to some extent, by making some parts "optional" – that’s like saying, "We know implementations will vary here in an incompatible way, but we’re not going to try to deal with it in the API. Good luck!" This usually arises in collection APIs, around mutating operations which may or may not be valid (based on whether the collection is mutable or not).
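Java’s collection API is a concrete instance of that "optional operation" escape hatch: `List` declares `add`, but immutable implementations are documented to throw `UnsupportedOperationException` at runtime. A small probe makes the "good luck!" case visible:

```java
import java.util.List;

class OptionalOps {
    // Attempts the "optional" mutating operation and reports whether
    // this particular List implementation actually supports it.
    static boolean tryAdd(List<String> list, String item) {
        try {
            list.add(item);
            return true;
        } catch (UnsupportedOperationException e) {
            return false; // the API shirked the responsibility; the caller pays
        }
    }
}
```

A mutable `ArrayList` accepts the add; the unmodifiable list returned by `List.of()` refuses it – and nothing in the static type distinguishes the two.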

Not only is PV easy to understand – it’s easy to remember for its comedy value, at least if you’re a fan of The Hitchhiker’s Guide to the Galaxy. Remember Vroomfondel and Majikthise, the philosophers who invaded Cruxwan University just as Deep Thought was about to announce the answer to Life, the Universe, and Everything? Even though they were arguing with programmers, it sounds like they were actually the ones with software engineering experience:

"I’ll tell you what the problem is mate," said Majikthise, "demarcation, that’s the problem!"

[…]

"That’s right!" shouted Vroomfondel, "we demand rigidly defined areas of doubt and uncertainty!"

That sounds like a pretty good alternative description of Protected Variation to me.

Conclusion

So, that’s what I don’t like about OCP. The name, and the broad description – both of which I believe to be unhelpful, and poorly understood. (While I’ve obviously considered the possibility that I’m the only one who finds it confusing, I’ve heard enough variation in the explanations of it to suggest that I’m really not the only one.)

That sounds like a triviality, but I think it’s rather important. I suspect that OCP has been at least mentioned in passing in thousands if not tens of thousands of user groups and conferences. The purpose of such gatherings is largely for communication of ideas – and when a sound idea is poorly expressed, an opportunity is wasted. I suspect that any time Uncle Bob has personally presented it in detail, the idea has sunk in appropriately – possibly after some initial confusion about the terminology. But what about all the misinterpretations and "glancing blows" where OCP is only mentioned as a good thing that clearly everyone wants to adhere to, with no explanation beyond the obscure ones described in part one above? How many times did they shed more confusion than light?

I believe more people are familiar with Uncle Bob’s work on OCP than Bertrand Meyer’s. Further, I suspect that if Bertrand Meyer hadn’t already introduced the name and brief description, Uncle Bob may well have come up with far more descriptive ones himself, and the world would have been a better place. Fortunately, we do have a better name and description for a concept which is at least very closely related. (I’m not going to claim PV and OCP are identical, but close enough for a lot of uses.)

Ultimately, words matter – particularly when it comes to single-sentence descriptions which act as soundbites; shorthand for communicating a complex idea. It’s not about whether the more complex idea can be understood after carefully reading thorough explanations. It’s about whether the shorthand conveys the essence of the idea in a clear way. On that front, I believe the open-closed principle fails – which is why I’d love to see it retired in favour of more accessible ones.

Note for new readers

I suspect this post may end up being read more widely than most of my blog entries. If you’re going to leave a comment, please be aware that the CAPTCHA doesn’t work on Chrome. I’m aware of this, but can’t fix it myself. If you right-click on the broken image and select "open in new tab" you should get a working image. Apologies for the inconvenience.

43 thoughts on “The Open-Closed Principle, in review”

  1. You most certainly aren’t alone in this.

    I’ve seen many variations to the OCP explanation, and even just seeing Bob’s various explanations over the years does not really make it better. The only thing that really makes sense to me is this explanation: “What it means is that you should strive to get your code into a position such that, when behavior changes in expected ways, you don’t have to make sweeping changes to all the modules of the system. Ideally, you will be able to add the new behavior by adding new code, and changing little or no old code.”

    I wasn’t familiar with the term Protected Variation, but now that I am, I find it a clear and concise explanation of an idea. Thanks for that :)


  2. Unfortunately OCP is often understood as “sealed classes” and “composite pattern” … therefore sealing should be awesome and should be done every time. I definitely believe methods should be sealed by default, to prevent modification, but classes should not be sealed, as that prevents extension.


  3. I think you got “protected” wrong in the paragraph about “predicted” variation, or is there something fundamentally basic here I’m not grasping?


  4. Doesn’t it all just boil down to the proper use of abstractions, as discussed way back when Dijkstra introduced the notion of “level of abstraction”? (“The Structure of the ‘THE’-Multiprogramming System”, Dijkstra 1968)


    Later elaborated on in great detail by e.g. Liskov and others … (A design methodology for reliable software systems, Liskov 1972)
    http://dl.acm.org/citation.cfm?id=1480018

    For me personally there is only one principle I keep in the back of my mind constantly, which is “Don’t Repeat Yourself”. I often feel the other principles are just different instantiations with the same end goal.


  5. @Lasse: I don’t think so – it’s just that even the *better* principle is still slightly badly named. It really is called “protected variation” even though the description only talks about “predicting variation”. I believe the point is that it’s protected by being predicted. But I did wonder about that myself this morning, and rushed to check that I hadn’t misnamed it everywhere.


  6. Great article Jon. I’m a fan of the SOLID principles too and have used the acronym as much as anyone in design talks.

    However, I think that the OCP (or the better articulated PV) is one of those that should be used as a beacon to move toward, but not necessarily a hard and fast rule that can’t be broken. Even if someone wanted to be a purist about enforcing and following this principle to a ‘T’, it would be almost impossible, based on some good comments you made:

    “Of course, the costs of incorrectly predicting variation can be high: if you predict more variation than is actually warranted, you waste effort on over-engineering. If you predict less variation than is required, you usually end up either having to change quite a lot of code (if it’s all under your control) or having to come up with an “extended” interface.”

    ‘Prediction’, or better yet ‘educated guesses’ based on the information one has at the time of creation, is not enough to support this principle perfectly for the full lifetime of a software API.

    The other thing that makes me uneasy with this principle, if abused or misinterpreted, would be someone dictating that a class could not change and only inheritance could be used for extension. Imagine what that code might look like after 5 years and dozens of enhancements. If the interface can ‘realistically’ change and the client can adhere to the new changes (in the case where we control both ends, as is often true, rather than some unchangeable public API), then I support breaking this principle rather than building up a string of inherited classes (as looked at from the use case at the start of this paragraph).

    By the way, it’s really cool that you (the guru of SO, C#, etc.) admitted not understanding this principle and blogged about it. Expand on Liskov next ;) +1 sir!


  7. I think you don’t do Meyer justice here. You cannot understand his comments without understanding the nature of Eiffel, and in particular its difference from most statically-typed programming languages.

    Personally, I find the “L” highly suspect, as subtype-based substitution actually fails miserably in a world of duck-typing.


  8. @RobG: That may well be the case – but if that’s so, it surely reinforces my argument that the general description of something which is particularly tailored towards Eiffel should *not* be applied to other languages as if it’s likely to be equally valid.


  9. Clearly it had to be called the “Open Closed Principle” to provide the “O” in “SOLID”.

    “SPLID” just doesn’t have that same marketable ring to it.


  10. I believe Uncle Bob once said (on a Hanselman podcast) that if your code could comply with OCP 100%, then introducing a new feature would only require creating new code, not modifying existing code.

    For example, using the Strategy pattern (a new class for each new strategy) can be more open for extension than using enums (searching the code for all the switch/if-else statements – that’s pure modification). They are usually used to solve the same problem, but the consequences are different.
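The commenter’s contrast can be sketched in Java (hypothetical names – a discount calculation stands in for whatever behaviour varies): with a strategy interface, new behaviour arrives as a new class, whereas an enum would force edits to every switch over it.

```java
// The strategy interface: the point where variation is allowed in.
interface DiscountStrategy {
    double apply(double price);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class HalfPrice implements DiscountStrategy {
    public double apply(double price) { return price / 2; }
}

class Checkout {
    // Closed against new kinds of discount: a new strategy class
    // plugs in without this code being touched.
    static double total(double price, DiscountStrategy strategy) {
        return strategy.apply(price);
    }
}
```

The enum alternative would put a `switch (discountKind)` inside `total`, and every new kind would mean reopening and modifying that method – the "pure modification" the comment describes.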


  11. This principle seems like something that could only ever have come up in academia.

    But then I’m not a big fan of inheritance to begin with – it’s probably the most overused design pattern around and can easily lead to horrible code and interesting bugs down the line (Java being the perfect example of problems due to gratuitous inheritance [hello Properties]) – and this seems to pretty much encourage that behavior.


  12. I’m going to stick up for Meyer. The Wikipedia description of his principle couldn’t be worse. From Object-Oriented Software Construction (2nd edition):

    “Open-Closed principle
    Modules should be both open and closed.

    The contradiction between the two terms is only apparent as they correspond to goals
    of a different nature:

    • A module is said to be open if it is still available for extension. For example, it should
    be possible to expand its set of operations or add fields to its data structures.

    • A module is said to be closed if it is available for use by other modules. This assumes
    that the module has been given a well-defined, stable description (its interface in the
    sense of information hiding). At the implementation level, closure for a module also
    implies that you may compile it, perhaps store it in a library, and make it available
    for others (its clients) to use. In the case of a design or specification module, closing
    a module simply means having it approved by management, adding it to the project’s
    official repository of accepted software items (often called the project baseline), and
    publishing its interface for the benefit of other module authors.”

    The book’s first edition is from 1988, a darker age when object-oriented languages and techniques were far from common. So it makes sense to put his principle in context. He describes a set of five criteria, rules and principles of modular design methodology and proposes OO programming as a paradigm to approach it. So with his “version” of the principle he is basically making a stand for OO languages (specifically inheritance and polymorphism) as tools for modular design, and by the way proposing Eiffel. And contrary to what Wikipedia says, Meyer’s definition doesn’t advocate implementation inheritance; he just shows it as a possible solution while explaining the principle in the book.

    Meyer’s “closed” definition is just too vague, while I find Uncle Bob’s just nonsensical, with nothing to do with the original. I agree that words matter, even more when you are teaching, and Uncle Bob should know that better than anyone, as the agile evangelist he is. It’s funny that he looks surprised as to why people “complain” when he gives an ambiguous definition which needs ten pages or an hour of listening before it makes sense.


  13. @ruben: It’s good to hear that Meyer’s version is better than the picture painted by Wikipedia. I wouldn’t just say his definition of “closed” is too vague though – I’d say it’s still a long way off what people normally understand by the word “closed”.

    One day I’ll see if I can get hold of the original book (last time I looked it was ridiculously expensive, but maybe there’s an ebook somewhere to buy)…


  14. I agree, it’s also unintuitive.

    Yeah, the book is quite expensive. Probably because it’s considered mostly an academic text these days and priced as such. If you are only interested in an ebook version, the physical book comes with a full HTML edition. So maybe you can find a cheap second hand copy with the CD included.


  15. Well, for me, the simple and easy example would be the Jetty server in Java. If I use it, by default I get little behaviour. I can serve static files via a handler, and a few other things that don’t do much.

    But I can easily extend Jetty by adding something that implements the Servlet interface. Jetty is open for extension in the sense that I can make it do pretty much anything if I throw servlets at it. But it is also closed. Except for maintainers, nobody uses Jetty by editing its source code. OK, sure, you can fork it or send a pull request, but 99.99% of the time you won’t change Jetty’s source code while working with Jetty.

    Jetty itself is closed for modification (you don’t touch its source code), yet it is most definitely open for extension, because no two Jetty deployments in the world do the same job.

    ——–

    For a reverse example, I’m going to talk about a (now discontinued) framework in PHP for auto-generating HTML forms, called ForGe (Form Generator).

    At some point in our project, the customer asked if we could add a short description to the form elements. Sadly, ForGe did not support this scenario. There was no “custom renderer” I could implement to add the description. We had to modify the source code of ForGe itself to add that behavior, and at that point we were no longer able to download updates to the library.

    ForGe was not open enough for extension, and as such, we weren’t able to tailor it to our needs while keeping its source code closed.

    ——

    OCP is most important when you have different modules/libraries/gems/whatever that aren’t developed by the same people. As a developer of library B, you should be able to use library A without constantly begging the developers of A to change this and that so that you can use it. In essence:

    – A should be made open, so that it can be used by B without modification (and also by C and D).
    – B should treat A as closed (you don’t touch it directly), so that you can benefit from its updates and bug fixes. Touching the internals of a dependency you use is a big no-no (unless you want to maintain your fork forever).

    IMHO, it’s not something that you should think of mostly at the class level (though it still makes sense there). It’s something that makes sense at the library or layer level. If my domain logic uses some piece of infrastructure logic, I should be able to use that infrastructure class without modifying it. There should be enough of an opening for me to insert the desired behavior.

    It can be by inheritance, but not only: adding parameters to a library’s methods is also a way to “open” it to extension. So is passing a callback, or listening to events raised by an object.
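The callback route the commenter mentions can be sketched without inheritance at all (hypothetical names – `Renderer` stands in for the library code, and the client injects the behaviour the ForGe example was missing):

```java
import java.util.function.UnaryOperator;

// "Library" code: its source stays closed, but it exposes one
// extension point in the form of a callback the client supplies.
class Renderer {
    private final UnaryOperator<String> decorate;

    Renderer(UnaryOperator<String> decorate) { this.decorate = decorate; }

    String render(String field) {
        // The client's behaviour slips through here without the
        // library being forked or edited.
        return "<label>" + decorate.apply(field) + "</label>";
    }
}
```

A client wanting the "short description" feature just passes `field -> field + " (required)"` when constructing the renderer, leaving the library untouched.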

    ——

    So in essence: an interface open enough that the user’s intentions can slip through, and internals closed enough that the user doesn’t have to edit them when they want more behavior.


  16. In a way I sympathise with you because I know how you feel: over the years I asked myself many if not all of the questions you are asking yourself about the OCP.

    But on the other hand I think you have not tried hard enough. Faced with doubts, my reaction was to seek out and read as many definitions/explanations of the OCP as I could find. I was delighted when I ran into Larman’s article.

    How important is it to you to understand what the OCP means? Meyer formulated the OCP. Robert Martin reformulated it. Don’t you think you should read their books?

    Are you familiar with patterns like Expose Your Ignorance, Read Constantly and Study the Classics from Apprenticeship Patterns (http://ofps.oreilly.com/titles/9780596518387/construct_your_curriculum.html)?

    In Getting a SOLID start https://sites.google.com/site/unclebobconsultingllc/getting-a-solid-start, Martin says:

    “the principles are definitively described in two books: Agile Software Development: Principles, Patterns, and Practices, and Agile Principles Patterns, and Practices in C#. ”

    Think you can understand the principle without reading one of these books?

    ASD PPP is a must read. It is pretty high up on most software development reading lists, e.g. http://www.noop.nl/2012/08/top-100-agile-books-edition-2012.html.

    Can’t afford books? Surely you have a friend (of a friend maybe?) who owns a copy? If not, why not go to a library?

    Even if you read the book, that is not enough. As Martin says in Getting a SOLID start:

    “‘There is no royal road to Geometry’ Euclid once said to a King who wanted the short version. Don’t expect to skim through the papers, or thumb through the books, and come out with any real knowledge. If you want to learn these principles well enough to be able to apply them, then you have to study them. The books are full of coded examples of principles done right and wrong. Work through those examples, and follow the reasoning carefully. This is not easy, but it is rewarding.”

    You stress “the importance of careful choice of terminology at all levels”. Do you think Martin does not try hard to convey what he means? He has been working at explaining the OCP for years. He is still at it: Clean Code, Episode 10 – The Open-Closed Principle (http://www.cleancoders.com/codecast/clean-code-episode-10/show). See his clean coders videos. He does the best he can.

    Of course as more and more people have a go at explaining it over the years, better definitions emerge. Why don’t you have a go? You seem to have the requisite intellect and stamina.

    Thank you for posting this, I found it useful.


  17. Jon, you said: “the description of Meyer’s approach to ‘make a change by extending a class’ feels like a ghastly abuse of inheritance to me”

    Meyer is fully aware that this is a bit of a hack, in fact he says that “one way to describe the open-closed principle and the consequent OO techniques, … is to think of them as ORGANIZED HACKING…instead of a normal hack, in which A is polluted with ‘if (that_special_case) then’…the organized form of hacking enables us to cater to the variants… without affecting the consistency of the original”

    Meyer does tell developers that:

    (1) If you have control over original s/w and can rewrite it so that it will address the needs of several kinds of clients, at no extra complication, …you should do so

    (2) The OCP principle and associated techniques are intended for the adaptation of healthy modules: neither OCP nor redefinition in inheritance is a way to address design flaws, let alone bugs. If there is something wrong with a module you should fix it…not leave the original alone and try to correct the problem in the derived module.


  18. Jon, you said: “There’s a reasonable example of a function which is asked to draw a collection of shapes: the bad code is aware of all the types of shape available, and handles each one separately. The good code uses an abstraction where each shape (Circle, Square) knows how to draw itself and inherits from a common base class (Shape). Great stuff… but what’s that got to do with what was described above? How are the concepts of ‘open’ and ‘closed’ clarified?”

    Does the following help at all?:

    The bad code (version one of function DrawAllShapes) knows all the types of shape available, so to get the function to handle a new shape, you have to modify the function: you have to change it so that it knows about the new type. So while the bad code is open to extension, i.e. it can be enhanced so that it handles a new type, it is NOT closed against the addition of new shapes, because doing so requires you to open up the function and change it.

    The good code (the second version of function DrawAllShapes) however does not know all the types of shape available. It only knows about the abstraction that is the Shape interface. To get the good code to handle a new shape, you do NOT need to open up the function and modify it. All you need to do is add new code, by writing a new class that implements the shape interface and that represents the new type of shape.

    So the good code is both open for extension i.e. it can be enhanced so that it handles a new type, and closed against the addition of new types of shape, because you don’t need to open it up and change it: it stays closed, but by adding new code (the new class) you extend its behaviour.
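    To make the contrast concrete, here’s a minimal Java sketch of the two versions. The class and method names are mine, not the paper’s exact code, and “drawing” is simulated by returning a string so the difference stays visible:

```java
import java.util.List;

interface Shape { String draw(); }
class Circle implements Shape { public String draw() { return "circle"; } }
class Square implements Shape { public String draw() { return "square"; } }

public class DrawAllShapesDemo {
    // "Bad" version: knows every concrete shape, so adding a Triangle
    // forces this method to be opened up and modified.
    static String drawAllShapesBad(List<Object> shapes) {
        StringBuilder sb = new StringBuilder();
        for (Object s : shapes) {
            if (s instanceof Circle) sb.append("circle ");
            else if (s instanceof Square) sb.append("square ");
            // adding Triangle => yet another branch here
        }
        return sb.toString().trim();
    }

    // "Good" version: depends only on the Shape abstraction. A new shape
    // is purely new code (a new class); this method stays closed.
    static String drawAllShapes(List<Shape> shapes) {
        StringBuilder sb = new StringBuilder();
        for (Shape s : shapes) sb.append(s.draw()).append(' ');
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        List<Shape> shapes = List.of(new Circle(), new Square());
        System.out.println(drawAllShapes(shapes));  // circle square
    }
}
```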


  19. Jon, you said: “Most of the principles seem pretty reasonable to me – but I’ve never “got” the open-closed principle”

    That’s odd, since according to Kirk Knoernschild the other principles are derived from the OCP.

    E.g. #1: “We can think of LSP as an extension to OCP. In order to take advantage of LSP, we must adhere to OCP because violations of LSP are violations of OCP, but not vice versa.”

    E.g. #2: “DIP tells us how we can adhere to OCP… if OCP is the desired end, DIP is the means through which we achieve that end… in order to adhere to OCP, we must first take advantage of DIP.”
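    A hypothetical Java sketch of the first claim, showing how an LSP violation drags a client back into type-checking, which in turn costs it OCP (the bird example and all names here are mine, for illustration only):

```java
// LSP/OCP relation: a subtype that breaks its supertype's contract
// forces clients to special-case it, reopening them to modification.
interface Bird { String fly(); }

class Sparrow implements Bird {
    public String fly() { return "flap"; }
}

// LSP violation: Penguin is-a Bird by type, but cannot honour the contract.
class Penguin implements Bird {
    public String fly() { throw new UnsupportedOperationException(); }
}

public class LspOcpDemo {
    // Because Penguin breaks the Bird contract, this client must know about
    // the concrete type -- and must be *modified* for every such case.
    static String describe(Bird b) {
        if (b instanceof Penguin) return "waddle";  // special case: OCP lost
        return b.fly();
    }

    public static void main(String[] args) {
        System.out.println(describe(new Sparrow()) + " " + describe(new Penguin()));
        // flap waddle
    }
}
```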


  20. @Philip: Thanks for the detailed comments. I don’t think this is the best medium to have a full debate about them (Discourse, perhaps?).

    I still need to read through your comments more carefully (I’ve only skimmed so far) but one thing that nothing seems to change is that the broad statement of OCP itself is confusing, and uses words (particularly “open” and “closed”) in a way which is counter to their normal uses.

    As such, I wouldn’t *want* to write my own explanation of OCP – I’d rather start again from scratch than try to put another set of details behind a phrasing which I feel is fatally flawed to begin with.


  21. Jon, you said: “Likewise the meaning of “open for extension” seems unusual to me. I’d argue that the intuitive meaning is “can be extended” – where the use of the term “extended” certainly nods towards inheritance, even if it’s not the intended meaning. If the idea is “we can make the module behave differently” – as Uncle Bob’s description suggests – then “open for extension” is a very odd way of expressing that idea.”

    Could it be that the reason you strongly associate the word extension/extending with inheritance is because:
    (1) In Meyer’s formulation of the OCP, implementation inheritance plays a key role
    (2) languages like Java and C# use the keyword ‘extends’ to signify implementation inheritance?

    Let me elaborate on these two points.

    (1) While implementation inheritance is central to Meyer’s formulation of the OCP, in Robert Martin’s formulation, the role of implementation inheritance is almost completely eliminated in favour of interface inheritance, in the same sense that the Gang of Four’s design patterns are largely about replacing implementation inheritance with interface inheritance.

    In ASDPPP, Martin says that the Template Method and Strategy patterns are the most common ways of satisfying the OCP. Template Method is less desirable because it is one of the few patterns that use implementation inheritance (in Martin’s words in ASDPPP: it only conforms to half of the DIP principle). Strategy is more desirable because, like the majority of patterns, it uses interface inheritance (it fully conforms to the DIP principle).
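    As a rough Java illustration of that contrast (the class names are my own invention, not Martin’s examples): Template Method extends behaviour by overriding a step in a base class, while Strategy extends it by passing in an object behind an interface.

```java
// Template Method: extension via implementation inheritance --
// the subclass overrides one step of an algorithm fixed in the base class.
abstract class ReportTemplate {
    final String render() { return "[" + body() + "]"; }  // fixed skeleton
    abstract String body();                               // the varying step
}

class SalesReport extends ReportTemplate {
    String body() { return "sales"; }
}

// Strategy: extension via interface inheritance -- the varying step is an
// object passed in, so the context depends only on an abstraction.
interface BodyStrategy { String body(); }

class Report {
    private final BodyStrategy strategy;
    Report(BodyStrategy strategy) { this.strategy = strategy; }
    String render() { return "[" + strategy.body() + "]"; }
}

public class OcpPatternsDemo {
    public static void main(String[] args) {
        System.out.println(new SalesReport().render());              // [sales]
        System.out.println(new Report(() -> "inventory").render());  // [inventory]
    }
}
```

    In both cases `render()` stays closed while `body()` is the point of extension; the difference is only in the inheritance mechanism used to supply the variation.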

    Just in case you have not read the ‘Class versus Interface Inheritance’ section in the GoF’s Design Patterns book, here is my summary:

    * It’s important to understand the difference between an object’s class and its type.
    * An object’s class defines how the object is implemented (state and operation-implementation).
    * An object’s type only refers to its interface – the set of requests to which it can respond.
    * An object can have many types, and objects of different classes can have the same type.
    * Because a class defines the operations it can perform, it also defines the object’s type.
    * Languages like C++ and Eiffel use classes to specify both an object’s type and its implementation. Smalltalk programs don’t declare the types of variables.
    * It’s also important to understand the difference between class inheritance and interface inheritance (or subtyping). Class inheritance defines an object’s implementation in terms of another object’s implementation. In short, it’s a mechanism for code and representation sharing. In contrast, interface inheritance (or subtyping) describes when an object can be used in place of another.
    * It’s easy to confuse these two concepts, because many languages don’t support the distinction between interface and implementation inheritance. In languages like C++ and Eiffel, inheritance means both interface and implementation inheritance. […] In Smalltalk, inheritance means just implementation inheritance.
    * Although most programming languages don’t support the distinction between interface and implementation inheritance, people make the distinction in practice.
    * Many of the design patterns depend on this distinction.
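    A small Java sketch of the class/type distinction summarised above (all names are mine; `Persistable` is a made-up interface used to keep the example self-contained):

```java
interface Drawable { String draw(); }
interface Persistable { String save(); }  // hypothetical interface

// One class, two types: a Disc can be used wherever a Drawable
// or a Persistable is expected.
class Disc implements Drawable, Persistable {
    public String draw() { return "disc"; }
    public String save() { return "{disc}"; }
}

// A different class sharing the same Drawable type.
class Box implements Drawable {
    public String draw() { return "box"; }
}

public class TypeVsClassDemo {
    public static void main(String[] args) {
        Drawable d1 = new Disc();
        Drawable d2 = new Box();     // different classes, same type
        Persistable p = new Disc();  // the same class viewed through another type
        System.out.println(d1.draw() + " " + d2.draw() + " " + p.save());
    }
}
```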

    In Robert Martin’s formulation of the OCP, we use mainly interface inheritance (but not exclusively – there are some cases in which implementation inheritance is used) to extend the behaviour of a class by adding new code rather than modifying the class.

    (2) In Dale Skrien’s great book, “OO Design Using Java” (btw, don’t be put off by the word ‘Java’), he says that one way to measure the quality of a design is to analyse software with regard to the following properties, which he calls “the criteria for elegant software”:

    USABILITY – is it easy for the client to use?
    COMPLETENESS – does it satisfy all the client’s needs?
    ROBUSTNESS – will it deal with unusual situations gracefully and avoid crashing?
    EFFICIENCY – will it perform the necessary computations in a reasonable amount of time and using a reasonable amount of memory and other resources?
    SCALABILITY – will it still perform correctly and efficiently when the problems grow in size by several orders of magnitude?
    READABILITY – is it easy for another programmer to read and understand the design and code?
    REUSABILITY – can it be reused in another completely different setting?
    SIMPLICITY – is the design and/or the implementation unnecessarily complex?
    MAINTAINABILITY – can defects be found and fixed easily without adding new defects?
    EXTENSIBILITY – can it easily be enhanced or restricted by adding new features or removing old features without breaking code?

    Is it unreasonable in your mind for Martin’s formulation of the OCP to use the word ‘extends’ in the above sense?


  22. @Philip: As I said before, I don’t think the comments here are a good medium for actual discussion. I don’t think it would be fruitful for me to try to reply to you point by point here.

    I suggest you either start a Discourse conversation (and include a reference here) or let me know if we’re likely to be at the same conference at any point – a conversation is likely to be much more fruitful.

    However, I’d ask you to keep bearing in mind that we’re talking about a principle which many, many people will hear about *without* having read everything else that the authors have written. If a summary gives the wrong impression until you’ve read not just the single paper in front of you but also umpteen other books, I’d still say it’s a bad summary.


  23. Really worth reading Object-Oriented Software Construction and understanding Bertrand’s ideas on OOP. Back in the very early 90s I learnt OO using Eiffel. As I see it, the ideas of OCP marry up with Design by Contract for making robust modular software (as well as other principles). The idea is that you make ecosystems of objects (which Eiffel was really big on) with well-defined, thoughtfully designed contracts; you can then build on these objects to make more complex systems. It’s essential that these objects are well designed and stable, and hence conform to OCP, because you want to be able to share them as a base for many kinds of software systems, so other people can build robust software with whatever you come up with. So the code should be closed to change, except when there were bugs; in general, if you have well-designed contracts, your software should be quite robust. It’s like an engineering approach to software, where you’d end up making well-designed objects that are **eternally** useful, made to a particular specification and tolerance, usable as parts in a bigger machine.

    The reality is that, from greenfield, designing software like this was quite difficult. Implementation inheritance was a popular way to “extend”. Messes were made. So what was needed was a lot more practical advice on OO designs that worked well; the principles were too broad (though guiding). Then, slowly, practical advice appeared over the years: the GoF patterns were a major practical help, then eventually unit testing as a practical way of having executable coding contracts, and Refactoring as a way of morphing software without having to get it all correct with engineering precision up front. Then, with this base, a massive melting pot of ideas came along in the 2000s.

    However, the core principle of OCP lives on….. just having various postmodern reinterpretations :)


  24. You mention that in the Shape example there is only predicted variation. I would argue, though, that the example also implicitly predicts something that won’t vary: the operations that can be done on a shape (e.g. “Draw”). If the point/location of variation was not “what shapes there are” but rather “what operations can be done on a shape”, then the DrawAllShapes with a switch statement is clearly the better solution. It is OPEN for EXTENSION [of functionality] because new operations can be added (e.g. a “measure area” function) and CLOSED for MODIFICATION because new shapes can’t be added easily. In other words, it’s not just a case of “more” or “less” variability, but also where you put it. In this case one type of variability sacrifices another type. You have to decide at design time whether the program is more likely to expand with new types of shapes, or new operations on a fixed set of shape types.
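    A hypothetical Java sketch of this trade-off (the enum/switch design; all names are mine). Adding a new operation such as `area` is just new code; adding a new shape kind would mean modifying every switch:

```java
// In an enum/switch design, operations are the easy axis of extension
// and shape kinds are the hard one -- the reverse of the polymorphic design.
enum ShapeKind { CIRCLE, SQUARE }

class ShapeData {
    final ShapeKind kind;
    final double size;  // radius or side length
    ShapeData(ShapeKind kind, double size) { this.kind = kind; this.size = size; }
}

public class ExpressionProblemDemo {
    static String draw(ShapeData s) {
        switch (s.kind) {
            case CIRCLE: return "circle";
            case SQUARE: return "square";
            default: throw new IllegalArgumentException();
        }
    }

    // A new operation, added without touching draw() -- but both switches
    // would have to change if a TRIANGLE kind were ever added.
    static double area(ShapeData s) {
        switch (s.kind) {
            case CIRCLE: return Math.PI * s.size * s.size;
            case SQUARE: return s.size * s.size;
            default: throw new IllegalArgumentException();
        }
    }

    public static void main(String[] args) {
        ShapeData sq = new ShapeData(ShapeKind.SQUARE, 3);
        System.out.println(draw(sq) + " area=" + area(sq));  // square area=9.0
    }
}
```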


  25. I’m delighted that you never fully “got” OCP either.

    It did seem that an awful lot of people were repeating this mantra (cough… cutting and pasting from the original source) without really getting it themselves either!

    The PV (protected variation) explication makes much more sense to me.


  26. The following post by Kent Beck should help you understand the Open-Closed Principle: http://www.threeriversinstitute.org/blog/?p=242

    Here are some excerpts:

    The Open/Closed Principle always bothered me. I agree with it philosophically–good designs make it possible to add functionality without disturbing existing features–but in my experience there are no permanently closed abstractions.

    Ordinary design is the kind we do every day–extract a method, extract an object, move a bit of logic or state closer to where it belongs. The open/closed principle pretty much works. Superclasses sit. APIs sit. New features fit the design without much change.
    Then comes a feature that really doesn’t fit the design. The fundamental elements and relationships have to be twisted to implement it. The fact (feature) just doesn’t fit the theory (design).

    When the need for design change becomes apparent, software designers can isolate the part of the system that is to change from the part that will remain stable.

    Revolutionizing designs without first isolating change puts a bigger burden on the designer. The challenges of revolutionizing a design while working in safe steps is generally enough for me without adding the challenge of keeping track of changes all over the code base. Isolating change is low-risk and fairly mechanical, while giving me an overview of areas of the system that are about to be overhauled.

    Revolutionary design violates the open/closed principle, almost by definition. The feature you want to add needs new elements and relationships that don’t fit with the existing design. The basic abstractions need to be reopened to modification. Once the feature is added, they can close again. Further development can use the new elements and relationships as vocabulary for further extension. This extension takes place against a background of ordinary responsive design.


  27. @Haacked “Clearly it had to be called the “Open Closed Principle” to provide the “O” in “SOLID”.”

    Or really, how is “Liskov substitution principle” not clear to you?


  28. My programmer is trying to convince me to move to .net from PHP.

    I have always disliked the idea because of the costs.
    But he’s trying nonetheless. I’ve been using WordPress on a number of websites for about a year and am anxious about switching to another platform. I have heard excellent things about blogengine.net. Is there a way I can import all my wordpress content into it?

    Any kind of help would be greatly appreciated!


  29. @Godwin: Your question is inappropriate for this blog. You should ask it (with more details) on Stack Overflow. (There’s no reason why there has to be any cost associated with .NET, by the way.)


  30. I’m glad I’m not alone with the confusion on the precise meaning of OCP. It’s the only principle of SOLID that didn’t make sense to me.


  31. Maybe I am misunderstanding, but OCP seems very obvious. Here is an analogy: you may add a satellite radio to the dashboard of a car, but you can’t change the steering wheel. But if you are trying to turn the car into a helicopter, all bets are off, and you end up with a control stick (and lots of retraining to use it). Is it more complex than that?
    I think the best principle is the Lanai: make things simple and cheap enough that when you inevitably need to start all over, you used the minimal amount of effort last time, and will again this time. (Traditional Hawaiian homes are built so that when they blow down, they don’t hurt anyone and can be rebuilt quickly.)


  32. FWIW, I’ve rewritten the Meyer section in the Wikipedia article (https://en.wikipedia.org/wiki/Open/closed_principle#Meyer.27s_open.2Fclosed_principle), with quotes. (Finally an excuse to actually crack the copy of Object-Oriented Software Construction I picked up on the cheap a few years ago!)

    “A module will be said to be open if it is still available for extension. For example, it should be possible to add fields to the data structures it contains, or new elements to the set of functions it performs.

    “A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding).”

    From what I can tell, Meyer’s definitions of “open” and “closed” are actually quite sensible, but the conclusion he draws (only add functionality by extending classes with implementation inheritance) seems to be a relic of the days before dynamic loading.

    Martin’s definitions are still gobbledygook, though.


  33. Hey – thanks for taking the time to tackle this, most appreciated. Just a little feedback, seems you’ve managed to commit the ‘introduce abstract terms’ offence you detest, by introducing the terms ‘constrictive’ and ‘wooly’:

    “…create a stable interface which is neither too constrictive for implementations nor too woolly for clients”


  34. KISS?

    If something needs this much energy to explain, refactor it!

    What is it trying to achieve? That we should be able to deliver new requirements without unexpectedly breaking calling programs (and, as a side-effect, without sacrificing the ability to refactor at will – the most important thing we developers engage in).

    So, we maintain integration tests with as near to 100% coverage as possible. And where we need to make a breaking change without being able to update all clients, we version our APIs.
