Note: this is a geeky coding post, so please stop reading unless you care about this sort of thing.
I remember my own mistakes of the past. I used to lean on one trick in object-oriented programming (still reading?): inheritance. There were a lot of reasons for this, and I blame Microsoft - not in a "Windows is shit" kind of way, but because they made heavy use of inheritance in their examples and automatically generated code, so it became the common tool for any job.
When you have a hammer, everything's a nail, right?
Well, time's moved on for me, and I've learned of two other techniques:
- HAS-A - sometimes you can use an instance of another object to DO what you want, rather than making everything become something else just to borrow some behaviour (there's a sketch of this just after the list)
- State vs type - you don't have to provide the answer to certain parameters by overriding the getters for that data; you can just store the data in the object which needs to be configured.
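To make the HAS-A point concrete, here's a minimal Java sketch - the CsvFormatter and Report classes are invented for illustration, not taken from any real code. The report doesn't need to BE a formatter just to reuse its behaviour; it holds one and delegates.

```java
// Composition ("HAS-A") instead of inheritance ("IS-A").
class CsvFormatter {
    String format(String[] values) {
        return String.join(",", values);
    }
}

// Report doesn't become a formatter just to borrow its behaviour;
// it HAS one and delegates to it.
class Report {
    private final CsvFormatter formatter = new CsvFormatter();

    String render(String[] rowValues) {
        return formatter.format(rowValues);
    }
}
```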
There are more techniques, of course... but these two are the ones which seem to be forgotten. If everything has to be a type, then you end up with types whose only real job is to return constant data from overridden getters - which could be replaced by storing that data in the object itself and reading it back with an ordinary getter.
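Here's a sketch of that state-vs-type idea (the Connection names are made up): the first version needs a new subclass for every different value, the second just stores the value.

```java
// The inheritance version: a subclass exists only to override a getter
// and hand back a constant.
abstract class Connection {
    abstract int getTimeoutSeconds();
}

class SlowConnection extends Connection {
    @Override
    int getTimeoutSeconds() { return 60; }
}

// The state version: one class, and the "answer" is just data held by the object.
class ConfigurableConnection {
    private final int timeoutSeconds;

    ConfigurableConnection(int timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    int getTimeoutSeconds() { return timeoutSeconds; }
}
```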
Mind you, there are times when you want to do things the opposite way around. If you're using patterns where there are tons of objects and you want them to be virtually stateless, then the type can dictate the notional state, since a single type identifier can imply a lot of data.
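A rough sketch of that opposite case, using Java enums - TileType and Tile are invented for illustration. The type identifier carries the data, so the many small objects that point at it stay virtually stateless.

```java
// The type identifier implies a bundle of data, so the objects that
// refer to it don't need to carry that state themselves.
enum TileType {
    GRASS(1, true),
    WATER(3, false),
    ROCK(5, false);

    final int movementCost;
    final boolean buildable;

    TileType(int movementCost, boolean buildable) {
        this.movementCost = movementCost;
        this.buildable = buildable;
    }
}

// A tile only needs to know its type; the "notional state" comes with it.
class Tile {
    final TileType type;

    Tile(TileType type) { this.type = type; }
}
```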
Let's not even mention the Liskov substitution principle. Ok. Let's mention it. A is a subtype of B if you can use an A in any place where you can use a B. That's what inheritance is supposed to mean, but it kind of assumes that you haven't made A undo some of B's behaviour. So if B always does a thing, and A makes it not happen, then you've got something which defies the idea of additive inheritance. It's sometimes a necessary evil.
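A tiny made-up illustration of "A undoes B's behaviour": FrozenBag compiles and substitutes for Bag syntactically, but it breaks the promise that Bag makes to its callers.

```java
import java.util.ArrayList;
import java.util.List;

// The base type promises that add() stores an item.
class Bag {
    private final List<String> items = new ArrayList<>();

    void add(String item) { items.add(item); }
    int size() { return items.size(); }
}

// This compiles and substitutes for Bag syntactically, but it undoes the
// behaviour callers of Bag are entitled to rely on - a Liskov violation.
class FrozenBag extends Bag {
    @Override
    void add(String item) {
        throw new UnsupportedOperationException("frozen");
    }
}
```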
Right now, I've got some classes on my screen where there's a template with all manner of possible data items. Somehow, rather than implementing a single type where these items can be turned on and off, the implementer has created a type for each possible permutation, and they all duplicate bits of each other. In this situation there's really a single type with variants of how to configure it... multiple types don't help.
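Since I can't show the real classes, here's a hypothetical sketch of what "one type, configured differently" could look like - the permutations become data rather than subclasses.

```java
import java.util.Set;

// Instead of a subclass for every permutation (TemplateWithHeaderAndFooter,
// TemplateWithHeaderOnly, ...), one type whose optional items are data.
class Template {
    enum Item { HEADER, FOOTER, SIDEBAR, SUMMARY }

    private final Set<Item> enabledItems;

    Template(Set<Item> enabledItems) {
        this.enabledItems = Set.copyOf(enabledItems);
    }

    boolean has(Item item) {
        return enabledItems.contains(item);
    }
}

// Usage: each old "permutation" type collapses to a one-line configuration.
// Template headerOnly = new Template(Set.of(Template.Item.HEADER));
```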
Ockham's razor, in Latin, reads Pluralitas non est ponenda sine necessitate - roughly, entities should not be multiplied unnecessarily. So today's job is having fewer duplications, entities, types and nails being hit by hideous hammers.
2 Comments:
And let's not even talk about multiple inheritance, which my hazy memory tells me you were a huge fan of... :-)
Why not do it in style!?
I could probably justify that design choice in a job interview situation, but I'm not sure I'd do it that way if I had my time over.
A beer and a whiteboard discussion, methinks.