I grew up writing a lot of code. I programmed games and web apps and editors and all kinds of stuff. And often I would come to a point where I would see a pattern and realize: “Wow, I can simplify this; if I just generalize this class or build this framework, it would be so much easier to accomplish what I want to do!” It happened all the time. I would write game engines when I was trying to write games, or forum software when I was trying to start a forum. To integrate with Lua I wrote a reflection abstraction on top of C++ (“which could potentially be used for all kinds of stuff, like serializing objects, for instance”). I loved writing beautiful, generic code that could easily be reused in the future.
I kept doing this for a long time. I kept generalizing stuff; I kept trying to make things generic. And eventually I learned, the hard way, the truth about generalization: it is (almost always) a waste of time.
Wait, what? Isn’t generalization the holy grail of programming? You unify a bunch of stuff into one thing, thereby reducing code duplication and exposing a unified interface that will be easy for anyone to use. It makes so much sense, right?
Almost always, the answer to that will be: nope. At first, I only saw this in the empirical data. Most of the time, when I tried to generalize, I would just end up losing time, even though it was supposed to save me time. It didn’t make any sense to me, but it kept happening over and over. Slowly, I started to realize why.
Generalization is, in fact, prediction. We look at some things we have and we predict that any current and future entities in that group will look and behave sufficiently like what we have now. We predict that we can write something that will cater to all, or most, of their future needs. And that makes perfect sense, if you just disregard one simple fact of life: humans are awful at predicting the future!
(To make it worse, sometimes we look at just one entity in a potential group and generalize from that. Sometimes we even look at zero entities!)
Programming is not math. Programming is chaotic; it takes turns and twists, and you never end up quite where you thought you would. Even when you are the only developer on a project, you still can’t predict where things will go. Needless to say, this becomes even more complex when you add more people to the mix, or external stakeholders with changing requirements. Programming is hard enough when we are solving the problems at hand; when we start solving future problems too, it becomes incredibly complicated.
Now, I learned all this long before I joined the industry. I assumed that everyone in the industry would surely know it already, and that it was just me being slow to learn. But when I joined the industry I quickly discovered that most of us do not have a clear understanding of this at all. Even expert programmers, really smart people who have been working in the industry for years, keep falling into this trap.
Just today I had a discussion with a coworker and friend — one of the sharpest people I know — who was trying to figure out how to generalize a component so that other people could also use it in the future. I urged him, with every argument I could come up with, not to make it easy for other people to use. Make it specific. When we make it generic, we start losing control over it. Other people start using it. Their requirements will look identical to ours in the beginning, but over time they will diverge, and eventually we end up with a complicated mess that doesn’t quite fit anyone.
There’s a simple rule of thumb: resist the urges that tell you to generalize; ignore them, put them off. Put them off until it becomes so blatantly obvious that this thing really, really needs to be “A Thing” that it makes no sense not to generalize. Then wait some more.
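As a tiny sketch of what “putting it off” looks like (the functions here are invented for illustration, not taken from any real codebase): the two functions below share an obvious shape, and the urge says to unify them. Leaving them separate costs a couple of duplicated lines; merging them costs flexibility the moment one needs localization and the other needs CSV output.

```python
# Two formatters that happen to look similar today. It is tempting to
# merge them into one generic format_stats(label, value, unit) -- but
# left separate, each is free to diverge when its requirements change.
def format_player_stats(name: str, kills: int) -> str:
    return f"{name}: {kills} kills"

def format_match_stats(map_name: str, rounds: int) -> str:
    return f"{map_name}: {rounds} rounds"

print(format_player_stats("Ada", 7))    # Ada: 7 kills
print(format_match_stats("dust2", 30))  # dust2: 30 rounds
```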
That rule applies especially when what you are doing can easily be replicated by someone else. Don’t share things that can be easily replicated. If it’s easy to replicate, then the overhead of communicating and keeping everything in sync is going to be a lot more work than actually writing the code.
Or, from a more technical standpoint: the whole idea of generalizing a task is that you spend a little more time now and save a lot later. That would be fine and dandy if we always predicted future tasks perfectly, but we don’t, so in reality it usually looks more like this: do task A, and add the time for generalizing it; then do task B, and add the time to fix the generalization you did in the first place. A lot of the time, fixing the generalization just means throwing it out, because you realize it won’t work at all for the B case. In those cases the generalization was purely added work with no benefit, and those cases are extremely common.
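To make that concrete (the tasks and names below are invented for this sketch): suppose task A is saving settings to JSON, and we generalize it into a serializer hierarchy. Task B then turns out to need streamed, length-prefixed binary output, which the abstraction cannot express, so the time spent on it is simply lost.

```python
import json
import struct

# Task A: save game settings as JSON. Instead of just doing that, we
# "invest" in a generic layer we predict will pay off later.
class Serializer:
    """Generic base class, written before any second use case exists."""
    def serialize(self, obj: dict) -> str:
        raise NotImplementedError

class JsonSerializer(Serializer):
    def serialize(self, obj: dict) -> str:
        return json.dumps(obj)

# Task B arrives: stream large replay files as length-prefixed binary
# chunks. It needs incremental writes to a file, not a string return
# value -- the Serializer interface simply doesn't fit, so the
# "general" layer gets thrown away and task B is written directly.
def write_replay_chunk(f, payload: bytes) -> None:
    f.write(struct.pack("<I", len(payload)))  # 4-byte length prefix
    f.write(payload)
```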
Now, to be clear, I am not arguing against all forms of generalization. Without generalization we wouldn’t have the web (imagine every website implementing its own network protocol), we wouldn’t have standard libraries and compilers (on top of which a lot can be built), and tons of other stuff. But unless you are writing the next HTTP, or a compiler, or some other huge piece of infrastructure, I would advise being on guard against generalization (and most of us are not building the future of infrastructure).
I also want to make it clear that I’m not advocating against writing libraries. Libraries are great; we all depend on them for so many different things. But one of the reasons they are so usable is that a good library doesn’t generalize. Instead, it specializes. A well-written library tries to solve one problem (or a small number of problems) really, really well, with clearly defined scope and limitations. A poorly written one tries to solve all future problems of all future customers and ends up solving none.
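Here is a hypothetical example of that kind of scoping (the function is invented for illustration): it solves exactly one small problem and documents what it refuses to do, instead of growing options for every imaginable caller.

```python
# A deliberately narrow "library" function: one problem, clearly
# scoped. It parses 6-digit RGB hex colors like "#ff00aa", nothing else.
def parse_hex_color(s: str) -> tuple[int, int, int]:
    """Parse '#rrggbb' into an (r, g, b) tuple of ints in 0..255.

    Out of scope by design: named colors, alpha channels, 3-digit
    shorthand. Callers who need those should reach for another tool.
    """
    if len(s) != 7 or not s.startswith("#"):
        raise ValueError(f"expected '#rrggbb', got {s!r}")
    r, g, b = (int(s[i:i + 2], 16) for i in range(1, 7, 2))
    return (r, g, b)

assert parse_hex_color("#ff00aa") == (255, 0, 170)
```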
To wrap this all up: how did all this affect me and my programming? I’d say that realizing that generalization is almost always a waste of time was by far my largest productivity jump as a programmer. Before it, I spent endless hours doing things that in the end always proved to be a waste of time. After it, I found it much easier to focus on what I actually needed to do. I stopped trying to predict the future and spent that time moving towards the future instead; once there, I would see for myself why what I had been tempted to build wouldn’t have worked. It also dramatically changed what I take pleasure in: where I used to enjoy generalizing things, I now only feel the time running through my fingers when I see people trying to generalize. I became much more end-product oriented, and I take more pleasure in finishing something than in perfecting something unfinished.
Friends. Colleagues. Strangers. All of us in the community. I plead with you all: please, please stop trying to predict the future, and start working with the present instead.
Thank you!
(On a side note: generalization isn’t just a problem in the programming world, and there’s a ton of new(-ish) literature that deals with it. I would urge anyone interested in the topic to read Daniel Kahneman’s “Thinking, Fast and Slow” and Nassim Taleb’s “Antifragile”.)