I just discovered ‘The TDD That Can be Spoken Is Not the Eternal TDD’ by Charles Hoffman and I was startled by the insight there. Most of what Chuck wrote about resonated strongly with me. I have traveled the TDD path for 10 years now. TDD sounded great to me; I had done QA for years and still had the horrors fresh in my mind. I wanted to squash every bug before it made it into a build. Unfortunately, my initial predisposition was to write simple tests that generated simple code. Really simple code that contained few abstractions. Really simple tests that were intertwined with the class internals. And once you have a few thousand itty bitty tests in place, you sure don’t feel like you’re moving any faster when you have to change a hundred of them to refactor your code to use very important domain concept X. I was also the only (and most junior) person on the team trying to TDD, as opposed to just throwing a few integration tests on at the end. So when my tests broke, people got grumpy.
I eventually got better and found it is a highly intuitive practice, because it is a design practice and not a testing practice. It can only lead you to good, simple, workable code with practice and a constant awareness of the design directions it pushes you in. If you’re writing tests for the sake of code coverage, you’re going to create brittle tests. If you’re writing tests to help you design your abstractions, then it’s just a more rigorous way of writing top-down code. You get to see the code from the caller’s perspective first, which encourages you to make the caller’s life easier, rather than mirroring a class’s internals at its outside edge.
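A minimal sketch of what "seeing the code from the caller's perspective first" looks like in practice. The `Cart` class, its methods, and the test are all hypothetical illustrations, not from the article: the test is written as if the class already existed, so the API is shaped by what the caller wants to say, not by the class's internals.

```python
class Cart:
    """The simplest implementation that satisfies the test below."""

    def __init__(self):
        self._prices = []  # internal detail the test never touches

    def add(self, name, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)


def test_total_reflects_added_items():
    # This reads like client code; it would be written first,
    # before Cart exists, and drives out the three-method API.
    cart = Cart()
    cart.add("tea", 3.50)
    cart.add("mug", 8.00)
    assert cart.total() == 11.50  # no peeking at _prices
```

Because the assertion only goes through `add` and `total`, renaming or restructuring `_prices` later breaks nothing.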
I think I understand why TDD is presented as dogma: the initial resistance seems so high that those who champion it don’t know another way to get it in the door. And like pairing, it might be one of those things that requires a suspension of disbelief to get. But TDD practiced as dogma is wasteful. I’ve been on teams where you were shamed if there were methods without tests. Now, if you’re following TDD, the third step is refactor, and a pretty useful refactoring is Extract Method. I have a design, I test-drove it, and now I’m improving it. The code is by definition tested and test-driven. Would writing tests for that method in any way change the design? My experience has been that dogmatically making a unit test file mirror the code file is terrible for code maintenance. It’s great for navigability and discoverability, but I don’t value those things nearly as much as the ability to change my code with as little ceremony and friction as possible.
There are so many things that rarely provide a net payoff in energy expenditure when you try to test them dogmatically. I believe there’s a finite amount of time and energy to do anything, and those tradeoffs require experience and intuition to identify. So we start building this implicit list of exclusions as TDDers, but my list is probably not the same as yours. Which is fine, because I do believe there is a certain individualistic style to design, but it makes it hard for others to learn. “Why aren’t we testing this?” “We just spent 20 minutes testing something that looked just like it!” It’s hard to articulate all the lessons learned and hours spent fighting the design or the testing tool. So, Chuck is right: it’s foolish to say you’re either doing TDD or you’re wrong. There are different levels of “doing TDD” and many people find them less efficient than thinking hard about the problem and solving it elegantly that way. There are countless examples of programs that were well designed without TDD. The one thing I do think TDD gets right is that it’s a gateway to good design. I don’t have to spend 3 weeks at the whiteboard guessing what the right design is. I can slice off this one part and get it in place in an hour. When I’m tired or distracted I don’t have to load the whole system in my head to make progress on it. Let me get this test to pass. And then this one. And then this one. Well, how about that, I just knocked out a major piece of functionality before lunch. TDD fits the way I like to work on infinite tasks; if you pick up your room for 10 minutes every day, you don’t have to spend an entire weekend cleaning it. But on the other hand, my wife does amazing work when she lets it all pile up at the end. That pressure to perform produces amazing results, usually better than my slow and steady approach. But I’m ok with that; I don’t want to dump all of my energy into integrating with an external web service.
I do disagree a bit with one statement from the article: “I don’t care for the idea that you cannot be considered a professional developer if you don’t practice TDD (and by whose standard/definition of TDD anyway?).” I can’t speak for Uncle Bob, but my core measure of professionalism is that the code works and is easy to understand and change. I view professionalism like any other trade: if the walls you built fall down when I hang a picture, you’re an unprofessional carpenter. You might have had bad materials, schedule pressure, or lazy subcontractors, but I’m not hiring you for that job again. I think people in software are a little too quick to absolve themselves of responsibility, and that’s the core insight I take from Uncle Bob’s “professionalism” stance. Now, if you can design systems that work well and embrace change, then I think you’re a professional. If you can do that without TDD, as fast as I can with TDD, please bring me up to your level.
I think it is very fair to say that we who are paid to develop software may not be professionals yet. That’s the key attraction of the craftsmanship model to me, it explicitly points out that we’re on a journey. That’s hard to swallow if you’re a genius out of college, or a team lead on a massive project, but most of us need more help, more training, and more practice. And I think it’s true that not every project needs to be crafted by a team of experienced masters. If my garage needs a workbench, I can certainly build that myself (inefficiently and with noticeable flaws). It’s the right decision for that project.
I was impressed with Chuck’s insights into the top-down style of TDD. It is hard to learn how to do top-down TDD. You’ve trained for years to think about all those fiddly bits that are happening at the lowest level. It’s really difficult to have the confidence that you can work all that out later and just pretend for now. One thing that helped me was shifting my perspective from “I’m testing that this code works” to “I am building the client of this code”. In that way, every test is a top-down test. It is hard to listen to your tests and realize that your awesome design was in reality overly coupled, or depended on some magic that’s hard to test around. To get all these awesome benefits, you do have to buy into the idea that hard-to-test is the same as inadequately designed, and I’m not sure how you make that argument without resorting to name-calling. There’s plenty of evidence that this is a good way to build software, and there’s plenty of evidence that perfectly TDD’d software is a myth.
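A minimal sketch of the "pretend for now" move in top-down TDD, using hypothetical names. `Greeter` is the object being designed; the lookup it depends on doesn't exist yet, so the test hands it a throwaway stub and the real collaborator's interface gets discovered from the top down.

```python
class Greeter:
    """The high-level object under design."""

    def __init__(self, user_gateway):
        # The collaborator's shape (one name_of method) falls out of
        # what Greeter needs, not out of a database schema.
        self._users = user_gateway

    def greet(self, user_id):
        return "Hello, %s!" % self._users.name_of(user_id)


class StubGateway:
    """Stands in for the real lookup we'll drive out later."""

    def name_of(self, user_id):
        return "Ada"


def test_greet_uses_the_gateway():
    greeter = Greeter(StubGateway())
    assert greeter.greet(42) == "Hello, Ada!"
```

The test is also the first client of `Greeter`, which is the perspective shift the paragraph describes.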
I’d also like to +1 the idea that teams new to TDD should start with acceptance tests. It does seem counter-intuitive if you’re familiar with the testing pyramid; we tend to put a lot of unit tests at the bottom and a few acceptance tests at the top. Who starts a pyramid at the top? From personal experience, those slow, inexact, finicky tests have saved my butt dozens of times. If you’re working on a system with no tests or few tests, those can be the scaffolding that lets you build the rest of your automated test suite. When refactoring legacy code, those tests are the ones that point out you didn’t understand or test all the side effects as well as you thought. This is the way all tests show their true value: not by verifying the code is correct, but by showing you where the pain is.