title: Powers of Two
url: https://www.benrady.com/2017/12/powers-of-two.html
hash_url: 820db5ac496cd40f5f1aa7df5c50b557
<p>There are a few "best practices" that I've been able to do without, that I previously thought were absolutely essential. I would think that's a function of a few different factors, but I'm curious about one in particular.</p>
<p>I've worked on large and small teams before, but I'm currently working closely with just one other developer. I thought I'd try to list all the things that we don't have to do anymore, to see if there's any sort of process/value inflection point when you have <em>exactly two</em> developers.</p>
<p>For context, let me explain what we've been doing. It's not revolutionary, or even particularly interesting. If you squint it looks like XP.</p>
<p><strong>We sit next to our users.</strong> It gets loud sometimes, but it's the best way to stay in touch and understand what's going on.</p>
<p><strong>We pair for about 6 hours a day, every day.</strong> Everything that's on the critical path is worked on in a pair. Always. Our goal is always to get the thing we're working on to production as fast as we responsibly can, and the best way I've found to do that is with a pair.</p>
<p><strong>We practice TDD. </strong>Our tests run fast (usually 1 second or less, for the whole suite) and we run them automatically on every change as we type. We generally test everything like this, except shell scripts, because we've never found a testing approach for scripts that we liked.</p>
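<p>To make that concrete, here's a minimal sketch (mine, not the author's; the function under test is invented for illustration) of the kind of fast, focused test described above. Each test exercises a few lines of pure logic with no I/O, which is what lets a whole suite finish in under a second on every keystroke.</p>

```python
# A hedged sketch of a fast, focused unit test suite.
# parse_threshold is a hypothetical function, not from the post.
import unittest


def parse_threshold(raw, default=100):
    """Parse an alert threshold from a config string, with a fallback."""
    if raw is None or raw.strip() == "":
        return default
    return int(raw)


class ParseThresholdTest(unittest.TestCase):
    # Each test touches only a few lines of code and runs in
    # microseconds, so the whole suite stays well under a second.
    def test_empty_string_falls_back_to_default(self):
        self.assertEqual(parse_threshold(""), 100)

    def test_none_falls_back_to_default(self):
        self.assertEqual(parse_threshold(None), 100)

    def test_parses_integer_string(self):
        self.assertEqual(parse_threshold("42"), 42)


if __name__ == "__main__":
    unittest.main()
```

<p>Paired with a file watcher that reruns the suite on save, tests like these give the continuous feedback loop the post describes.</p>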
<p><strong>We refactor absolutely mercilessly. </strong>Every line of code has to have a purpose that relates directly back to value to the company. If you want to know what it is you can generally comment it out and see which test (exactly one test) fails. We don't go back and change things for the sake of changing them, though. Refactoring is never a standalone task, it's always done as part of adding new functionality. Our customers aren't aware if/when we refactor and they don't care, because it never impedes delivery.</p>
<p><strong>We deploy first, and often. </strong>Step one in starting a new project is usually to deploy it. I find that figuring out how you're going to do that shapes the rest of the decisions you'll make. And every time we've made the system better we go to production, even if it's just one line of code. We have a test environment that's a reasonable mirror of our prod environment (including data) and we generally deploy there first.</p>
<p>Given all that, here's what we <em>haven't</em> been doing:</p>
<p><strong>No formal backlog. </strong>We have three states for new features. <em>Now</em>, <em>next</em>, and <em>probably never</em>. Whatever we're working on now is the most valuable thing we can think of. Whatever's next is the next most valuable thing. When we pull new work, we ask "What's next?" and discuss. If someone comes to us with an idea, we ask "Is this more valuable than what we were planning to do next?" If not, it's usually forgotten, because by the time we finish that, there's something else that's newer and better. But if it comes up again, maybe it'll make the cut.</p>
<p><strong>No project managers/analysts.</strong> Our mentality on delivering software is that it's like running across a lake: if you keep moving fast, you'll keep moving. We assume that the values of our features are power-law distributed. There are a couple of things that really matter a lot (now and next), and everything else probably doesn't. We understand a lot about what is valuable to the company, and so the responsibility for finding the right tech&lt;=&gt;business fit rests best with us.</p>
<p><strong>No estimates.</strong> We have <em>one</em> estimate: "That's too big." Other than that, we just get started and deliver incrementally. If something takes longer than a few days to deliver an increment, we regroup and make sure we're doing it right. We've only had a couple of instances where we needed to do something strategically that couldn't be broken up and took more than a few weeks.</p>
<p><strong>No separate ops team.</strong> I get in a little earlier in the day and make sure nothing broke overnight. My coworker stays a little later, and tends to handle stuff that must be done after hours. We split overnight tasks as they come up. Anything that happens during the day, we both handle, or we split the pair temporarily and one person keeps coding.</p>
<p><strong>No defect tracking.</strong> We fix bugs immediately. They're always the first priority, usually interrupting whatever we're doing. Or if a bug is not worth fixing, we change the alerting to reflect that. We have a pretty good monitoring system so our alerts are generally actionable and trustworthy. If you get an email there's a good chance you need to do something about it (fix it or silence it), and that happens right away.</p>
<div>
<p><strong>No slow tests.</strong> All of our tests are fast tests. They run in a few milliseconds each and they generally test only a few lines of code at once. We try to avoid overlapping code with lots of different tests. It's a smell that you have too many branches in your code, and it makes refactoring difficult.</p>
</div>
<p><strong>No integration tests. </strong>We use our test environment to explore the software and look for fast tests that we missed. We're firmly convinced this is something that should not be automated in any way...that's what the fast tests are for. If we have concerns about integration points we generally build those checks directly into the software and make it fail fast on deployment.</p>
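<p>One way to read "build those checks directly into the software and make it fail fast on deployment": a startup routine that probes each integration point and exits non-zero if any probe fails, so a broken deploy dies loudly at boot instead of limping along. The check names and stubs below are my own illustrative assumptions, not code from the post.</p>

```python
# A hedged sketch of fail-fast startup checks for integration points.
# The specific checks (database, message queue) are hypothetical stubs.
import sys


def check_database():
    # In a real service this would open and close a connection;
    # stubbed to succeed here for illustration.
    return True


def check_message_queue():
    # Likewise, a real check would ping the broker.
    return True


STARTUP_CHECKS = {
    "database": check_database,
    "message_queue": check_message_queue,
}


def fail_fast():
    """Run every startup check; exit non-zero on the first failure."""
    for name, check in STARTUP_CHECKS.items():
        if not check():
            print(f"startup check failed: {name}", file=sys.stderr)
            sys.exit(1)
    print("all startup checks passed")


if __name__ == "__main__":
    fail_fast()
```

<p>Run at process start (or as a deploy smoke step), this turns an integration problem into an immediate, visible deployment failure rather than a latent runtime one.</p>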
<p><strong>No CI/Build server.</strong> The master branch is both dev and production. We also use git as our deployment system (the old Heroku style), and so you're prevented from deploying without integrating first...which is rarely an issue anyway because we're always pairing.</p>
<p><strong>No code reviews.</strong> Since we're pairing all the time, we both know everything there is to know about the code.</p>
<p><strong>No formal documentation.</strong> Again, we have pairing, and tests, and well-written code that we can both read. We generally fully automate ops tasks, which serves as its own form of documentation. And as long as we can search through email and chat to fill in the rest, it hasn't been an issue.</p>
<p>Obviously, a lot of this works because of the context that we're in. But I can't help but wonder if there's something more to it than just the context. Does having a team of two in an otherwise large organization let us skip a lot of otherwise necessary practices, or does it all just round down to "smaller teams are more efficient?"</p>