A place to cache linked articles (think custom and personal wayback machine)

title: The Myth of Developer Productivity
url: http://www.dev9.com/article/2015/1/the-myth-of-developer-productivity
hash_url: 1d5da5d83b72ec3e5392d3376c5a1e20
<p><em>How's that for a click-bait title?</em></p>
<p>If there is a holy grail (or white whale) of the technology industry, especially from a management standpoint, it's the measurement of developer productivity. In fact, there is a very common phrase, "you can't plan if you can't measure." Measurement works so well in many other industries that involve humans -- building construction, manufacturing, road work. We are able to get rather accurate estimates for both cost and completion date, so why not software?</p>
<p><em>If you're a manager, you're going to read a lot of discouraging information here. However, if you make it to the end, I promise we'll give you tools and tips to gain efficiencies. All is not lost.</em></p>
<h1 id="whymeasure">Why Measure?</h1>
<p>We as developers love to play along with this. So much of what we work with is data-driven feedback. We can analyze with <a href="https://www.ej-technologies.com/products/jprofiler/features.html">profiling</a>, <a href="http://mojo.codehaus.org/javancss-maven-plugin/">complexity</a>, <a href="http://en.wikipedia.org/wiki/Conversion_rate_optimization">conversion rates</a>, <a href="http://en.wikipedia.org/wiki/Conversion_funnel">funnel metrics</a>, <a href="http://www.patrick-wied.at/static/heatmapjs/">heat maps</a>, <a href="http://blog.eyequant.com/2014/01/15/the-3-most-surprising-insights-from-a-200-website-eye-tracking-study/">eye-tracking</a>, <a href="http://en.wikipedia.org/wiki/A/B_testing">a/b testing</a>, <a href="http://en.wikipedia.org/wiki/Fractional_factorial_design">fractional factorial</a> <a href="http://en.wikipedia.org/wiki/Multivariate_testing_in_marketing">multivariate analysis</a>, etc. All of these things give us data upon which we can prioritize future efforts. It only makes sense that we should be able to measure ourselves.</p>
<h1 id="measuringdevelopers">Measuring Developers</h1>
<p>Measuring and managing developer productivity, however, has consistently eluded us. So many of the tools we use are designed to increase developer productivity: XP, TDD, Agile, Scrum, etc. There were academic papers analyzing software project failures/overruns <a href="https://lab.cs.ru.nl/laquso/images/a/a8/E0582.pdf">in the 80s</a>. This isn't a new phenomenon by any means. We also famously hear of IT failures in the news.</p>
<p>Those headline failures are just a few cases. There are likely dozens or hundreds of errors on this scale every year, and likely hundreds to thousands of projects in the &lt;= $1m range. A lot of this is due to a lack of good testing. We at Dev9 have frequently espoused the benefits of automated testing, and it has real benefits.</p>
<p>However, quite a few others are caused by planning and estimation that missed the mark. There are estimates that say IT organizations will spend over $1t per year on their IT initiatives. Notice it's trillion, not billion. A trillion dollars. Given this extremely high cost, anybody who found a way to reliably gain efficiencies of even 1% would save ten billion ($10,000,000,000) dollars. That's a lot of zeroes.</p>
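<p>To make the zeroes concrete, here's the arithmetic as a trivial Python sketch (the $1t figure is just the estimate cited above):</p>
<pre><code>annual_it_spend = 1_000_000_000_000   # ~$1 trillion per year, the estimate cited above
efficiency_gain = 0.01                # a 1% improvement

savings = annual_it_spend * efficiency_gain
print(f"${savings:,.0f}")             # $10,000,000,000 -- ten billion dollars
</code></pre>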
<h2 id="the10xdeveloper">The 10x Developer</h2>
<p>There is a theory floating around, and largely backed up by data, that <a href="http://programmers.stackexchange.com/questions/179616/a-good-programmer-can-be-as-10x-times-more-productive-than-a-mediocre-one">the best developers among us are 10x more efficient than the worst ones</a>. Given that developer salaries do not reflect this order-of-magnitude difference (when was the last time you met a senior dev making $800k/yr?), it's obviously a bargain for companies if they can find one of the 10x developers and hire them at a rate comparable to a 1x or 2x person. These studies even gave birth to analysis showing that "...[T]he top 20 percent of the people produced about 50 percent of the output (Augustine 1979)." If you were a manager looking to cut costs, you'd want to get rid of the 80% who produced only 50% of the output, and hire only the kind of people who are in that top 20%.</p>
<h2 id="highperformers">High Performers</h2>
<p>However, that quote I gave you is not the full quote. It actually is, "This degree of variation isn't unique to software. A study by Norm Augustine found that in a variety of professions--writing, football, invention, police work, and other occupations--the top 20 percent of the people produced about 50 percent of the output, whether the output is touchdowns, patents, solved cases, or software (Augustine 1979)."</p>
<p>This problem is not a software-specific problem. Any field that requires human decision-making is subject to variation. Some people are going to be naturally talented in the field. Some have the perfect personality for the job. Some people are voracious readers, others never try to learn after school. Some consistently push their bounds, while others are content to be competent. Some people's brains just work differently. Some people's bodies just work differently. It doesn't take a genius to see that some football/soccer/hockey players are dramatically better than others, even though they train the same amount of time. Why would software development be any different? Why should it?</p>
<h1 id="traditionalmeasures">Traditional Measures</h1>
<p>Before we continue onward, let's look at some of the ways the industry has tried to quantify development activities, and why they fall short for measuring productivity. The tl;dr of this section is that any metric you come up with to measure developers will be gamed.</p>
<h2 id="hoursworked">Hours Worked</h2>
<p>This is one of the most obvious ones: butt-in-seat time. If you worked 10 hours instead of 8 hours, you should get 125% of the work done. That's just math. Time and time again, you'll see studies proving that <em>this just does not work for anyone</em>. In fact, running hot on hours is a great way to <em>decrease</em> productivity.</p>
<p>Time and time again, we see proof that more than 40 hours <em>necessarily</em> leads to a drop in productivity, <strong>even for assembly line workers</strong>. Yet, this pervasive attitude of 8-to-6 being a minimum workday continues to chug along.</p>
<p>I was once on a team where the managers were so addicted to tracking hours as a measure of productivity that we started putting meetings, lunches, and bathroom breaks on the board every sprint. Otherwise, we were accused of not working hard enough because our hours didn't exactly add up to 40 or more. This absolutely destroyed the morale of the team. "Don't forget to put your hours in" causes me to involuntarily twitch.</p>
<h2 id="sourcelinesofcodesloc">Source Lines of Code (SLOC)</h2>
<p>Lines of code. What a perfect measure. Even if developers think different and whatnot, we can just track lines of code, and use that to extrapolate.</p>
<p>There are so many problems with this metric that it is actively harmful to use it to judge developers:</p>
<ul>
<li>Developers can just add extra lines of code to pad their numbers</li>
<li>A 200-line solution may be faster or more performant than a 1000-line solution to a problem</li>
<li>Sometimes the solution is to delete code</li>
<li>5000 lines of buggy code is worse than 1000 lines of bug-free code.</li>
<li>Developers copy-paste code instead of refactoring, leading to massive technical debt and poor design, as well as significantly increased bug probability.</li>
</ul>
<p>This is an interesting metric to track in aggregate to get a sense of the size and complexity of the system, but not useful at an individual level.</p>
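<p>A contrived sketch of how easy the metric is to game: both functions below do exactly the same thing, but one of them "produces" several times as many lines of code.</p>
<pre><code>def total_padded(values):
    # The "productive" version: many lines of code for a one-line job.
    total = 0
    for value in values:
        current = value
        increment = current
        total = total + increment
    result = total
    return result


def total_concise(values):
    # Same behavior, one line of body -- and far easier to review.
    return sum(values)
</code></pre>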
<h2 id="bugsclosed">Bugs Closed</h2>
<p>This one is so crazy, Dilbert has a comic on it:</p>
<p><img src="http://i.imgur.com/EvQ0Ung.gif" alt="Dilbert" title=""/></p>
<p>If you do this, you're the pointy-haired boss from Dilbert.</p>
<h2 id="functionpoints">Function Points</h2>
<p><a href="http://en.wikipedia.org/wiki/Function_point">Function points</a> found a small following out in the world. You've probably never heard of them. The methodology is practically impossible for a lay-person to digest. If you want to try to measure function points for your project, then <a href="http://conferences.embarcadero.com/sv/article/32094">give this article a read</a> and figure out how to automate it in your project.</p>
<p>Go ahead, try it. I dare you.</p>
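<p>If you're still curious what a function point count even involves, here is a rough, hedged sketch of an unadjusted count using IFPUG-style complexity weights. The weight table is approximate and the project counts are invented; the real counting rules fill a manual, which is exactly the point.</p>
<pre><code># Approximate IFPUG-style weights for an unadjusted function point count.
WEIGHTS = {
    "external_input":     {"low": 3, "avg": 4,  "high": 6},
    "external_output":    {"low": 4, "avg": 5,  "high": 7},
    "external_inquiry":   {"low": 3, "avg": 4,  "high": 6},
    "internal_file":      {"low": 7, "avg": 10, "high": 15},
    "external_interface": {"low": 5, "avg": 7,  "high": 10},
}

# Hypothetical counts for a small project -- every number here is made up.
counts = [
    ("external_input",     "avg",  12),
    ("external_output",    "low",   8),
    ("external_inquiry",   "avg",   5),
    ("internal_file",      "high",  3),
    ("external_interface", "low",   2),
]

unadjusted_fp = sum(WEIGHTS[kind][complexity] * n for kind, complexity, n in counts)
print(unadjusted_fp)  # 155 unadjusted function points for these made-up counts
</code></pre>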
<h2 id="defectrate">Defect Rate</h2>
<p>The idea of this one is to measure the number of defects each developer produces. This does seem reasonable, and you should probably track it, but here's why it's a bad measure of productivity:</p>
<ul>
<li>It favors bug fixes over feature development.</li>
<li>It discourages developers from tackling larger projects. Would you rather try the "Add a form field to this existing page" project, or the "Implement a real-time log analysis system from scratch" project?</li>
<li>Not all bugs are created equal:
<ul><li><strong>Bug 1:</strong> When somebody uses the "back" button, a bug deletes all customer data on the production website.</li>
<li><strong>Bug 2:</strong> Form fields are not left-aligned.</li>
<li><strong>Bug 3:</strong> If a customer enters dates that span 2 leap years, the duration calculation is off by 1 second.</li></ul></li>
<li>People often mistake features for bugs. Missing requirements are not a bug, but may be filed as such.</li>
<li>There may be multiple bug reports related to 1 bug.</li>
<li>Developers will never touch anybody else's code, and will get very aggressive about protecting their code.</li>
</ul>
<p>Defect rates are interesting, but they're not enough to give you an idea of productivity.</p>
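<p>If you do track defects, here's a small made-up illustration of why a raw count misleads: weight the bugs above by severity (the weights are invented for illustration) and the developer with the most bugs is no longer the one doing the most damage.</p>
<pre><code># Invented severity weights, purely for illustration.
SEVERITY_WEIGHT = {"data-loss": 100, "edge-case": 2, "cosmetic": 1}

dev_a_bugs = ["cosmetic"] * 9     # nine misaligned form fields
dev_b_bugs = ["data-loss"]        # one back-button bug that wipes customer data

def raw_count(bugs):
    return len(bugs)

def weighted_impact(bugs):
    return sum(SEVERITY_WEIGHT[b] for b in bugs)

print(raw_count(dev_a_bugs), raw_count(dev_b_bugs))              # 9 vs 1   -- dev A "looks" worse
print(weighted_impact(dev_a_bugs), weighted_impact(dev_b_bugs))  # 9 vs 100 -- dev B's single bug dominates
</code></pre>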
<h2 id="accuracyofestimation">Accuracy of Estimation</h2>
<p>Estimation, my least favorite activity. I have no problem taking a swing at how long something will take. However, at <em>every single company I've ever worked for, estimates become commitments.</em> If you say "this will take about 3 days," you get in trouble if it takes longer than 3 days. On the other hand, if you finish ahead of schedule, you get praised. This encourages developers to estimate given an absolute worst-case scenario. Like, "neutrino streams from solar flares corrupting random bits on our satellite stream that somehow passed checksum validation but is still corrupted and we wrote that to our hard drive" kind of worst-case scenarios.</p>
<p>Other reasons this metric is a problem:</p>
<ul>
<li>If you estimate in "ideal hours," distractions may turn that 8-hour task into 3 days.</li>
<li>Developers can be overly and inconsistently optimistic with their estimations.</li>
<li>The scope was not adequately defined, or not defined at all.</li>
<li>The customer was asking for something that is impossible, which could only have been discovered at coding time.</li>
</ul>
<p>There is one more reason, bigger than those four combined. Look for the section "Developer Productivity is a Myth."</p>
<h2 id="storypoints">Story Points</h2>
<p>Story points -- we thought we had found the holy grail. Story points were explained as a measure of effort and risk. If we have consistent story points, and figure out how many story points each developer finishes per sprint, then we can extrapolate developer performance. Let's see what happens:</p>
<ol>
<li>If they finished less than they did last sprint, they're chastised. They are again reminded that they committed, no matter what. Even if you had to help with a prod issue, or were in a car accident, or got sick -- you <strong><em>committed</em></strong>. So developers start sandbagging to avoid this.</li>
<li>If they finished exactly right, the managers will think the developers finished early and were sitting idle, or were padding their estimates. This leads to frustration and resentment. Alternatively, a perfect finish might be seen as a state where, if everybody worked a few more hours, we'd see more output.</li>
<li>If they finish with more points than they took on, managers will accuse the developers of sandbagging. Then they're told that they must accept more points next sprint, to take this into account. That, or you have a "level-setting meeting" where everybody re-agrees on what the points represent. This leads to frustration and resentment, not to mention the drop in productivity related to figuring out the new point system.</li>
</ol>
<p>If a manager asks for doubled productivity, that's easy: double the story-point estimate.</p>
<p>Story points also aren't consistent between developers. Even if everybody agrees that it's a 3-point story, based purely on effort and risk, the wall-time delivery will be different depending on who picks it up. One developer who is intimately familiar with that code may be able to finish in 2-3 hours, while a new junior developer may struggle for 1-2 days. This is proof that points are decoupled from productivity, and why they make a bad metric.</p>
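<p>A quick worked example of that decoupling, with made-up numbers: divide the same 3-point story by the wall-clock hours it actually took and "points per hour" swings wildly between people, which is exactly why points can't double as a productivity score.</p>
<pre><code># Hypothetical: the same 3-point story, picked up by two different people.
stories = [
    {"dev": "senior who knows the code", "points": 3, "hours": 3},
    {"dev": "brand-new junior",          "points": 3, "hours": 14},
]

for s in stories:
    print(f"{s['dev']}: {s['points'] / s['hours']:.2f} points/hour")
# senior who knows the code: 1.00 points/hour
# brand-new junior: 0.21 points/hour  -- same story, same points, very different wall time
</code></pre>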
<p><a href="https://www.scrum.org/Forums/aft/590">On the official Scrum forums, practitioners always have to explain why story points are not a measure of productivity</a>. The Scrum Alliance even has a whitepaper called <a href="https://www.scrum.org/Portals/0/Documents/Community%20Work/The-Deadly-Disease-of-the-Focus-Factor.pdf">The Deadly Disease of the Focus Factor</a>, and here is the opening statement of the document:</p>
<blockquote>
<p>To check your organizational health, answer these two questions:</p>
<p>1) Do you estimate work in “ideal” hours?</p>
<p>2) Do you follow up on your estimates, comparing it to how many “real” hours work it actually took to get something done?</p>
<p>If so, you may be in big trouble. You are exhibiting symptoms of the lethal disease of the “focus factor”. This is how the illness progresses:</p>
<p>Speed of development will keep dropping together with quality. Predictability will suffer. Unexpected last moment problems and delays in projects are common. Morale will deteriorate. People will do as they are told, but little more. The best people will quit. If anything gets released it is meager, boring and not meeting customer expectations. As changes in the business environment accelerate, the organization will be having trouble keeping up. Competitors will take away the market and eventually the end is unavoidable.</p>
</blockquote>
<p>So even the people who invented the concept tell you explicitly not to use story points as a measure of developer productivity. So stop it.</p>
<h1 id="developerproductivityisamyth">Developer Productivity is a Myth</h1>
<p>"You can't plan if you can't measure." This is an idea still taught in business school, it's a mantra of many managers, and it's wrong in this context. It assumes everything a developer does is objectively and consistently measurable. As we've shown above, there still doesn't exist a reliable, objective metric of developer productivity. I posit that this problem is unsolved, and will likely remain unsolved.</p>
<p>Just in case you think I'm spouting nonsense, just remember: the smartest minds of Microsoft, Amazon, IBM, Intel, Wall Street, the Bay Area, Seattle, New York, and London still haven't found that magical metric. It is, therefore, a rather safe assumption that the average company also hasn't found it. If you believe you have proven me (or them) wrong, go ahead and publish it. You'll be a wealthy rockstar of the programming universe. People will write books about your life and your brilliance.</p>
<p>We all know that some people are better than others. Developers can identify which developers are better, but there is not a number or ranking system we can come up with, objectively based on output, that consistently and reliably ranks developers. Let's explore why.</p>
<h2 id="adevelopersjob">A Developer's Job</h2>
<p>Most people don't understand what developers do. We clicky-clack on electronic typewriters while drinking Mountain Dew and eating Doritos in the dark, and make the magic blinky boxes show cute cat pictures.</p>
<p>OK, it's not the 90s anymore. Most people really do understand the basics of operating a computer. If you're under 40, there's a good chance your grandparents use Facebook.</p>
<p>So what do we do? Code is the output, but it's not really <em>what we do</em>. If we were just transcribing something into code, that's basically data entry. We're knowledge workers. We take inexact problems and create exact solutions. Imagine if managers were capable of exactly specifying the system they want built. They would have to explain it so finely-grained that it would be programming. That's what we do. We are people who exactly detail how a system works. Our code is the be-all, end-all specification for <em>what the software does</em>. We are people that write specifications, digest knowledge, and solve problems.</p>
<p>Most people are incapable of breaking a problem down to the level required for computer code to solve it. This isn't to say that they can't learn, but it's a skill you must nurture. Imagine a parent (P) trying to teach a kid (K) how to make a grilled cheese sandwich:</p>
<blockquote>
<p>K: How do you make a grilled cheese sandwich?</p>
<p>P: You make a cheese sandwich, then fry it in a pan until it's done.</p>
<p>K: What's cheese?</p>
<p>P: It's a food made from milk.</p>
<p>K: How do they make cheese?</p>
<p>P: Well, they take milk, and they add rennet, then they add flavorings, and maybe age it.</p>
<p>K: What's rennet?</p>
<p>P: It's an enzyme that makes the milk solid.</p>
<p>K: How does it do that?</p>
<p>P: It is a protease enzyme that curdles the casein in milk.</p>
<p>K: How does a nucleophilic residue perform a nucleophilic attack to covalently link the protease to the substrate protein before releasing the first half of the product?</p>
<p>P: Because I said so.</p>
</blockquote>
<p>Imagine the plethora of questions they can keep asking: How do you tell if it's done? What does done mean? How many minutes? What's a minute? Why is a second a second and not something else? How brown is too brown? What kind of bread do you use? How do you make bread? What is bread yeast? What's butter? What's a pan? How do you make a pan? What's a stove? Why does a stove get hot? How does a stove get hot? What happens if you don't have cheese? What happens if you don't have bread? Can you use a microwave? Can you cook it outside if it's really hot? Can you use other cheeses?</p>
<p>So when somebody in the business asks, "can you tell me how many people visited our site yesterday and clicked on the newsletter signup?", it sounds like a simple request. You just take all the people, find the ones who clicked the thing, and count it. But, let's take a dev perspective. How do we identify visitors? Is IP good enough? Do we support IPv6? Do we want to use cookies? Is our cookie policy legally compliant in Europe? Do we have to worry about COPPA? Do we want to de-dupe visitors? How do we track that people clicked on a link? What's the implication of click-stream tracking? Will our infrastructure support that? How important is accuracy? If we lose one click record, does that matter?</p>
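<p>To make that concrete, here is a hypothetical sketch of just one of those decisions -- how to de-dupe visitors. The log format, field names, and the cookie-vs-IP choice are all assumptions for illustration, not anything from the business request.</p>
<pre><code># Hypothetical click-stream records; every field here is an assumption.
events = [
    {"cookie_id": "a1", "ip": "10.0.0.1", "target": "newsletter-signup"},
    {"cookie_id": "a1", "ip": "10.0.0.2", "target": "newsletter-signup"},  # same person, new IP
    {"cookie_id": "b7", "ip": "10.0.0.1", "target": "newsletter-signup"},  # shared office IP
    {"cookie_id": "c3", "ip": "10.0.0.3", "target": "pricing"},
]

def unique_signup_clicks(events, dedupe_key):
    """Count distinct visitors who clicked the signup link, for some definition of 'distinct'."""
    seen = {e[dedupe_key] for e in events if e["target"] == "newsletter-signup"}
    return len(seen)

print(unique_signup_clicks(events, "cookie_id"))  # 2 -- cookies a1 and b7
print(unique_signup_clicks(events, "ip"))         # 2 -- but a *different* two: IPs 10.0.0.1 and 10.0.0.2
# The business asked one question; the developer had to decide which question to actually answer.
</code></pre>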
<p>This is what developers do. For every line of code we write, we are answering all of these questions in excruciating detail.</p>
<p>When you hear developers talk about "abstraction," we are basically answering the "How does electricity get turned into heat?" question for anybody who asks. Then we're answering the "how does a protease enzyme curdle casein?" question. Then we're answering the "how does heat turn bread brown?" question. One of the questions we literally answer is, "How do you turn 1s and 0s into text?" Well, what about character encodings or code pages or multi-byte entities or byte-order markers or little-endianness... you get what I'm saying. A computer is a dumb machine. It can't read our minds, and has no context.</p>
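<p>A tiny example of that "1s and 0s into text" question, just to show there is no single right answer without more context: the same bytes decode to different text depending on the encoding you assume, and a byte-order marker changes the answer again.</p>
<pre><code>data = "café".encode("utf-8")           # b'caf\xc3\xa9'

print(data.decode("utf-8"))             # café   -- correct, if you guessed the encoding right
print(data.decode("latin-1"))           # cafÃ©  -- same bytes, wrong assumption

bom_data = "café".encode("utf-8-sig")   # same text, now prefixed with a byte-order mark
print(bom_data.decode("utf-8"))         # '\ufeffcafé' -- the BOM leaks into the text
print(bom_data.decode("utf-8-sig"))     # café   -- only right if you knew to expect a BOM
</code></pre>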
<p>A good developer is able to take a high-level problem, see the best way to break it down, and create the correct levels of abstraction, all while keeping the code readable and maintainable for other developers. This also explains why some people are 10x performers, and some people get so frustrated with programming that they give up. Some people have curated, or have a natural talent for, thinking at this extreme level of detail. Some people can intuit things that others will never discover -- even if they had all the time in the world. This is the nature of knowledge work.</p>
<h2 id="professionals">Professionals</h2>
<p>This one is likely to be more controversial, but the crux of this issue is that <em>developers are treated like blue-collar workers.</em> Because so many of our beloved processes come from the world of manufacturing, it's very easy to see why developers would be thought of like assembly-line workers. That's why managers try to get consistent productivity. The idea is that if they can just find a way to measure developers, then developers will truly be interchangeable cogs: software would never be late again, it would always be on budget, and it would be exactly what we want. All of the theory they learned about manufacturing and assembly lines in business school would then apply to this field.</p>
<p>This attitude led to massive amounts of off-shore outsourcing, just like manufacturing. These days, we know that offshore development is very difficult to get right, and the end product often contains a lot of bugs and is of poor quality. Many companies are bringing off-shore projects back in house due to these issues -- or using local consulting firms like Dev9.</p>
<h2 id="whataboutthosebuilders">What About Those Builders?</h2>
<p>So what makes building and road work so predictable, when we can't get it right for development? The answer is relatively simple: we're not doing the same job. The labor in those fields has very little input on the decision-making processes. As we explained above, what a developer does all day is make thousands of tiny decisions. By the time these construction projects break ground, the decisions are made, the plans are already in place, there are very exact specifications, and there is little room for ambiguity or disagreement. In addition, the skills required aren't as widely variable. One person can use a pneumatic nailer <em>about</em> as well as any other. One person can operate a dump truck <em>about</em> as well as any other. And even if somebody were a 10x better paver than another, the time needed for the work to cure is a near-constant factor. In addition, the tools and techniques are not as rapidly moving. The basics of foundations, jack studs, jamb studs, nogging, top plates, mudding, and taping really haven't changed. Governments and building codes will dictate many of the decisions, like how far apart studs are center-to-center, or how many electrical outlets go on a wall.</p>
<p>If you want to build an analogy to construction, don't compare developers to the builders -- ask who makes all the decisions: city planners, building-code authors, architects, and engineers, all while dealing with a highly bureaucratic permit system and localities that each have different rules. They make tons of decisions. That is the job developers actually do.</p>
<h2 id="professionalism">Professionalism</h2>
<p>Let's try another thought experiment. If developers really are professionals, let's see how other professions handle this problem.</p>
<h3 id="doctors">Doctors</h3>
<p>Ask a doctor what their job is. Is it talking to people? Is it writing prescriptions? Maybe it's taking inexact problems from imperfect people with imperfect information, then trying to diagnose and fix or ameliorate problems, within the constraints of cost, time, side effects, and a million other things. Sound familiar?</p>
<p>So how do you measure the productivity of doctors? Given their high cost, obviously the field should be rabid for productivity optimization, right? Doctors have something called <a href="http://healthworkscollective.com/linda-ringquist/194581/measuring-physician-productivity-through-rvus">RVUs, or "Relative Value Units,"</a> part of the Resource-Based Relative Value Scale (RBRVS). From that article:</p>
<blockquote>
<p>[...] if your organization is measuring physician productivity based on how many patients a doctor sees per day, it needs to take many relativities into consideration. If you compare a primary care physician with a small practice to an ED physician, you are unlikely to see a day when the PCP sees more patients than the bustling ED physician – but is that really a fair and accurate measure of productivity? However, within your organization, if you stack doctors up against those in like-practice, thinking that you can judge productivity on numbers alone, you run into the trap of complexity of care – even within the same speciality, practices may be saddled with patients in varying degrees of medical complexity – and even that will change over time within the same patient!</p>
</blockquote>
<p>This seems rather familiar.</p>
<h3 id="lawyers">Lawyers</h3>
<p>OK, let's try lawyers. Is their job reading briefs? Is it writing them? Is it consulting with people? Or is it doing all of that, while interpreting imperfect laws with imperfect information based on second- and third-hand reports of a situation, while absorbing all of the decisions of the past?</p>
<p>We are all pretty familiar with the traditional method of measuring the productivity of lawyers: their billable hour counts. Even there, <a href="http://amlawdaily.typepad.com/amlawdaily/2011/04/harper040711.html">people are discounting that metric</a>. The only goal of billable hours is higher partner profits. From that article:</p>
<blockquote>
<p>The relevant output for an attorney shouldn't be total hours spent on tasks, but rather useful work product that meets client needs. Total elapsed time without regard to the quality of the result reveals nothing about a worker's value. More hours devoted to a task can often lead to the opposite of true productivity.</p>
<p>Common sense says that the fourteenth hour of work can't be as valuable as the sixth. Fatigue compromises effectiveness. That's why the Department of Transportation imposes rest periods after interstate truckers' prolonged stints behind the wheel. Logic should dictate that absurdly high billable hours result in compensation penalties.</p>
</blockquote>
<p>Hey, there's something interesting: "useful work product that meets client needs." How does Scrum define success? Value delivered to the business. It says nothing of how you determine that value. There are too many factors. It may even be impossible to directly correlate revenue to features. Therefore, the only measure of success in Scrum is that the product owner is happy.</p>
<h3 id="developers">Developers</h3>
<p>So those two fields, often considered where the best and brightest go, have found that hours and other obvious metrics aren't useful to measure productivity. So, why aren't developers treated the same way? Why do we keep being excluded from the "Professional" list?</p>
<p>I'm not suggesting any solution here. I just don't have one. However, it helps explain things like calling developers <a href="http://blog.markturansky.com/archives/95">resources</a>. From that article:</p>
<blockquote>
<p>Does George Steinbrenner schedule a “short stop resource” or does he get Derek Jeter?</p>
<p>Do the Yankees want homerun hitting A-Rod or a mere “3rd baseman resource”?</p>
<p>Did the Chicago Bulls staff a “shooting guard resource” or did they need Michael Jordan?</p>
<p>Did Apple do well when it had a CEO “resource” or did they achieve the incredible after Steve Jobs came back to lead the company?</p>
<p>Thoughtworkers and creative types are no different. Software engineers are simultaneously creative and logical, and there is an order of magnitude difference between the best and worst programmers (go read Peopleware if you don’t believe this). Because of this difference, estimates have to change based on the “resource,” which means we’re not interchangeable cogs after all.</p>
</blockquote>
<p>So let's assume that measuring -- or more importantly, optimizing -- productivity is nearly impossible. How do you keep your team happy and still satisfy the business need for efficient use of capital? Well, what do these other professionals do? Instead of trying to directly measure productivity, they measure anything that impedes productivity.</p>
<h2 id="measuringimpediments">Measuring Impediments</h2>
<p>This is an easy one. Every time something impedes progress, make a note of what it is, and how long it took to resolve. This is especially good to do for any external dependencies. Any time the work leaves the direct, in-progress control of the developer, track who it goes to, and how long they have it.</p>
<p>You can then use this information to talk with the external groups. For example, if the IT folks are taking 2 weeks to turn around a virtual machine, that's a discussion the dev manager can have with the IT manager. If you have a policy of mandatory code reviews, then track that time. Maybe people are letting those sit around for 3 days, and the manager can set priorities. Maybe there are competing priorities. Either way, the dev manager can show THEIR boss why work items are taking longer than they need to.</p>
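<p>A minimal sketch of what such an impediment log might look like; the fields, categories, and dates below are assumptions, not a prescribed format.</p>
<pre><code>from collections import defaultdict
from datetime import date

# Hypothetical impediment log: who the work was waiting on, and for how long.
impediments = [
    {"blocked_on": "IT (new VM)", "start": date(2015, 1, 5),  "end": date(2015, 1, 19)},
    {"blocked_on": "code review", "start": date(2015, 1, 7),  "end": date(2015, 1, 10)},
    {"blocked_on": "code review", "start": date(2015, 1, 12), "end": date(2015, 1, 15)},
]

waiting_days = defaultdict(int)
for rec in impediments:
    waiting_days[rec["blocked_on"]] += (rec["end"] - rec["start"]).days

for group, total in sorted(waiting_days.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {total} days blocked")
# IT (new VM): 14 days blocked
# code review: 6 days blocked
</code></pre>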
<h2 id="timebeforedelivery">Time Before Delivery</h2>
<p>This is another interesting metric. Track how long it takes from the point the business requests a work item, to when it's available for use in production. Over time, this metric will stabilize. If the units of work are somewhat consistently sized, predictability will be gained.</p>
<h2 id="timeinprogress">Time In Progress</h2>
<p>This one tracks the total amount of wall time taken from when work starts on an item, to when it's delivered. Again, if the units of work are of approximately similar size, predictability will be gained here.</p>
<h2 id="timeinphase">Time In Phase</h2>
<p>This one tracks the wall time in each phase. Remember how I told you to track external organizations? You should be tracking every phase: the design phase, the dev phase, the QA phase, the code review phase, even the deployment phase. By having every phase tracked, you can identify the slower phases, and see if there is any room for optimization.</p>
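<p>All three of these metrics fall out of the same raw data: a timestamp for every phase transition. A rough sketch, with invented field names and dates:</p>
<pre><code>from datetime import date

# Hypothetical phase-transition timestamps for one work item.
item = {
    "requested": date(2015, 1, 2),    # business asked for it
    "dev_start": date(2015, 1, 12),   # work actually began
    "qa_start":  date(2015, 1, 16),
    "released":  date(2015, 1, 20),   # available in production
}

lead_time  = (item["released"] - item["requested"]).days   # time before delivery: 18 days
cycle_time = (item["released"] - item["dev_start"]).days   # time in progress: 8 days
time_in_phase = {
    "waiting": (item["dev_start"] - item["requested"]).days,  # 10 days
    "dev":     (item["qa_start"]  - item["dev_start"]).days,  # 4 days
    "qa":      (item["released"]  - item["qa_start"]).days,   # 4 days
}

print(lead_time, cycle_time, time_in_phase)
</code></pre>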
<h2 id="flowcontrol">Flow Control</h2>
<p>Just like working more than 40 hours leads to less productivity, so does working on too much at once. There's a basic rule of optimization: a process can only move as fast as its slowest stage. The way to get more done is to remove bottlenecks.</p>
<p>If the QA team is only able to test 4 stories weekly, but developers are finishing 10 stories per week, then only 4 features per week are going to be released. Speeding up the developers will have no effect on the number of features delivered per week. You have to get the QA team more throughput. And if the managers didn't know the QA team was the bottleneck before, the pile of work growing in that phase makes it impossible to ignore.</p>
<p>To this end, it makes sense that instead of developers taking on a bunch of items at once, they should focus on one item and drive it to completion. In addition, there should be some limit on the total number of features being worked on at one time. Work that's done beyond what the QA team can handle is wasted work. If your developers can help resolve the roadblock in the QA queue, that's going to deliver more value to the customer than working on new features. And lest we forget, value is the true output we're trying to deliver.</p>
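<p>The throughput arithmetic is blunt: the pipeline delivers at the rate of its slowest stage, no matter how fast the other stages run. A tiny sketch with the numbers from the example above (the release-stage capacity is an added assumption):</p>
<pre><code># Stories each stage can handle per week (dev and QA numbers from the example above).
capacity = {"dev": 10, "qa": 4, "release": 20}

throughput = min(capacity.values())
print(throughput)                          # 4 -- QA is the bottleneck

qa_queue_growth = capacity["dev"] - capacity["qa"]
print(qa_queue_growth)                     # 6 stories pile up in front of QA every week
</code></pre>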
<h2 id="waitaminute">Wait a Minute...</h2>
<p>If you think this all sounds a little familiar, it should. It's the basics of Kanban. It again comes from the manufacturing world, but the focus is on a continuous delivery of value to the customer, with a minimum of wasted work.</p>
<p>We have plenty of articles on the Dev9 blog about Kanban, so I won't go into too much detail here. The basics of Kanban:</p>
<ol>
<li>Map your value stream. This means separate stages for any handoff point. This also should include any external factors that might impede progress. Then you track the time a story spends in each phase, as described above.</li>
<li>Define the start and end points for the Kanban system. Some teams find it valuable to have To-Do, Doing, and Done. Some teams have Backlog, Design, Dev, Code Review, QA, Release, and Done. It's up to you. Anywhere there's a political or team boundary is a perfect place to have a new phase.</li>
<li>Limit WIP (Work In Progress). As we explained above, increasing the productivity of developers without clearing the downstream bottleneck results in wasted work, and no additional value delivered to the customer. The team should agree on WIP limits, and on situations which might allow for breaking those limits (see the sketch after this list).</li>
<li>Service Classes. We know that some production issues will have to take priority. You can have different classes of service (e.g. "standard", "expedite", "fixed delivery date").</li>
<li>Adjust Empirically. Given the data you're tracking above, you can find bottlenecks and inefficiencies, and work to resolve them.</li>
</ol>
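<p>And here is the WIP-limit sketch promised in step 3: a toy board where a column only accepts new work while it is under its limit. The column names and limits are assumptions, not a recommendation.</p>
<pre><code># A toy Kanban board with per-column WIP limits (columns and limits are made up).
WIP_LIMITS = {"design": 2, "dev": 3, "code_review": 2, "qa": 2}

board = {
    "design":      ["story-7"],
    "dev":         ["story-3", "story-4"],
    "code_review": [],
    "qa":          ["story-1", "story-2"],
}

def can_pull_into(column):
    """A new item may enter a column only if that column is under its WIP limit."""
    return len(board[column]) &lt; WIP_LIMITS[column]

print(can_pull_into("dev"))  # True  -- 2 of 3 slots used
print(can_pull_into("qa"))   # False -- QA is full; help clear QA instead of starting new work
</code></pre>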
<p>This is the current best solution we've found. Instead of trying to directly measure programmer productivity, which we showed above is practically impossible, focus on measuring anything that impedes their progress, or the progress of delivering value to the customer.</p>
<h1 id="intuitions">Intuitions</h1>
<p>Finally, a little note for you, one which is often the antithesis of empirical measurement: trust your gut. Even though you can't put numbers on it, most developers find it easy to spot good and bad developers. There's just <em>something</em> telling you that they're better. It could be the way they talk about their technology, the thought they put into an answer, or the answer itself. Most developers would sacrifice projects and pay to work with a former favorite co-worker. Managers, if you have a developer you like and trust, then trust their input on their coworkers.</p>
<p>In addition, even though they may not be developers, managers often already know who their best and worst performers are. There's usually one or two standout people, even on a team of already-amazing people. If you had all of your developers stack-rank each other, it's likely the top performers and the worst performers would be quite consistent. This doesn't fix the issue of finding or hiring developers. The troubles of interviewing could be the subject of an article even longer than this one.</p>