Why Knowledge Work is Different, Part 2
In the last post, we talked about how leadership/guidance/management (LGM) methods have lagged behind the pace of technological and social change as we’ve transitioned from a manufacturing economy to a knowledge and service economy. The LGM methods that are ‘best practices’ in traditional/manufacturing/complicated environments were the starting point for knowledge work, but the System Development Lifecycle and Waterfall methods have resulted in failure rates of approximately 70%, at tremendous cost. Out of frustration and desperation, new schools of thought about LGM methods more appropriate to knowledge work have sprung up, of which Agile is arguably the most common today. In this post, we’ll explore the importance of measurements, and how choosing metrics from the ‘complicated’ toolbox instead of the ‘complex’ toolbox can negatively affect our outcomes.
One of the first tidbits I picked up from working with the American Society for Quality Control (ASQC) was ‘be careful what you measure, because it will improve.’ (I think that’s a paraphrase of W. Deming.) The satori moment came when the management edict came down that henceforth, our (programmers’) efforts would be evaluated strictly on our production of lines of compiled source code. In a matter of days, the lunch conversations went from ‘I can code that algorithm in 10 lines, not 12!’ to ‘I can code that algorithm in 100 lines, not 12!’ Productivity, as measured by lines of source, zoomed. Productivity, as measured by anything useful (function points, bugs fixed, readability, maintainability, etc.), tanked. Blessedly, that metric didn’t even last a quarter, so the damage was limited.
As an engineering manager, I was surprised to be asked by our VP one day what we could do to improve productivity, because we’d been hitting our release schedules and quality goals pretty consistently. He explained that lots of team members, including me, seemed to spend much of the day sitting in conference rooms drawing, or in groups of two or three at someone’s desk, talking, or even just sitting and staring into space – when what he really wanted to see was more coding (typing, or ‘lines of source code’ again!). I remember trying to explain it with an analogy to police work. When police solve a crime, you can tell because they type it all up in a report, so police management might be tempted to evaluate detectives by how many reports they generate. Those reports are the easiest part of the job, though. The work starts with the crime scene, witnesses, investigation, interviews, observation, theories, and lots more – the typing at the end is not an accurate measurement of the work and thought that went into solving the crime, it’s just the tangible bit. In our engineering team, the source code was the tangible bit that resulted from all that thinking, communicating, and collaborating.
These are just two examples of how using the wrong metrics can negatively impact productivity in knowledge work environments.
Much of our frustration with inappropriate metrics has to do with variably defined (or undefined) terms, and with our attempts to accurately predict the future. Predicting the future is hard enough in deterministic environments like sporting events (joke), but it’s ill-advised when it comes to ‘Develop an innovative, ground-breaking SaaS offering for our industry, using new technologies and methods, and incorporating our years of knowledge, data, and infrastructure investment.’ When we say it that simply, it seems mad – and yet the VP, or CEO, or investor asks ‘When will it be done?’ and we try to craft an answer. The variably defined terms include things like ‘on-time’, ‘good quality’, ‘feature-complete’, ‘scalable’, ‘high-performing’, and ‘intuitive’. We’re not used to having to define these terms, because in our complicated manufacturing environment, there’s no judgement involved. Each widget is assembled according to the documentation and tested for completeness and quality as defined in the specification, and it either passes or fails. ‘Intuitive’ never enters the equation.
In the complex/knowledge economy, good metrics focus on results. In the complicated/manufacturing economy, we can use metrics that include effort, incremental progress, and partial results, because all of those things occur regularly and predictably. If we’ve completed 70 sub-assemblies and we need 100 for this period, we can estimate with fine accuracy when the 100th unit will be produced, because there are no unknowns between here and there. On the other hand, if we’ve completed 70% of the design phase for our year-long development effort, the chances of accurately estimating the duration of the remaining 30% are remote (awful, terrible, crazy – better to go to Vegas!). Instead, for example, in Scrum (one of the best-known toolsets for knowledge work) we explicitly agree on a ‘definition of done’, so the endpoint is clearly communicated. Even with a properly agreed definition, the correct answer to ‘When will it be done?’ is ‘When it’s done.’
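To make that contrast concrete, here’s a minimal sketch in Python. Everything beyond the 70-of-100 sub-assemblies is a hypothetical number I’ve invented purely to illustrate why linear extrapolation works on the assembly line and falls apart for the design phase:

```python
import statistics

# Manufacturing: every unit takes a known, stable amount of time,
# so the remaining work is simple arithmetic.
def remaining_hours(completed, target, hours_per_unit):
    return (target - completed) * hours_per_unit

print(remaining_hours(completed=70, target=100, hours_per_unit=2.0))  # 60.0 hours, full stop

# Knowledge work: "70% of the design phase is done" gives us no such formula.
# Suppose (hypothetically) past projects show the 'last 30%' took anywhere from
# half to five times the effort already spent. The spread is the story, not the average.
days_spent = 90  # days spent reaching '70% done'
multipliers = [0.5, 1.0, 1.5, 3.0, 5.0]
estimates = [days_spent * m for m in multipliers]
print(min(estimates), statistics.median(estimates), max(estimates))  # 45.0 135.0 450.0
```

A single-number answer to ‘When will it be done?’ hides that 45-to-450-day spread, which is exactly why the honest answer is so unsatisfying.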
Accepting this change in metrics is a key to using more effective LGM methods at the team and company level. Instead of Gantt charts of elaborate guesses, we focus on taking specific actions for specific purposes: to learn (assess the viability of X technology path), to produce (add feature Z to the product), to refine (do A/B testing of the interface alternatives), or to improve (fix bugs, reduce technical debt). We take these specific actions on a rapid cadence, usually only one to three weeks at a time, as opposed to much longer traditional time scales.
Further, we can now have intelligent conversations about change/variability _during_ the product development process. In complicated/manufacturing environments, change is anathema, and is always avoided, deferred, or at least minimized. In knowledge work, this no longer makes sense. Consider the example of Feature M – originally estimated to take 5 days to develop, and desired by an estimated 50% of potential customers. During the development effort the team has learned from experiments and surveys, and now reckons that Feature M will take 20 days to develop while being desired by only 5% of potential customers. Is there a business case for deferring Feature M to a future release, given that its value/cost ratio has dropped roughly 40-fold (from 50% of customers per 5 days of effort to 5% per 20 days)? How about Feature P, which was not included in the original scope, but, since being released by a competitor, has become part of the minimum requirements to be competitive in this space? Is there a business case for rapidly assessing the cost of including Feature P, rather than waiting for the Bi-Annual Change Review Board to be summoned? In short, yes, there are business cases to be made for accommodating or capitalizing on change, and our knowledge work LGM methods need to take them into account!
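Here’s a quick back-of-the-envelope check of that 40-fold figure, using only the numbers in the Feature M example. The value_per_effort ratio is my own simplification (share of customers who want the feature, divided by days of effort), not a standard metric:

```python
def value_per_effort(desired_by_pct, effort_days):
    """Crude value/cost ratio: % of customers who want the feature per day of effort."""
    return desired_by_pct / effort_days

original = value_per_effort(desired_by_pct=50, effort_days=5)   # 10.0
revised  = value_per_effort(desired_by_pct=5, effort_days=20)   # 0.25

print(f"original: {original}, revised: {revised}, drop: {original / revised:.0f}x")
# -> original: 10.0, revised: 0.25, drop: 40x
```

However crude the ratio, a 40x swing is exactly the kind of new information a team should be allowed to act on mid-flight rather than file away for the next planning cycle.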
In summary, choosing appropriate LGM methods that are suited for knowledge work can help us to better control costs and risks, to adapt to change and to new learning, and to capitalize on opportunities of the moment. We can use communication and feedback loops to quickly assess efficacy, and create learning opportunities, instead of taking the whole wagon train down a box canyon. Acting with agility not only improves our product economics, but can also positively influence our organizational health.
====================
In the next post, Why Knowledge Work is Different, Part 3, we’ll address an old-school tool that is exquisitely useful in knowledge work, and yet is almost always overlooked.