Headlights and Guardrails
In software development, we want to go as fast as possible – and no faster. The real trick is the “and no faster” part: knowing how fast you can go without suffering rapidly accumulating code debt or other nasty consequences is often the least understood part of the job.
Why do we want to go fast in the first place? Well, if we’re not accumulating technical debt, our quality is still within the bounds we’ve set for ourselves, and we’re satisfying user stories, then we want to be able to accomplish as much as we sustainably can each sprint. This way we can deliver value faster, and people tend to like to pay for that kind of thing 🙂
So how do we get up to speed? Well, first, we need a road: the basics of an agile environment need to exist before we can go very fast at all. If we’re working on the build system every sprint and trying to get the basics of a story understood, we’re still in road construction, and we shouldn’t expect much in the way of speed until we get some of this basic asphalt laid down and smooth. This includes such basics as a good development setup for our team, a shared understanding of agile principles, a grasp of the technology stack we’re using, and the support of a continuous integration server, to mention a few. If we attempt to bounce along over the potholes without setting up a proper environment for rapid delivery of software value, we’ll reap what we sow.
Assuming we’ve built the road, then, what things tend to hold us back? Just like on a real road, the only things stopping us from going faster and faster (to the mechanical limit of our vehicle, or in our case, our keyboards and brains) are either externally-imposed limitations (e.g. a speed limit and cops to enforce it), or our own ability to control the pace without going off the road. In software construction, as in life, we can go off the road in many varied ways, but they all tend to be spectacular, destructive, and painful. Unlike the real road, we can be cruising along for some time before we discover we’ve left the asphalt behind and are sailing over a cliff.
We’ll start by assuming that our corporate environment has eliminated externally imposed speed limits and political roadblocks – not always a safe assumption, but let’s assume for the moment that we’re one of the lucky developers who work in such a situation.
Our top-level speedometer, to overuse our analogy a bit, is our velocity, measured in features per iteration, or complexity points per iteration – in other words, how much business value are we adding per time period?
Just as on the real road, though, the speedometer alone doesn’t tell the whole story. If we like the number of feature points we’re churning out just fine, but we’re suffering rapidly increasing code debt, defect count, or other “red lights”, we won’t be zooming along at that speed for long before something blows. Some of the other dashboard gauges we have to watch are our code coverage, our complexity metrics, our adherence to coding and other standards, and so forth. If we’re not monitoring all of these, then we’re not going to know something is wrong until smoke starts pouring out from under our metaphorical hood.
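The dashboard idea can be made concrete: give each gauge an acceptable band, and treat anything outside its band as a red light worth slowing down for. A minimal sketch – the metric names, readings, and thresholds here are purely illustrative, not a standard:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the "dashboard" idea: each gauge gets an acceptable band,
// and any reading outside its band is flagged as a red light.
public class Dashboard {

    // Expects every reading to have a matching [min, max] band.
    public static Map<String, Boolean> redLights(Map<String, Double> readings,
                                                 Map<String, double[]> bands) {
        Map<String, Boolean> result = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : readings.entrySet()) {
            double[] band = bands.get(e.getKey());
            double v = e.getValue();
            // A gauge is "red" when its reading falls outside [min, max].
            result.put(e.getKey(), v < band[0] || v > band[1]);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Double> readings = new LinkedHashMap<>();
        readings.put("lineCoveragePct", 83.0);
        readings.put("defectsPerSprint", 12.0);
        readings.put("avgCyclomaticComplexity", 6.5);

        Map<String, double[]> bands = new LinkedHashMap<>();
        bands.put("lineCoveragePct", new double[] {90.0, 100.0});
        bands.put("defectsPerSprint", new double[] {0.0, 8.0});
        bands.put("avgCyclomaticComplexity", new double[] {0.0, 10.0});

        redLights(readings, bands).forEach((name, red) ->
                System.out.println(name + (red ? ": RED" : ": ok")));
    }
}
```

In practice a CI server or quality dashboard plays this role, but the principle is the same: the bands are agreed on up front, and a breach is visible immediately rather than at the end of the quarter.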
The most common way to go off the road is for quality to slip. This can be detected in a number of ways, including an ever-increasing defect rate. If most of your sprint is taken up by fixing defects or paying off code debt, then you’re probably trying to go too fast (or you’ve not finished laying the road after all). Of course, it’s possible you’ve just got a few bad drivers on your team, but we’ll assume that’s easier to see (if not necessarily easier to fix). What’s worse than seeing quality slip? Not seeing it slip, even though it is. We can’t measure quality directly, per se, but we sure can measure a lot of other things. Once we know the normal position of each gauge (e.g. once we establish reasonable code standards that we can measure), we can watch them for early warning of things going awry.
Just like on the real road, we need two categories of things, it seems, to help us go as fast as safely possible: I’ll call them headlights and guardrails.
Headlights are tools and techniques that let us see where we’re going, and whether we’re going where we want to be.
The most basic tools here are user stories, acceptance criteria/tests (ideally executable ones), and metrics such as defect rate and velocity measurements. None of these are trivial or straightforward, and it’s easy to think you’ve got a good view and suddenly discover you’ve been accumulating code debt without realizing it. The only proper reaction at that point is to slow down and correct the problem, as we’ll discuss below.
A good business understanding of the goals and epics behind our user stories gives us the range to see further ahead – and going fast requires looking further ahead, while at the same time paying attention to where we are at the moment.
Just as when driving we must be aware of the road immediately ahead, our user stories give us the close focus we need to be doing the immediately useful thing. We can’t discard these in favor of looking further ahead exclusively, or we’ll never get where we want to go, but we can combine them with an awareness of the near future and an understanding of the overall destination to make better decisions in our day-to-day work.
If we concentrate exclusively on the user stories in hand for each iteration we can find we’ve lost sight of the forest, and may have a hard time fitting together features that should blend into an overall product. If we concentrate only on the distant horizon and not on the user story we’re working on we’ll never get anything done. The proper balance lets us go fast.
We don’t want to be like the driver in the joke with the punch line that ends “we’re lost… but we’re making bloody good time”!
Looking a bit further ahead also allows us to anticipate curves and obstacles in the road, and be ready to handle them when they arrive. If we know, for instance, from our long-range planning that we intend to scale our application to thousands of users, we might make different decisions than if we know a single user on a desktop box is the intended audience – even though neither of these factors is directly represented in any single user story we work on.
Executable acceptance tests from a tool like GreenPepper, FitNesse, RSpec, or the like can be valuable headlights, freeing developers from repetitive manual verification and giving BA/customer proxies control over the acceptance process – again freeing developers to develop, and maximizing team velocity. As was mentioned in a recent stand-up meeting: if you’ve manually tested something even once, you’ve probably already spent more time than it takes to set up an automated test that does the same thing repeatedly – and you’ve probably enjoyed it a lot less 🙂
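The fixture API differs from tool to tool, but the shape is the same everywhere: the BA owns a table of inputs and expected outcomes, and a small piece of fixture code maps each row onto the system under test. A tool-agnostic sketch – the `ShippingAcceptance` fixture, the quote formula, and the rates are all hypothetical:

```java
import java.util.List;

// Tool-agnostic sketch of an executable acceptance test. The TABLE below
// stands in for the business-readable table a BA would own in GreenPepper
// or FitNesse; the fixture maps each row onto the system under test.
public class ShippingAcceptance {

    // Hypothetical system under test: flat rate plus a per-kg charge.
    public static int quoteCents(int weightKg) {
        return 500 + 120 * weightKg;
    }

    // Each row: { weightKg, expectedCents } -- the part the BA edits.
    static final List<int[]> TABLE = List.of(
            new int[] {1, 620},
            new int[] {5, 1100},
            new int[] {10, 1700}
    );

    // Counts rows where actual output disagrees with the expectation.
    public static int failures() {
        int failed = 0;
        for (int[] row : TABLE) {
            if (quoteCents(row[0]) != row[1]) {
                failed++;
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        System.out.println(failures() + " acceptance rows failed");
    }
}
```

In a real tool the table lives in a wiki page or spec document the customer proxy can edit, and the build goes red when any row fails – acceptance becomes a gauge on the dashboard rather than a manual chore.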
Guardrails are things that give us some path to follow, and, if necessary, make an ugly noise if we stray too far from that path. They include our basic test suite (and the ugly noise of a build breaking when we hit the rail), as well as coverage and other analysis tools.
There’s a big difference between guardrails and a stone wall built across the road ahead, however – it’s not hard to let a testing tool or technique turn into a straitjacket, with piles of brittle, hard-to-maintain tests that don’t help us at all. We need the right tool for the right job, used the right way.
If we have guardrails ensuring the basics of our code quality, we can go faster with the confidence that when we look back at the end of each sprint, we won’t have accumulated more code debt that needs to be paid back later. For example: if we establish a coverage rule that breaks the build whenever our code coverage drops below a certain minimum level (I propose this always be 100%, but that’s another post), we can move forward assured that no code is being left untested. That way we won’t find ourselves in the distinctly non-TDD-like position of having to go back and write tests for existing code, burning time that should be spent on the next story.
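As a sketch of how such a guardrail can be wired up in a Maven build, the jacoco-maven-plugin’s `check` goal can fail the build below a coverage floor – the threshold and scope shown here are illustrative, and teams should set their own:

```xml
<!-- Sketch: break the build when line coverage drops below the floor.
     The 1.00 minimum and BUNDLE scope are illustrative choices. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>coverage-check</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>1.00</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The important property is that the breach is loud and automatic – the build going red is the “ugly noise” of hitting the rail, not a report someone might read next month.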
We can also refactor with better confidence if we know for a fact there are tests watching over our shoulders, ready to break should our refactor not be true. Refactoring code that is, at least in part, untested should always be an unacceptable risk.
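A minimal sketch of those tests watching over our shoulders: pin the current behaviour with a handful of checks, then require the refactored version to pass the very same checks before it replaces the original. The `Discount` class and its rules are hypothetical, and the bare assertions stand in for whatever test framework the team actually uses:

```java
// A tiny "guardrail" for refactoring: pin current behaviour, then
// demand the refactored code match it exactly. Discount is a
// hypothetical example class.
public class Discount {

    // Original version: nested conditionals we'd like to clean up.
    public static int percentFor(int orderCents, boolean loyalMember) {
        if (loyalMember) {
            if (orderCents >= 10000) {
                return 15;
            } else {
                return 10;
            }
        } else {
            if (orderCents >= 10000) {
                return 5;
            }
        }
        return 0;
    }

    // Refactored version: same behaviour, flatter logic. The pinning
    // checks in main() must pass for BOTH versions before we swap.
    public static int percentForRefactored(int orderCents, boolean loyalMember) {
        int base = loyalMember ? 10 : 0;
        int bigOrderBonus = orderCents >= 10000 ? 5 : 0;
        return base + bigOrderBonus;
    }

    public static void main(String[] args) {
        // Cover both sides of each boundary in the logic.
        int[][] cases = { {9999, 0}, {9999, 1}, {10000, 0}, {10000, 1} };
        for (int[] c : cases) {
            boolean loyal = c[1] == 1;
            if (percentFor(c[0], loyal) != percentForRefactored(c[0], loyal)) {
                throw new AssertionError("behaviour drifted for order=" + c[0]);
            }
        }
        System.out.println("pinning tests pass");
    }
}
```

If the refactor is not true to the original, the guardrail makes its ugly noise immediately – which is exactly when the mistake is cheapest to fix.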
If we have some checkstyle, PMD, FindBugs or other static analysis tools checking that our cyclomatic complexity is within bounds, that our class size and line length are readable, and other critical maintainability and coding standards factors are met, we can plunge forward without the fear of a huge cleanup being required just to make the code understandable down the road a ways.
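As one example of wiring these checks in, a Checkstyle ruleset can encode the limits just mentioned – the specific numbers below are illustrative, not gospel, and each team should agree on its own:

```xml
<!-- Sketch of a Checkstyle ruleset for the "guardrail" checks above.
     The limits are illustrative; pick values your team agrees on. -->
<module name="Checker">
  <!-- Keep lines readable. -->
  <module name="LineLength">
    <property name="max" value="120"/>
  </module>
  <module name="TreeWalker">
    <!-- Flag methods whose branching exceeds the agreed ceiling. -->
    <module name="CyclomaticComplexity">
      <property name="max" value="10"/>
    </module>
    <!-- Keep methods small enough to read in one sitting. -->
    <module name="MethodLength">
      <property name="max" value="60"/>
    </module>
  </module>
</module>
```

Run from the CI build, a rule breach breaks the build the same way a failing test does, so maintainability problems surface while the offending code is still fresh in someone’s head.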
Of course, just as guardrails and headlights are not infallible in the real world, all the tools and checks in the world don’t ensure good quality code. One area where it’s particularly hard to ensure quality via automated mechanisms is design. You can have code that’s 100% covered, passes every Checkstyle rule known to man, and still represents a terrible design. This is where the human factor comes into play – the automation merely ensures that you’re spending valuable human attention on the stuff that really requires a brain, as opposed to things that can be verified mechanically.
Discipline is the glue that makes all of this work together – often developers themselves will have the “smell” of something done not quite right, but not feel they’ve got the latitude to dig into it and clean it up, so they save it for the mythical “later”, which sometimes never comes. Management and team leads must also be disciplined enough to have patience while that kind of refactor happens – with the firm knowledge that they’ll get paid back in better productivity and a lower defect rate over the mid to long term.
A final warning: It’s easy to let headlights become leashes, and guardrails can become cubicle walls. Many agile practitioners are concerned, and rightly so, that adding tools and techniques can turn into a new dogmatism and inflexible methodologies. It’s up to us in the trenches to make sure we don’t let this happen, while at the same time getting all the juice we can out of helpful techniques and tools.
Properly applied, though, headlights and guardrails can be valuable tools in letting us reach our maximum velocity, while still arriving safely at our destination.
By Mike Nash