
Archive for May, 2009

Headlights and Guardrails

May 26, 2009 Comments off

In software development, we want to go as fast as possible – and no faster. The real trick is the “and no faster” part: knowing how fast you can go without rapidly accumulating code debt or running into other nasty consequences is often the least understood part.

Why do we want to go fast in the first place? Well, if we’re not accumulating technical debt, our quality is still within the bounds we’ve set for ourselves, and we’re satisfying user stories, then we want to be able to accomplish as much as we sustainably can each sprint. This way we can deliver value faster, and people tend to like to pay for that kind of thing 🙂

Road at Night

Well, first, we need a road: the basics of an agile environment need to exist before we can go very fast at all. If we’re working on the build system every sprint and trying to get the basics of a story understood, we’re still in road construction, and we shouldn’t expect much in the way of speed until we get some of this basic asphalt laid down and smooth. This includes such basics as a good development setup for our team, a basic understanding of agile principles and a grasp of the technology stack we’re using, and the support of a continuous integration server, to mention a few. If we’re attempting to bounce along over the potholes without setting up a proper environment for rapid delivery of software value, we’ll reap what we sow.

Assuming we’ve built the road, then, what things tend to hold us back? Just like on a real road, the only things stopping us from going faster and faster (to the mechanical limit of our vehicle – or, in our case, of our keyboards and brains) are either externally imposed limitations (e.g. a speed limit and cops to enforce it) or our own ability to control the pace without going off the road. In software construction, as in life, we can go off the road in many and varied ways, but they all tend to be spectacular, destructive, and painful. Unlike on the real road, though, we can be cruising along for some time before we discover we’ve left the asphalt behind and are sailing over a cliff.

We’ll start by assuming that our corporate environment has eliminated externally imposed speed limits and political roadblocks – not always a safe assumption, but let’s suppose for the moment that we’re among the lucky developers who work in such a situation.

Our top-level speedometer, to overuse the analogy a bit, is our velocity, measured in features per iteration or complexity points per iteration – in other words, how much business value we’re adding per time period.

Code Metrics and Quality Measurement
Just as on the real road, though, the speedometer alone doesn’t tell the whole story. If we like the number of feature points we’re churning out just fine, but we’re suffering rapidly increasing code debt, a climbing defect count, or other “red lights”, we won’t be zooming along at that speed for long before something blows. Some of the other dashboard gauges we have to watch are our code coverage, our complexity metrics, our adherence to coding and other standards, and so forth. If we’re not monitoring all of these, we won’t know something is going wrong until smoke starts pouring out from under our metaphorical hood.

The most common way to go off the road is for quality to slip. This can be detected in a number of ways, including an ever-increasing defect rate. If most of your sprint is taken up by fixing defects or paying off code debt, then you’re probably trying to go too fast (or you’ve not finished laying the road after all). Of course, it’s possible you’ve just got a few bad drivers on your team, but we’ll assume that’s easier to see (if not necessarily easier to fix). What’s worse than seeing quality slip? Not seeing quality slip, even though it is. We can’t measure quality directly, per se, but we sure can measure a lot of other things. Once we know the normal position of each gauge (e.g. once we establish reasonable code standards that we can measure), we can watch them to get early warning of things going awry.

Just like on the real road, we need two categories of things, it seems, to help us go as fast as safely possible: I’ll call them headlights and guardrails.

Headlights

Headlights are tools and techniques that let us see where we’re going and whether we’re headed where we want to be.

The most basic tools here are user stories, acceptance criteria/tests (ideally executable ones), and metrics such as defect rate and velocity measurements. None of these are trivial or straightforward, and it’s easy to think you’ve got a good view and suddenly discover you’ve been accumulating code debt without realizing it. The only proper reaction at that point is to slow down and correct the problem, as we’ll discuss below.

A good business understanding of the goals and epics behind our user stories extends our range to see further ahead, and going fast requires looking further ahead while still paying attention to where we are at the moment.

Just as a driver must be aware of the road immediately ahead, our user stories give us the close focus we need to be doing the immediately useful thing. We can’t discard them in favor of looking exclusively further ahead, or we’ll never get where we want to go, but we can combine them with an awareness of the near future and an understanding of the overall destination to make better decisions in our day-to-day work.

If we concentrate exclusively on the user stories in hand for each iteration we can find we’ve lost sight of the forest, and may have a hard time fitting together features that should blend into an overall product. If we concentrate only on the distant horizon and not on the user story we’re working on we’ll never get anything done. The proper balance lets us go fast.

We don’t want to be like the driver in the joke with the punch line that ends “we’re lost… but we’re making bloody good time”!

Looking a bit further ahead also allows us to anticipate curves and obstacles in the road, and to be ready to handle them when they arrive. If we know from our long-range planning that we intend to scale our application to thousands of users, for instance, we might make different decisions than if we know a single user on a desktop box is the intended audience – even though neither of these factors is directly represented by any one user story we work on.

Executable acceptance tests from a tool like GreenPepper, FitNesse, RSpec, or the like can be valuable headlights, freeing developer time from repetitive manual verification and letting BA/customer proxies take control of the acceptance process – again freeing up developers to develop, and maximizing team velocity. As was mentioned in a recent stand-up meeting: if you’ve manually tested once, you’ve probably already spent more time than it takes to set up an automated test that does the same thing repeatedly, not to mention you’ve probably enjoyed it a lot less 🙂
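
For a sense of what such an executable check looks like at its simplest, here is a minimal sketch written as a plain JUnit 4 test rather than a GreenPepper or FitNesse fixture. The story, the test class, and the ListingSearch stand-in are all hypothetical; a real fixture would express the same criterion in a table the BA/customer proxy can edit and run.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    // Acceptance criterion (hypothetical story): "A search with no matching
    // listings returns an empty result set rather than an error."
    public class EmptySearchAcceptanceTest {

        // Minimal stand-in so the sketch compiles on its own; the real test
        // would exercise the production search component instead.
        static class ListingSearch {
            List<String> resultsFor(String city) {
                return new ArrayList<String>();
            }
        }

        @Test
        public void searchWithNoMatchesReturnsEmptyResults() {
            ListingSearch search = new ListingSearch();
            assertEquals(0, search.resultsFor("city with no listings").size());
        }
    }

Once a check like this runs in the continuous integration build, the criterion is verified on every commit instead of once per manual pass.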

Guardrails

Guardrails are things that give us some path to follow, and, if necessary, make an ugly noise if we stray too far from that path. They include our basic test suite (and the ugly noise of a build breaking when we hit the rail), as well as coverage and other analysis tools.

There’s a big difference between guardrails and a stone wall built across the road ahead, however – it’s not hard to let a testing tool or technique turn into a straitjacket, with piles of brittle, hard-to-maintain tests that don’t help us at all. We need the right tool for the job, used the right way.

If we have guardrails ensuring the basics of our code quality, we can go faster with the confidence that when we look back at the end of each sprint we will not have accumulated more code debt that needs to be paid back later. For example, if we establish a coverage metric that breaks the build whenever code coverage drops below a minimum level (I propose this always be 100%, but that’s another post), we can move forward assured that no code is being left untested. We won’t find ourselves in the distinctly non-TDD-like position of having to go back and write tests for existing code, burning time that should go toward the next story.

We can also refactor with better confidence if we know for a fact there are tests watching over our shoulders, ready to break should our refactor not be true. Refactoring code that is, at least in part, untested should always be an unacceptable risk.
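
As a sketch of what that safety net looks like in practice, the test below pins down the current behaviour of a hypothetical PriceFormatter before we touch it; if the refactored version ever formats differently, this test breaks the build immediately.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Characterization test capturing existing behaviour before a refactor.
    // PriceFormatter is a hypothetical stand-in for the gnarly legacy class
    // we actually want to clean up.
    public class PriceFormatterTest {

        // Trivial inline implementation so the sketch is self-contained.
        static class PriceFormatter {
            String format(long cents) {
                return "$" + (cents / 100) + "." + String.format("%02d", cents % 100);
            }
        }

        @Test
        public void formatsWholeDollarsAndCents() {
            PriceFormatter formatter = new PriceFormatter();
            assertEquals("$12.05", formatter.format(1205));
            assertEquals("$0.99", formatter.format(99));
        }
    }

With a handful of tests like this in place, we can rearrange the internals of format() however we like and know within minutes whether the behaviour survived.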

If we have Checkstyle, PMD, FindBugs or other static analysis tools checking that our cyclomatic complexity is within bounds, that our class sizes and line lengths stay readable, and that other critical maintainability and coding standards are met, we can plunge forward without the fear of a huge cleanup being required just to make the code understandable down the road a ways.

Of course, just as guardrails and headlights are not infallible in the real world, all the tools and checks in the world don’t ensure good quality code. One area where quality is particularly hard to ensure via automatic mechanisms is design. You can have code that’s 100% covered and passes every Checkstyle rule known to man, and still represents a terrible design. This is where the human factor comes into play – the automation merely ensures that valuable human attention is spent on the stuff that really requires a brain, as opposed to things that can be verified mechanically.

Discipline is the glue that makes all of this work together. Often, developers themselves will notice the “smell” of something done not quite right, but won’t feel they have the latitude to dig into it and clean it up, so they save it for the mythical “later”, which sometimes never comes. Management and team leads must also be disciplined enough to have patience while that kind of refactoring happens, with the firm knowledge that they’ll be paid back in better productivity and a lower defect rate over the mid to long term.

A final warning: it’s easy to let headlights become leashes, and guardrails become cubicle walls. Many agile practitioners worry, and rightly so, that adding tools and techniques can harden into a new dogmatism and an inflexible methodology. It’s up to us in the trenches to make sure we don’t let this happen, while at the same time getting all the juice we can out of helpful techniques and tools.

Properly applied, though, headlights and guardrails can be valuable tools in letting us reach our maximum velocity, while still arriving safely at our destination.

By Mike Nash

Categories: Point2 - Technical

Running Fedora 10 using VirtualBox on MacBook Pro

May 19, 2009 1 comment

VirtualBox is a freely available open source x86 virtualization product released under the GNU General Public License (GPL). I was directed to it by a colleague of mine while trying to find a way to get a Linux bash shell running on my MacBook Pro for Linux certification training purposes. There are a ton of great images available covering GNU/Linux, OpenSolaris, BSD and others. Here is my experience getting Fedora 10 running in VirtualBox on my MacBook Pro:

Note: instructions provided are for VirtualBox 2.2.2 and OS X

1. Download and Install VirtualBox for OS X
2. Download the Fedora image
3. Uncompress the Fedora 7z archive somewhere you want to store it (you may need p7zip, EZ 7z or a similar tool to uncompress it)
4. Launch VirtualBox
5. Click New
6. Click Next button
7. Enter name ‘Fedora10-x86’ (can be anything you want)
8. Operating System: Linux
9. Version: Fedora
10. Click Next
11. Memory: I recommend 512MB+ (you will get much faster boot time with 1024MB)
12. Click Next
13. Use existing hard disk
14. Click the little folder icon
15. Click Add
16. Browse for the VDI file you extracted in step 3
17. Click Open
18. Click Select
19. Click Next
20. Click Finish
21. Select Fedora10-x86 and click Start
22. Log in with the credentials listed on the page linked in step 2
23. ENJOY!

Logan Peters

Fedora 10 running in VirtualBox on MacBook Pro

Scala and Java Together

May 18, 2009 1 comment

Given a need to work towards more scalable systems, and to increase concurrency, many developers working in Java wonder what’s on the other side of the fence.

Languages such as Erlang and Haskell bring the functional programming style to the fore, as they essentially force you to think and write functionally – they simply don’t support doing things in a non-functional fashion (well, at least not easily).

The mental grinding of gears, however, can be considerable for developers coming from the Java (or C/C++/C#) world, where the object-oriented paradigm is the familiar one. An excellent solution might be to consider the best of both worlds: Scala.

I had the opportunity this weekend to renew my acquaintance with Scala, and to work on blending it with the old mainstay, Java. Scala runs on the JVM, so as you might expect, it can be combined pretty readily with Java – more cleanly than before, given some new Maven and IDE plugins.

Details and screen shots here…

By: Mike Nash

Mocks, Stubs and Spies! Oh my!

May 17, 2009 Comments off

I’ve been pretty involved in helping out new developers at Point2. I try to ease them into our agile, Scrum/XP environment gently, but there are usually a few roadblocks. So far, the most troublesome obstacle has been the use of test doubles. To try to mitigate this a bit I’ve written a 3-part series on the topic.

Part I is on stubs.

Part II is about mocks.

Part III covers spies.
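
If you want a compressed taste of all three before diving into the series, here is a minimal sketch using Mockito as one possible library (the series isn’t tied to any particular tool); the lists are purely illustrative stand-ins for real collaborators.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.spy;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    // A whirlwind tour of the three kinds of test double using Mockito.
    public class TestDoubleExamples {

        @SuppressWarnings("unchecked")
        @Test
        public void stubMockAndSpy() {
            // Stub: returns canned answers; we don't verify interactions.
            List<String> stub = mock(List.class);
            when(stub.get(0)).thenReturn("canned value");
            assertEquals("canned value", stub.get(0));

            // Mock: we assert that a particular interaction took place.
            List<String> mockList = mock(List.class);
            mockList.add("expected call");
            verify(mockList).add("expected call");

            // Spy: wraps a real object, so unstubbed calls hit the real thing,
            // while still letting us verify interactions afterwards.
            List<String> spyList = spy(new ArrayList<String>());
            spyList.add("really added");
            assertEquals(1, spyList.size());
            verify(spyList).add("really added");
        }
    }

Each part of the series digs into one of these in much more depth.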

Enjoy!

By: Kevin Baribeau

Why Did That Happen?

May 15, 2009 1 comment

During our Sprints it is not uncommon for some problem event to occur. Such events could be anything from a story taking considerably longer to complete than it was estimated to take, to a production bug being injected during a Sprint. As a team we would usually acknowledge these events in our retrospective and make a few brief comments about how to eliminate similar occurrences in the future. It seemed, however, as though we would promptly forget about the problem… until it happened again.

After seeing these problem events continue to occur, my team started throwing around the idea of doing root cause analysis. The hope was to get to the bottom of why these things were happening in the first place.

At first we were not quite sure how to conduct a formal root cause analysis, but after a little research we got ourselves pointed in the right direction. Our first analysis was a great exercise in drilling down into the heart of something the team saw as a problem. By focusing on one specific problem we were able to:

  • identify a plethora of individual areas that led to the problem in question.
  • isolate causes that could immediately be acted upon.
  • identify causes that we as a team could not solve alone.
  • bring causes into the foreground spurring discussion, and setting the team up to be mindful of them in the future.

As a team we have seen the benefit of analyzing our problem events and have integrated root cause analysis into our Sprint cycle. Just like we have a retrospective at the end of the Sprint we also have a root cause analysis session. One team member is responsible for presenting a problem event to the team and leading the analysis. To date we’ve done this three times and have used two different methods of analysis (Ishikawa Diagram, Cause Mapping). The way the analysis is run is completely at the session leader’s discretion.

We’ve already started to deal with areas that were immediately actionable, and have at least started to get the ball rolling in some areas that require a little more thought and organization. There is no doubt in my mind that as my team continues to identify and eliminate the root causes of our most crippling problems, we will reach that hyper-productive state that so many Scrum teams strive for.

By Hemant J. Naidu

Failing Should Be Easy

May 14, 2009 2 comments

Sounds kind of crazy, doesn’t it?  Why would you want failing to be easy?  There’s actually a pretty simple explanation.

http://agileshoptalk.wordpress.com/2009/05/14/failing-should-be-easy/

By: Chris Dagenais

Technical Speed Bumps

May 5, 2009 1 comment

Speed Bump

As developers, after working on the same project for a while, we start to remember sections of our code base that aren’t quite ‘up to par’ with either team or industry standards. We begin dreading working in those areas of the project and sometimes even avoid stories or tasks involving them. Our first instinct should be to refactor the area or clean it up in some way so that these smells do not haunt the next person who must work in that part of the system. Unfortunately, it is not always as simple as fixing the issue immediately, and a small task can soon become bloated when we head down those rabbit holes of maintenance.

Our initial solution to this problem was to simply document these code smells on a whiteboard beside our scrum board, under a section called ‘Code Smells’. Each smell was represented by a card that could then be prioritized based on the impact it was causing. This was great because all of the problems that had been in the back of everyone’s minds were now right there in our faces, and even the business could see them. It was now much more apparent what might cause trouble when working on a given story. These code smells were still addressed as part of stories when it made sense to fix them immediately, but we could also do some work or investigation on them during short periods of spare time, often before lunch or at the end of the day.

This method did work for quite some time; however, it soon became apparent that it was not just bad code that was slowing us down. Other issues arose, such as deployment complexity, test runtimes, and so on, which did not seem appropriate to classify as code smells even though they were undoubtedly hurting our team’s velocity. We came up with the idea of renaming our Code Smells section to ‘Speed Bumps’. This new name gave us a place to track anything and everything that was slowing us down. These items are still addressed in the same fashion, either as part of a story or during spare time. The rename simply gave the team more confidence when adding issues to the list: if something was limiting their productivity, they were encouraged to either deal with it immediately or add it to the list. This makes the project’s technical debt more manageable, because everyone becomes aware of it and it can be prioritized once recognized.

I am curious if any other development teams out there utilize a similar or even completely different strategy for addressing technical debt. What works for you?

By: Jesse Webb