Something we’ve been trying to pay more attention to in our newest greenfield development projects is the running time of our unit test suites. One of these projects ran its ~200 unit tests in 2 seconds. As development continued and the number of test cases grew, the run started taking 10 seconds, then over 30. Something wasn’t right.
The first challenge was determining which tests were slow. A little googling turned up a useful patch to the Python code base that makes verbosity level 2 print the run time for each test. Since we are using Python 2.5 and virtual environments, we decided to simply monkey patch it in. We then went one step further and made the following change to _TextTestResult.addSuccess:
def addSuccess(self, test):
    TestResult.addSuccess(self, test)
    if self.runTime > 0.1:
        self.stream.writeln("\nWarning: %s runs slow [%.3fs]"
                            % (self.getDescription(test), self.runTime))
    if self.showAll:
        self.stream.writeln("[%.3fs] ok" % (self.runTime))
    elif self.dots:
        self.stream.write('.')
With it now easy to tell which tests were slow, we set out to make them all fast again. As expected, the majority of the slow cases involved an external service that wasn’t being mocked correctly. Most of these were easily solved, but in a few tests we couldn’t find anything that hadn’t been mocked. Adding a few timing statements within these tests revealed the culprit: the Django framework’s assertRedirects method.
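Those “timing statements” can be as simple as a context manager wrapped around the suspect calls. This is a generic sketch, not the code we actually used; the name timed is illustrative.

```python
import time
from contextlib import contextmanager


@contextmanager
def timed(label):
    """Print how long the wrapped block took; handy for narrowing down
    which call inside a slow test is eating the time."""
    start = time.time()
    try:
        yield
    finally:
        print("%s took %.3fs" % (label, time.time() - start))
```

Inside a test you would wrap each candidate step, e.g. `with timed("assertRedirects"): self.assertRedirects(response, '/home/')`, and compare the printed timings.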
def assertRedirects(self, response, expected_url, status_code=302,
                    target_status_code=200, host=None):
    """Asserts that a response redirected to a specific URL, and that the
    redirect URL can be loaded.

    Note that assertRedirects won't work for external links since it uses
    TestClient to do a request.
    """
    if hasattr(response, 'redirect_chain'):
        # The request was a followed redirect
        self.failUnless(len(response.redirect_chain) > 0,
            ("Response didn't redirect as expected: Response code was %d"
             " (expected %d)" % (response.status_code, status_code)))

        self.assertEqual(response.redirect_chain[0][1], status_code,
            ("Initial response didn't redirect as expected: Response code was %d"
             " (expected %d)" % (response.redirect_chain[0][1], status_code)))

        url, status_code = response.redirect_chain[-1]

        self.assertEqual(response.status_code, target_status_code,
            ("Response didn't redirect as expected: Final Response code was %d"
             " (expected %d)" % (response.status_code, target_status_code)))

    else:
        # Not a followed redirect
        self.assertEqual(response.status_code, status_code,
            ("Response didn't redirect as expected: Response code was %d"
             " (expected %d)" % (response.status_code, status_code)))

        url = response['Location']
        scheme, netloc, path, query, fragment = urlsplit(url)

        # Get the redirection page, using the same client that was used
        # to obtain the original response.
        redirect_response = response.client.get(path, QueryDict(query))

        self.assertEqual(redirect_response.status_code, target_status_code,
            ("Couldn't retrieve redirection page '%s': response code was %d"
             " (expected %d)") %
                 (path, redirect_response.status_code, target_status_code))

    e_scheme, e_netloc, e_path, e_query, e_fragment = urlsplit(expected_url)
    if not (e_scheme or e_netloc):
        expected_url = urlunsplit(('http', host or 'testserver', e_path,
            e_query, e_fragment))

    self.assertEqual(url, expected_url,
        "Response redirected to '%s', expected '%s'" % (url, expected_url))
You’ll notice that if your GET request uses the follow=False option, you’ll end up in the else branch of this code, which kindly fetches the page you are redirecting to and checks that it returns a 200. Which is great, unless you don’t have the correct mocks set up for that page too. Mocking out the content of a page you aren’t actually testing didn’t seem quite right either. We didn’t care about the other page loading; it had its own test cases. We just wanted to make sure the page under test redirected where we expected. Simple solution: write our own assertRedirects method.
def assertRedirectsNoFollow(self, response, expected_url):
    self.assertEqual(response._headers['location'],
                     ('Location', settings.TESTSERVER + expected_url))
    self.assertEqual(response.status_code, 302)
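To show the shape of a test that uses this assertion, here is a minimal, self-contained sketch. FakeResponse stands in for what Django’s test client returns when follow=False, and the TESTSERVER constant plays the role of settings.TESTSERVER; both names, and the example URL, are illustrative rather than from our code base.

```python
import unittest

TESTSERVER = 'http://testserver'  # stand-in for settings.TESTSERVER


class FakeResponse(object):
    """Minimal stub of a non-followed Django test client response."""
    def __init__(self, location, status_code=302):
        self.status_code = status_code
        self._headers = {'location': ('Location', location)}


class RedirectTest(unittest.TestCase):
    def assertRedirectsNoFollow(self, response, expected_url):
        # Check the Location header and status code only; never load
        # the target page, so no extra mocks are needed.
        self.assertEqual(response._headers['location'],
                         ('Location', TESTSERVER + expected_url))
        self.assertEqual(response.status_code, 302)

    def test_redirects_to_home(self):
        response = FakeResponse(TESTSERVER + '/home/')
        self.assertRedirectsNoFollow(response, '/home/')
```

In a real suite the response would come from `self.client.get('/some-url/', follow=False)` and assertRedirectsNoFollow would live on a shared base test case.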
Back to a 2 second unit test run time and all is right with the world again.
In August I was fortunate enough to attend the Agile 2009 conference, where I saw two excellent sessions presented by Pollyanna Pixton. I later found out that four other sessions (attended by Tefon) included recommendations of the book Stand Back and Deliver, authored by Pollyanna and three of her colleagues. The presenters were so convincing that Tefon purchased the book while still at the conference. It was a good read, but the chapter on the “Purpose Alignment Model” was by far my favorite.
The Purpose Alignment Model is a simple tool designed to do exactly that: facilitate an alignment of purposes. The underlying premise is that every product feature or piece of development work a company performs can be placed on two scales: how critical it is to the operation of the company, and how differentiating it is to the company’s success. With everything placed on these scales, the two can be combined into a graph.
This simple graph can make discussion about any number of product features or work processes significantly easier and more effective. When discussion about a feature starts by first placing it on the graph above, it is placed sharply into perspective.
What constitutes a mission-critical, differentiating feature? One simple test is to ask: could this feature be the center of an advertising campaign for the product? These features are the driving force behind a product’s adoption and define the identity of the company. If every feature gets a ‘yes’ answer to this question, the model is being used incorrectly; trying to differentiate with every feature will result in little more than a mediocre product. A select few features should land in the ‘Invest’ quadrant, and these are the features that, naturally, should receive heavy investment. If these features aren’t the most polished and solid in the product, get to work.
If an honest assessment of everything about a product is completed with this model, the majority of what might initially have been placed in the ‘Invest’ quadrant will end up falling into the parity quadrant. The parity quadrant is interesting. Most of a product’s features will invariably not be differentiating, so why waste resources trying to differentiate with them? These features are mission critical, make no mistake: they have to work, and they have to work well. Parity features drive the sticking power of an application, so it is still critical to get them right. The key point, however, is that parity features only need to reach parity. No wheel reinventing required. Parity features aren’t innovative. Look at what the market is doing. What is the accepted best practice? Do that and nothing more.
If a feature is differentiating but not critical to the success of the business, find a partner. Don’t waste resources that could be spent getting parity features to parity and investing in mission critical differentiating features. Find a partner who is investing in the feature and use them.
If a feature is not mission critical and not differentiating, who cares? If a feature in this quadrant is being worked on, ask why.
If every feature to be worked on is placed properly, relative to the others, on this graph, the result is an amazingly powerful communication tool. Suddenly everyone from the CEO to the developer implementing the feature can be on exactly the same page. With an aligned purpose across the whole team, from executive to business to development to IT, more time is spent making a profit and less time is spent debating how to make one.
By: Dustin Bartlett
Earlier this year my role with Point2 changed, requiring me to switch from a desktop to a laptop. I had never owned a laptop before, nor used one for extended periods of time. The switch to a MacBook Pro presented an ergonomic challenge. At Point2 there is a hodgepodge of styles for working with a laptop, ranging from simply using the laptop on its own to just about every possible combination of external mouse, keyboard and monitor. The resulting mess gets placed on as many (or more) MacGyver contraptions for organizing it all on your desk. Although a few of these look cool, they either don’t work well or take up a lot of desk space.
So the obvious, simple solution would have been to just order something from Ergotron. Most of their stuff looks quite functional while making an art deco Terminator-endoskeleton fashion statement. Small problem though: the one I’d want costs $300. The less obvious solution: fire up Google SketchUp and start designing the smallest, relatively simple to build laptop stand I could.
Two hours with SketchUp resulted in the design you see above. It works best with a 15″ MacBook Pro and a Dell 2009W monitor. I showed the design around and ended up with requests for a total of nine to be built. Several months of sporadic work later, Emily and I had spent a combined 45 hours getting them built, painted and delivered.
A few miscellaneous thoughts:
- Here is the design: Laptop Stand.zip.
- I highly recommend the CutList 4.0 plugin for SketchUp, which can either export a CSV or directly generate a cut sheet.
- All the tapered cutting in this design is also infinitely easier to deal with using a good guide system and a circular saw rather than a table saw. If you really look into Eurekazone’s products with an open mind you might even decide, like I have, that a table saw is an obsolete tool.
- The Kreg K3 Master System is also an exceptionally well designed and effective tool. I wish all my tools worked as well as this one does.
- If you are wondering where I got the legs for the stands, go here.
- Raw material wise, the stands cost about $30 to build.
So far I’ve had 3 requests for more to be built. Not sure what will be required to get more built, now that I have one on my desk.
By: Dustin Bartlett