Posts Tagged ‘TDD’

Django’s assertRedirects little gotcha

April 23, 2010

Something we’ve been trying to pay more attention to on our newest greenfield development projects is the running time of our unit test suites. One of the projects was running ~200 unit tests in 2 seconds. As development continued and the number of test cases grew, it started taking 10 seconds, then over 30 seconds. Something wasn’t right.

The first challenge was to determine which tests were the slow ones. A little googling turned up this useful patch to the Python code base. Since we are using Python 2.5 and virtual environments, we decided to simply monkey patch it. The patch makes verbosity level 2 print the run time for each test. We then went one step further and made the following change to _TextTestResult.addSuccess:

    def addSuccess(self, test):
        TestResult.addSuccess(self, test)
        if self.runTime > 0.1:
            self.stream.writeln("\nWarning: %s runs slow [%.3fs]" % (self.getDescription(test), self.runTime))
        if self.showAll:
            self.stream.writeln("[%.3fs] ok" % (self.runTime))
        elif self.dots:
            self.stream.write('.')
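
The snippet above assumes self.runTime has already been computed by the timing patch. Here is a minimal, self-contained sketch of how that monkey patch might be wired up (the attribute names are our own; the real patch we used came from the Python bug tracker):

import time
import unittest
from unittest import TestResult

_TextTestResult = unittest._TextTestResult
_original_startTest = _TextTestResult.startTest

def startTest(self, test):
    _original_startTest(self, test)
    self._started_at = time.time()  # remember when this test began

def addSuccess(self, test):
    self.runTime = time.time() - self._started_at  # elapsed time for this test
    TestResult.addSuccess(self, test)
    if self.runTime > 0.1:
        self.stream.writeln("\nWarning: %s runs slow [%.3fs]"
                            % (self.getDescription(test), self.runTime))
    if self.showAll:
        self.stream.writeln("[%.3fs] ok" % self.runTime)
    elif self.dots:
        self.stream.write('.')

# Apply the patch so every test run reports per-test timings
_TextTestResult.startTest = startTest
_TextTestResult.addSuccess = addSuccess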

Now that it was easy to tell which tests were slow, we set out to make them all fast again. As expected, in the majority of cases the culprit was an external service that was not being mocked correctly. Most of these were easily solved. But there were a few tests where we couldn’t find what hadn’t been mocked. Adding a few timing statements within these tests revealed the culprit: the Django framework’s assertRedirects method.

    def assertRedirects(self, response, expected_url, status_code=302,
                        target_status_code=200, host=None):
        """Asserts that a response redirected to a specific URL, and that the
        redirect URL can be loaded.

        Note that assertRedirects won't work for external links since it uses
        TestClient to do a request.
        """
        if hasattr(response, 'redirect_chain'):
            # The request was a followed redirect
            self.failUnless(len(response.redirect_chain) > 0,
                ("Response didn't redirect as expected: Response code was %d"
                " (expected %d)" % (response.status_code, status_code)))

            self.assertEqual(response.redirect_chain[0][1], status_code,
                ("Initial response didn't redirect as expected: Response code was %d"
                 " (expected %d)" % (response.redirect_chain[0][1], status_code)))

            url, status_code = response.redirect_chain[-1]

            self.assertEqual(response.status_code, target_status_code,
                ("Response didn't redirect as expected: Final Response code was %d"
                " (expected %d)" % (response.status_code, target_status_code)))

        else:
            # Not a followed redirect
            self.assertEqual(response.status_code, status_code,
                ("Response didn't redirect as expected: Response code was %d"
                 " (expected %d)" % (response.status_code, status_code)))

            url = response['Location']
            scheme, netloc, path, query, fragment = urlsplit(url)

            redirect_response = response.client.get(path, QueryDict(query))

            # Get the redirection page, using the same client that was used
            # to obtain the original response.
            self.assertEqual(redirect_response.status_code, target_status_code,
                ("Couldn't retrieve redirection page '%s': response code was %d"
                 " (expected %d)") %
                     (path, redirect_response.status_code, target_status_code))

        e_scheme, e_netloc, e_path, e_query, e_fragment = urlsplit(expected_url)
        if not (e_scheme or e_netloc):
            expected_url = urlunsplit(('http', host or 'testserver', e_path,
                e_query, e_fragment))

        self.assertEqual(url, expected_url,
            "Response redirected to '%s', expected '%s'" % (url, expected_url))

You’ll notice that if your GET request uses the follow=False option, you’ll end up in the else branch of this snippet, where response.client.get kindly loads the page you are redirecting to and checks that it returns a 200. Which is great, unless you don’t have the correct mocks set up for that page too. Mocking out the content for a page you aren’t actually testing didn’t seem quite right either. We didn’t care about the other page loading; it had its own test cases. We just wanted to make sure the page under test was redirecting to where we expected. The simple solution: write our own assertRedirects method.

from django.conf import settings

def assertRedirectsNoFollow(self, response, expected_url):
    # settings.TESTSERVER is defined in our project settings (e.g. 'http://testserver').
    # Only the Location header and status code are checked; the target page is never loaded.
    self.assertEqual(response._headers['location'],
                     ('Location', settings.TESTSERVER + expected_url))
    self.assertEqual(response.status_code, 302)
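
In use it looks like any other assertion. Here’s a hypothetical example, with the helper packaged as a mixin and made-up URLs:

from django.conf import settings
from django.test import TestCase

class RedirectAssertMixin(object):
    # The helper from above, pulled into a reusable mixin (hypothetical name)
    def assertRedirectsNoFollow(self, response, expected_url):
        self.assertEqual(response._headers['location'],
                         ('Location', settings.TESTSERVER + expected_url))
        self.assertEqual(response.status_code, 302)

class DealerViewTest(RedirectAssertMixin, TestCase):
    def test_save_redirects_to_dealer_list(self):
        # Made-up URL and form data, purely for illustration
        response = self.client.post('/dealers/save/', {'name': 'Example Dealer'})
        # Only the Location header and status code are checked, so the page
        # being redirected to never loads and nothing else needs to be mocked.
        self.assertRedirectsNoFollow(response, '/dealers/')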

Back to a 2 second unit test run time and all is right with the world again.

Dustin Bartlett

An easier way to test logging code

March 5, 2010

This article describes a simple technique a colleague and I stumbled upon to make unit testing logging easier in Java programs.

Why is it usually hard to unit test logging?

Static state is more or less the enemy of testability. Unfortunately, almost every Java logging example ever published (regardless of logging framework) stores the logging provider as static state:

public class MyClass {
    private static Logger log =
        Logger.getLogger("com.point2.MyClass");
    public void foo() {
        log.info("foo!");
    }
}

In order to verify that calling the method “foo” actually emits a log message, we have to somehow intercept the call to Logger.getLogger(), and:

  1. Register some sort of hook so that we can know when something was logged
  2. Prevent the regular log output from happening. This is especially important when exceptions are logged, as having a lot of test stack traces being emitted as part of a test run is annoying, confusing, and slow.

The technique:

Do away with the static call, and use dependency injection instead.

public class MyClass {
    public MyClass(Logger log) {
        this.log = log;
    }

    private final Logger log;

    public void foo() {
        log.info("foo!");
    }
}

Now any unit test that needs to verify logging behaviour can just inject a mock object. All commonly used Java logging frameworks use classes instead of interfaces, but modern mocking frameworks can mock classes without too much difficulty.
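
For example, a test might look something like this (a sketch using JUnit and Mockito; any mocking framework that can mock classes will do):

import static org.mockito.Mockito.*;

import org.apache.log4j.Logger;
import org.junit.Test;

public class MyClassTest {

    @Test
    public void fooLogsAMessage() {
        // A mocked Logger records calls instead of writing real log output
        Logger log = mock(Logger.class);

        new MyClass(log).foo();

        // Verify that foo() logged the message we expected
        verify(log).info("foo!");
    }
}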

Now, make the static call within your dependency injection framework. Here’s a Spring example:

<bean id="log4jLogger" class="org.apache.log4j.Logger"
  factory-method="getLogger">
    <constructor-arg index="0" value="com.point2.MyClass"/>
</bean>

I find that this approach actually adds flexibility to your logging options. For example, if you have a number of classes that should all contribute messages to a single log file, inject the same logging reference into each of them.

<bean id="julLogger" class="java.util.logging.Logger"
  factory-method="getLogger">
    <constructor-arg index="0" value="emailLogFile"/>
</bean>

<bean id="foo">
    <constructor-arg index="0" ref="julLogger"/>
</bean>

<bean id="bar">
    <constructor-arg index="0" ref="julLogger"/>
</bean>

In this case, it now makes sense to name this logger for the log file it outputs to, and we are free to do so without having to make sure the change is caught in a number of classes. In addition, if we ever have to refactor either of these classes (for example, to move it into a different package), we can do so without fear of changing how our logging output works.

This is a specific example of the general principle that replacing static state with inverted dependencies can make your code more flexible and easier to test.

By Sean Reilly


Test Driven Development with a Widescreen Monitor

December 9, 2009

My Point2 workstation recently received a monitor upgrade; the old pair had served me faithfully for many years, but they were worn out. So I am now the proud user of a pair of shiny new widescreen monitors.

The new monitors are much nicer than the previous monitors, and they are also a much higher resolution (1920 x 1080 vs 960 x 1280). Using them is a very enjoyable experience, but at first it seemed like the widescreen aspect ratio was wasting a lot of horizontal space:

Before: my widescreen monitor with a single file open

Then a few days ago, in the middle of a pairing session, I discovered IntelliJ’s split screen feature. In my opinion, the combination of a widescreen monitor and split screen is a killer feature for TDD. If you assign your test code to one pane and your production code to another, you can see your test and production code at the same time:

After: my widescreen with two files open simultaneously

To activate split screen choose “Split Vertically” from the Window menu, or right click on a document tab and choose “Split Vertically”.

IntelliJ works very nicely with this split screen configuration; intentions, “Go To Declaration”, and refactoring shortcuts jump nicely to the appropriate screen. Being able to read test and production code at the same time without clicking a button is especially valuable when pairing, as each half of the pair can be reading a different file. One important caveat: this only works correctly if each file is only open in one pane, and IntelliJ doesn’t enforce that for you.

Some of you might wonder what the second monitor is used for, and the answer is pair programming. We pair program all the time at Point2, so I run the two monitors as mirrors. I’ve also added an extra keyboard and mouse, and the combination allows each member of the pair to see exactly what’s going on without craning their necks or stretching. It’s a really nice feature for a pairing station.

My "Pairing Station" - a pair friendly workstation

My pair-friendly workstation. My keyboard and trackball are on the left, and the developer pairing with me has their own keyboard, mouse, and monitor on the right. Either of us can drive without any inconvenience.

By: Sean Reilly

OSGi Adventures, Part 1 – A Vaadin front-end to OSGi Services

December 1, 2009

Extending my recent experiments with the Vaadin framework, I decided I wanted to have a Vaadin front-end talking to a set of OSGi services on the back end. Initially, these will be running within the same OSGi container, which in this case is FUSE 4, the commercially supported variant of Apache ServiceMix.

One of my goals was to achieve a loose coupling between the Vaadin webapp and the backing services, so that new services can readily be added, started, stopped, and updated, all without any impact on the running Vaadin app. I also wanted to maintain the convenience of being able to run and tinker with the UI portion of my app by just doing a “mvn jetty:run”, so the app needed to be able to start even if it wasn’t inside the OSGi container.

Fortunately, doing all this is pretty easy, and in the next series of articles I’ll describe how I went about it, and where the good parts and bad parts of such an approach became obvious.

In this part, we’ll start by describing the Vaadin app, and how it calls the back-end services. In later parts, I’ll describe the evolution of the back-end services themselves, as I experimented with more sophisticated techniques.

Vaadin Dependency
I’m building all my apps with Apache Maven, so the first step was to create a POM file suitable for Vaadin. Fortunately, Vaadin is a single jar file, and trivial to add to the classpath. My POM needed this dependency:


<dependency>
  <groupId>vaadin</groupId>
  <artifactId>vaadin</artifactId>
  <version>6.1.3</version>
  <scope>system</scope>
  <systemPath>${basedir}/src/main/webapp/WEB-INF/lib/vaadin-6.1.3.jar</systemPath>
</dependency>

Here I’m using the trick of specifying a systemPath for my jar, instead of retrieving it on demand from a local Nexus repository or from the internet, but that’s just one way of doing it – the main thing is to get this one Vaadin jar on your path.

web.xml
Next I proceeded to tweak my web.xml to have my top-level Vaadin application object available on a URL. The main Vaadin object is an extension of a Servlet, so this is also very easy – here’s my web.xml in its entirety:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    id="WebApp_ID" version="2.5">
  <display-name>Admin</display-name>
  <servlet>
    <servlet-name>Admin</servlet-name>
    <servlet-class>com.vaadin.terminal.gwt.server.ApplicationServlet</servlet-class>
    <init-param>
      <param-name>application</param-name>
      <param-value>Admin</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>Admin</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>

In this situation my “Admin” class is in the default package, which is not generally a good practice, but you get the idea.

Menu and MenuDispatcher
I then used the “Tree” class in Vaadin to build myself a nice tree-style menu, and added that to the layout on my main page. My main page has some chrome regions for a top banner and other assorted visual aids, then a left-side area where the menu lives, and a “main” area, where all the real action in the application will happen.

A class I called MenuDispatcher “listens” for events on the menu (e.g. the user clicking something), and does the appropriate action when a certain menu item is clicked.

Here are the interesting bits from the MenuDispatcher – as you can see, it’s constructed with a reference to the “mainArea” layout when it’s first initialized.

public class MenuDispatcher implements ItemClickEvent.ItemClickListener {

    private VerticalLayout mainArea;

    public MenuDispatcher(VerticalLayout mainArea) {
        this.mainArea = mainArea;
    }

    public void itemClick(ItemClickEvent event) {
        if (event.getButton() == ItemClickEvent.BUTTON_LEFT) {
            String selected = event.getItemId().toString();

            if (selected.equals("create dealer")) {
                createDealer();
            } else if (selected.equals("edit dealers")) {
                editDealer();
            }
...
            }
            System.err.println("Selected " + event.getItemId());
        }
    }

    private void createDealer() {
        mainArea.removeAllComponents();
        Component form = new CreateDealerForm();
        mainArea.addComponent(form);
        mainArea.setComponentAlignment(form, Alignment.MIDDLE_CENTER);
        mainArea.requestRepaint();
    }

    private void editDealer() {
...
    }

...
}

Again, this code can be made more sophisticated – I’m thinking a little Spring magic could make adding new forms and such very convenient, but this gets us started.

Submitting the Form
The “CreateDealerForm” object referred to in the Dispatcher then builds a Vaadin “Form” class, much like the example form built in the “Book of Vaadin”. The only interesting part of my form was that I chose not to back it with a Bean class, which is an option with Vaadin forms. If you back the form with a bean, you essentially bind the form to the bean, and the form fields are generated for you from the bean.

If I wanted to then send the corresponding bean to the back-end service, then binding the bean to the form would be a good way to go. Instead, however, I don’t want my back-end services to be sharing beans with my UI application at all. I’ll explain why and how later on in more detail.

The interesting part of my form, then, is how I handle the submission of the form:

Assuming I have a button on my form, defined like so:

Button okbutton = new Button("Submit", dealerForm, "commit");

I can add a listener to this button (again, using Vaadin’s magic ability to route the Ajax events to my Java code) like this:

okbutton.addListener(new Button.ClickListener() {
    public void buttonClick(Button.ClickEvent event) {

        Map parameters = new HashMap();
        for (Object id : dealerForm.getItemPropertyIds()) {
            Field field = dealerForm.getField(id);
            parameters.put(id.toString(), field.getValue());
        }
        ServiceClient.call("dealer", "save", parameters, null, null);
        getWindow().showNotification("Dealer Saved");
    }
});

I’m using an anonymous inner class to listen to the event, and the “buttonClick” method gets called when the user says “Submit”.

The next steps are where the form meets the back-end service: First, I iterate over the form and build a map containing all the field values. The map is keyed with a string, for the name of the field or property, and the value in the map is the value the user entered. Note that these values are already typed – e.g. a checkbox can have a boolean, a TextField can have a string, a calendar field can have a java.util.Date. We retain these types, and wrap everything up in a map.

Now (after a quick println so I can see what’s going on), I call a static method on a class called ServiceClient, sending along the name of the service I want to call, the operation on that service, and the parameter map I just built from my form.

The last line just shows a nice “fade away” non-modal notification to the user that the dealer he entered has been saved (provided the call to ServiceClient didn’t throw an exception, which we assume for the moment for simplicity).

So, now we have our “call” method to consider, where the map of values we built in our Vaadin front-end gets handed off to the appropriate back-end service.

Calling the Service

The only job of the ServiceClient object is to route a call from somewhere in our Vaadin app (in this case, the submission of a form) to the proper back-end service, and potentially return a response.

We identify our back-end services by a simple string, the “service name” (our first argument). The second argument to call tells us the “operation” we want from this service, as a single service can normally perform several different operations. For instance, our dealer service might have the ability to save a dealer, get a list of dealers, find a specific dealer, and so forth.

In Java parlance, a service operation might correspond to a method on the service interface, but we’re not coupling that tightly at this point – our service, after all, might not be written in Java for all we know at this point, and I prefer to keep it that way.

This is the “loose joint” in the mechanism between the UI and the back-end. To keep the joint loose, we don’t send a predefined Bean class to the back-end service to define the parameters to the service operation, we send a map, where that map contains a set of key/value pairs. The keys are always Strings, but the values can be any type – possibly even another map, for instance, which would allow us to express quite complex structures if required, in a type-independent fashion.
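
To make that contract concrete, the dispatch interface the UI depends on can be as small as a single method. The sketch below is inferred from the call site in ServiceClient further down; the real interface is the subject of the next post and may differ in detail.

import java.util.List;
import java.util.Map;

// Sketch only: the shape implied by how ServiceClient calls the dispatcher.
public interface DispatchService {

    // Route "operation" on the named service, passing a map of parameters,
    // and return the results as a list of maps (one map per result object).
    List<Map> call(String serviceName, String operation, Map params,
                   String versionPattern, String securityToken) throws Exception;
}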

Let’s have a look at ServiceClient:

public class ServiceClient implements BundleActivator {

    private static DispatchService dispatchService = null;

    public void start(BundleContext context) throws Exception {
        ServiceReference reference =
            context.getServiceReference(DispatchService.class.getName());
        if (reference == null)
            throw new RuntimeException("Cannot find the dispatch service");
        dispatchService = (DispatchService) context.getService(reference);
        if (dispatchService == null)
            throw new RuntimeException("Didn't find dispatch service");
    }

    public void stop(BundleContext context) throws Exception {
        System.out.println("Stopping bundle");
    }

    public static List<Map> call(String serviceName, String operation, Map params,
                                 String versionPattern, String securityToken) throws Exception {
        if (dispatchService == null) {
            System.out.println("No OSGi dispatch service available - using dummy static data");
            return StaticDataService.call(serviceName, operation, params,
                                          versionPattern, securityToken);
        }
        return dispatchService.call(serviceName, operation, params,
                                    versionPattern, securityToken);
    }
}

Let’s go through that class piece by piece. First, you’ll notice that the class implements the BundleActivator interface – this tells the OSGi container that it is possible to call this class when the OSGi bundle containing it is started and stopped. During the start process, you can have the class receive a reference to the BundleContext. This is somewhat analogous to the Spring ApplicationContext, in that it gives our class access to the other services also deployed in OSGi. The Spring DM framework lets you do this kind of thing more declaratively, but I always find it’s good to know how the low level works before engaging the autopilot.

In order to allow BundleActivator to be found, we need another couple of things on our classpath, so we add this to our POM:

<dependency>
  <groupId>org.osgi</groupId>
  <artifactId>osgi_R4_core</artifactId>
  <version>1.0</version>
</dependency>

<dependency>
  <groupId>org.osgi</groupId>
  <artifactId>osgi_R4_compendium</artifactId>
  <version>1.0</version>
</dependency>

This defines the BundleActivator interface and other OSGi requirements which we use.

As you can see, we use our reference to the OSGi context to get a ServiceReference to an interface called “DispatchService”. We’ll examine the dispatch service in detail in my next posting, but for now you can see we hold on to the reference in a static field.

When the call to the “call” method happens, our DispatchService reference is already available if we’re running inside an OSGi container, all wired up and ready to go. To give us flexibility, though, we also allow for the case where the dispatch service is null, meaning we’re not running inside an OSGi container.

Instead of crashing and burning, however, we simply redirect our call to a “StaticDataService” class, which does just what you might expect. For every call it understands, it returns a static “canned” response. This allows us to build and test our UI without actually having written any of the back-end services, and to continue to run our Vaadin app with a simple “mvn jetty:run”, when all we’re working on is look and feel, or logic that only affects the UI.
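
For illustration, a canned implementation along these lines is all that’s needed. The service and operation names and the data below are made up; the real class simply returns whatever static responses the UI under construction needs.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a canned, in-memory stand-in for the real back-end services.
public class StaticDataService {

    public static List<Map> call(String serviceName, String operation, Map params,
                                 String versionPattern, String securityToken) {
        List<Map> results = new ArrayList<Map>();
        if ("dealer".equals(serviceName) && "save".equals(operation)) {
            // Pretend the save succeeded and echo the submitted fields back
            results.add(new HashMap(params));
        }
        return results;
    }
}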

This means my “cycle time” to test a change in the UI code is a matter of seconds – when I do a “mvn jetty:run” my code is compiled and up and running in my browser in about 5 seconds, and that’s on my 5 year-old macbook laptop, so there’s no penalty for not having a dynamic language in the mix here, from my point of view.

If dispatchService is not null, however, then we’re running “for real” inside an OSGi container, so we use our stored reference to the dispatch service to “forward on” our call. The dispatch service works its magic, which we’ll examine in a later post, and returns a List of Map objects with our response. This list might only contain one Map, of course, if the service was a simple one.

The response is then returned to the caller elsewhere in the Vaadin application, to do whatever is necessary from a UI point of view – perhaps populate a form or table with the response data.

As we’ll see in detail in my next post, the dispatch service in this case acts as a “barrier” between the UI-oriented code in our Vaadin app and our domain-specific application logic contained in our services. It is responsible for mapping our generic map of parameters into whatever domain beans are used by our back-end services, and for figuring out which of those services should be called, and which operation on that service is required. Those services then return whatever domain-specific objects they return, and the dispatcher grinds them into non-type-bound maps, or lists of maps, if there is a whole series of returns.

This means our Vaadin app only ever has to talk to one thing: the dispatcher. We only change our Vaadin code for one reason: to change the UI, never in response to a change of service code, even if the beans and classes of that service change significantly.

Next time we’ll look at the dispatch service and the first of our application-specific domain-aware services, and see how they come together.

By: Mike Nash

Specs and Selenium Together

August 25, 2009

I recently had the chance to dive into a new project, this one with a rich web interface. In order to create acceptance tests around the (large and mostly untested) existing code, we’ve started writing acceptance tests with specs.

Once we have our specs written to express what the existing functionality is, we can refactor and work on the codebase in more safety, our tests acting as a “motion detector” to let us know if we’ve broken something, while we write more detailed low-level tests (unit tests) to allow easier refactoring of smaller pieces of the application.

What’s interesting about our latest batch of specs is that they are written to express behaviours as experienced through a web browser – e.g. “when a user goes to this link and clicks this button on the page, he sees something happen”. In order to make this work we’ve paired up specs with Selenium, a well-known web testing framework.

By abstracting out the connection to Selenium into a parent Scala object, we can build a DSL-ish testing language that lets us say things like this:


object AUserChangesLanguages extends BaseSpecification {

  "a public user who visits the site" should beAbleTo {
    "Change their language to French" in {
      open("/")
      select("languageSelect", "value=fr")
      waitForPage
      location must include("/fr/")
    }
    "Change their language to German" in {
      select("languageSelect", "value=de")
      waitForPage
      location must include("/de/")
    }
    "Change their language to Polish" in {
      select("languageSelect", "value=pl")
      waitForPage
      location must include("/pl/")
    }
  }
}

This code simply expresses that as a user selects a language from a drop-down of languages, the page should refresh (via some Javascript on the page) and redirect them to a new URL. The new URL contains the language code, so we can tell we’ve arrived at the right page by the “location must include…” line.
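
The BaseSpecification parent is where Selenium gets wired in. Here’s a rough sketch of what it might look like. The host, port, browser, base URL and helper names are assumptions, and our real version also defines the “beAbleTo” verb used above, which I’ve omitted here.

import org.specs.Specification
import com.thoughtworks.selenium.DefaultSelenium

// Sketch only: a parent specification that hides the Selenium RC client
// behind a few helper methods so specs read like plain English.
class BaseSpecification extends Specification {

  // Assumes a Selenium RC server is already running on localhost:4444;
  // browser shutdown is omitted for brevity.
  val selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/")
  selenium.start()

  def open(path: String) = selenium.open(path)
  def select(locator: String, option: String) = selenium.select(locator, option)
  def waitForPage = selenium.waitForPageToLoad("30000")
  def location: String = selenium.getLocation
}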

Simple and expressive, these tests can be run with any of your choice of browsers (e.g. Firefox, Safari, or, if you insist, Internet Explorer).

Of course, there’s lots more to testing web pages, and we’re fleshing out our DSL day by day as it needs to express more sophisticated interactions with the application.

We can get elements of the page (via Xpath), make assertions about their values, click on things, type things into fields and submit forms, basically all the operations a user might want to do with a web application.

There are some frustrations, of course. The Xpath implementation on different browsers works a bit differently – well, ok, to be fair, it works on all browsers except Internet Exploder, where it fails in various frustrating ways. We’re working on ways to overcome this that don’t involve having any “if browser == ” kind of logic.

It’s also necessary to start the Selenium RC server before running the specs, but a bit of Ant magic fixes this.
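
That “Ant magic” can be as simple as spawning the server jar before the test target runs. A sketch (the jar path and target name are assumptions):

<target name="start-selenium-server">
  <!-- Launch the Selenium RC server in the background so the specs can connect to it;
       adjust the jar location to wherever selenium-server.jar lives in your project. -->
  <java jar="lib/selenium-server.jar" fork="true" spawn="true"/>
</target>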

We’ve got these specs running on our TeamCity continuous integration server, using the TeamCity runner supplied with Specs, where we get nicely formatted reports as to what’s pending (e.g. not finished being written yet), what’s passing, and what’s failing.

The specs written with Selenium this way are also a bit slow, as they must actually wait in some cases for the browser (and the underlying app!) to catch up. When run with IE as the browser, they’re more than just a bit slow, in fact…

They are, however, gratifyingly black-box, as they don’t have any connection to the code of the running application at all. For that matter, the application under test can be written in any language at all, and in this case is a combination of J2EE/JSP and some .NET.

There’s a lot of promise in this type of testing, even with its occasional frustrations and limitations, and I suspect we’ll be doing a lot more of it.

By: Mike Nash

Scala Continuous Testing with sbt

July 27, 2009

I’ve recently had occasion to start an open source project, and the correct tool for the job appears to be Scala.

So far the project is going well, but the pain has been around the build and IDE support for rapid and convenient development in Scala. Although all three of the major IDEs I’ve worked with recently (Eclipse, IntelliJ IDEA and Netbeans) have plugins for Scala, they are all early releases, and have various degrees of pain associated with them.

I ended up using Netbeans for editing and as a subversion client, then building with Maven when I wanted to compile and/or run tests. Calling Maven from within Netbeans to build a Scala project is still a bit creaky, so I was doing it from a terminal window directly.

This is very inconvenient, for a number of reasons. First, I’m working in a Behaviour-Driven Development mode, using specs as my BDD framework. This means I first write a specification in specs, see it fail, then write the code necessary to make it pass, then write (or extend) the next specification for the next behaviour I want, and so forth.

When I wanted to run a test, I had to flip to a command window and issue a Maven command to build and select the specified test to run, something like this:


mvn -Dtest=foo test

In order to make this work I had to declare my specs as JUnit tests (with the @Test annotation), even though they don’t use anything else from JUnit. This felt like a bit of a hack, albeit a useful one. Another pain point was the startup time for Maven (although I understand there’s a “console” plugin for Maven as well that can perhaps reduce this particular pain).

As I like to tinker with new stuff, I thought I’d make a departure from Maven and give sbt a try. Sbt is a build tool written in Scala that supports building both Scala and Java (and mixed) projects in a very simple way. Unlike Ant, there’s no up-front pain to write a build script, though, as sbt can make reasonable assumptions (which you can override) about where to find your classes and libraries, so you hit the ground running.

In literally seconds I was up and running after following the install instructions on the sbt site. After a bit of experimenting I found the “console” mode in sbt, where you launch sbt and leave it running.

Once in console mode you can either just type “test” every time you want to build and run all tests, or be more selective, and run only the tests that failed last time, or just a single specified test, if you’re working on just one feature. Any of these operations are fast – mostly because sbt is already loaded and running, but also because sbt does a bit less work then Maven does on every build.

Although sbt can be configured to work in conjunction with Ivy or Maven repositories, you can also just drop your dependency libs into the “lib” directory in your project. For open source this is rather nice, as it saves the user of the project the trouble of trying to find them. Even supplying a Maven pom that specifies the repositories from which to download your dependencies is not a guarantee, as repositories change over time. Many is the time I’ve gone to download a dependency (or rather, Maven has gone to do it for me), only to find it’s not where it used to be, has a different name or version, or some other problem causes my build to fail. Like Ant, sbt can avoid this problem by keeping dependencies locally. Unlike Ant, it can also go and get the dependencies for you the first time from the same repositories Maven uses – perhaps giving you the best of both worlds in some situations.
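
If you do want managed dependencies, declaring one in the project definition is a one-liner. Here’s a sketch of what that looks like in an sbt project of this vintage (the class name and the specs coordinates are assumptions):

import sbt._

// Sketch only: lives under project/build/ in the sbt project layout.
class MyProject(info: ProjectInfo) extends DefaultProject(info) {
  // sbt fetches this from the usual Maven/Ivy repositories on first build
  val specs = "org.scala-tools.testing" % "specs" % "1.6.1" % "test"
}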

Even more interesting was the command


~ test

Which runs all the tests, then waits for any source code (test or main) to change. When it sees a change, it runs all the tests again (after compiling the changes, of course). Poor man’s continuous testing 🙂

Wait, it gets even awesomer! When you say


~ test SomeTest

sbt will wait for any changes, then run just the specified test. This is ideal when you know you’re only working on a specific set of functionality (and therefore affecting only a single test). When sbt is waiting, you can just hit any key to return to the interactive mode, so it’s easy to change it from one of these modes to another.

Other commands in sbt are also very familiar and quick, such as “compile”, which does exactly what you’d expect from the name. “Package” is another good one – it produces a jar artifact, just like the Maven command of the same name. I haven’t yet tried its deploy mechanisms properly, but early results look promising.

I also like the “console” command, which runs the Scala command-line console, but with your project on the classpath, along with all its dependencies. This lets you do ad-hoc statements quickly and easily, and see the results right away. When I’m not sure what’s going on with a failing spec, I’ve found this mode very helpful for experimenting. Scala is such an expressive language that I can write a quick experiment in one or two lines of code, see the result (as the Scala console also evaluates expressions by default), and go back to coding and testing, all without restarting sbt. Quite nice, and somewhat reminiscent of the similar functionality in Rails and “irb” (and JRuby’s equivalent, jirb).

There are many other things I’ve found about sbt that I like so far, but those are topics for another post later on….

By: Mike Nash

The Corporate Culture of Post-it Notes

July 8, 2009

Ahh, the ubiquitous Post-It® Note. My workspace is covered with lovely multi-coloured notes, or it was until I discovered Digital Notes. The story of the unassuming Post-it has become legend, but I wanted to share it again as told in The Knowledge-Creating Company by Nonaka and Takeuchi.


“Art [Fry] sang in the church  choir and noticed that the slips of paper he inserted to mark selected hymns would fall out.  He decided to create a marker that would stick to the page but would peel off without damaging it.  He made use of a peel-able adhesive that Spence Silver at the Central Research Lab had developed four years previously, and made himself some prototypes of the self-attaching sheets of paper.

Sensing a market beyond just hymnal markers, Fry got permission to use a pilot plant and started working nights to develop a process for coating Silver’s adhesive on paper. When he was told that the machine he designed could take six months to make and cost a small fortune, he single-handedly built a crude version in his own basement overnight and brought it to work the next morning. The machine worked. But the marketing people did some surveys with potential customers, who said they didn’t feel the need for paper with a weak adhesive. Fry said, “Even though I felt that there would be demand for the product, I didn’t know how to explain it in words. Even if I found the words to explain, no one would understand…” Instead, Fry distributed samples within 3M and asked people to try them out. The rest was history. Post-it Notes became a sensation thanks to Art Fry’s entrepreneurial dedication and dogged persistence.

(Nonaka I, Takeuchi H. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. 1995)

That entrepreneurial spirit has been part of 3M’s corporate culture almost since inception.  As stated in the William L. McKnight Management Principles:

“As our business grows, it becomes increasingly necessary to delegate responsibility and to encourage men and women to exercise their initiative. This requires considerable tolerance. Those men and women, to whom we delegate authority and responsibility, if they are good people, are going to want to do their jobs in their own way.

“Mistakes will be made. But if a person is essentially right, the mistakes he or she makes are not as serious in the long run as the mistakes management will make if it undertakes to tell those in authority exactly how they must do their jobs.

“Management that is destructively critical when mistakes are made kills initiative. And it’s essential that we have many people with initiative if we are to continue to grow.”

Chris touched on this when he blogged about Failing Should be Easy and even Why don’t people like my ideas?!.  Art Fry was given the opportunity to fail.  When people didn’t like his idea, he proceeded to find a way to prove that his idea truly was great.

I’m proud that we here at Point2 allow people to fail; in fact, using Test Driven Development we ensure that everyone fails at first.

We give people time to explore and experiment, and everyone has some time for professional development.

Incidentally, Ken Schwaber’s early paper, “SCRUM Development Process”, heavily references the work of Takeuchi and Nonaka and their description of a rugby style of organization.

By: Kevin Bitinsky