Archive

Posts Tagged ‘testing’

Django’s assertRedirects little gotcha

April 23, 2010 1 comment

Something we’ve been trying to pay more attention to on our newest greenfield development projects is the running time of our unit test suites. One of the projects was running ~200 unit tests in 2 seconds. As development continued and the number of test cases grew, the run started taking 10 seconds, then over 30 seconds. Something wasn’t right.

The first challenge was to determine which tests were the slow ones. A little googling found this useful patch to the Python code base. Since we are using Python 2.5 and virtual environments, we decided to simply monkey patch it. This makes verbosity level 2 spit out the run time for each test. We then went one step further and made the following code change to _TextTestResult.addSuccess:

    def addSuccess(self, test):
        TestResult.addSuccess(self, test)
        if self.runTime > 0.1:
            self.stream.writeln("\nWarning: %s runs slow [%.3fs]" % (self.getDescription(test), self.runTime))
        if self.showAll:
            self.stream.writeln("[%.3fs] ok" % (self.runTime))
        elif self.dots:
            self.stream.write('.')

With it now easy to tell which tests were slow, we set out to make them all fast again. As expected, the majority of the cases were an external service not being mocked correctly. Most of these were easily solved. But there were a few tests where we couldn’t find what hadn’t been mocked. Adding a few timing statements within these tests revealed the culprit: the Django framework’s assertRedirects method.

    def assertRedirects(self, response, expected_url, status_code=302,
                        target_status_code=200, host=None):
        """Asserts that a response redirected to a specific URL, and that the
        redirect URL can be loaded.

        Note that assertRedirects won't work for external links since it uses
        TestClient to do a request.
        """
        if hasattr(response, 'redirect_chain'):
            # The request was a followed redirect
            self.failUnless(len(response.redirect_chain) > 0,
                ("Response didn't redirect as expected: Response code was %d"
                " (expected %d)" % (response.status_code, status_code)))

            self.assertEqual(response.redirect_chain[0][1], status_code,
                ("Initial response didn't redirect as expected: Response code was %d"
                 " (expected %d)" % (response.redirect_chain[0][1], status_code)))

            url, status_code = response.redirect_chain[-1]

            self.assertEqual(response.status_code, target_status_code,
                ("Response didn't redirect as expected: Final Response code was %d"
                " (expected %d)" % (response.status_code, target_status_code)))

        else:
            # Not a followed redirect
            self.assertEqual(response.status_code, status_code,
                ("Response didn't redirect as expected: Response code was %d"
                 " (expected %d)" % (response.status_code, status_code)))

            url = response['Location']
            scheme, netloc, path, query, fragment = urlsplit(url)

            redirect_response = response.client.get(path, QueryDict(query))

            # Get the redirection page, using the same client that was used
            # to obtain the original response.
            self.assertEqual(redirect_response.status_code, target_status_code,
                ("Couldn't retrieve redirection page '%s': response code was %d"
                 " (expected %d)") %
                     (path, redirect_response.status_code, target_status_code))

        e_scheme, e_netloc, e_path, e_query, e_fragment = urlsplit(expected_url)
        if not (e_scheme or e_netloc):
            expected_url = urlunsplit(('http', host or 'testserver', e_path,
                e_query, e_fragment))

        self.assertEqual(url, expected_url,
            "Response redirected to '%s', expected '%s'" % (url, expected_url))

You’ll notice that if your GET request uses the follow=False option you’ll end up in the else branch of this snippet, which kindly fetches the page you are redirecting to and checks that it returns a 200. Which is great, unless you don’t have the correct mocks set up for that page too. Mocking out the content for a page you aren’t actually testing also didn’t seem quite right. We didn’t care about the other page loading; it had its own test cases. We just wanted to make sure the page under test was redirecting to where we expected. The simple solution: write our own assertRedirects method.

def assertRedirectsNoFollow(self, response, expected_url):
    self.assertEqual(response._headers['location'], ('Location', settings.TESTSERVER + expected_url))
    self.assertEqual(response.status_code, 302)

Back to a 2 second unit test run time and all is right with the world again.

Dustin Bartlett


How to Avoid Auto Deployments Behaving Badly

December 18, 2009 4 comments

A couple of posts relating to development/ops gaps, and access control reminded me how very far we have come with our environments over the past few years.

Environment management in general comes with its own set of complications, but what I am interested in knowing is how other Systems/Operations departments are handling auto deployments. Several of the points in the articles above are moot for us and have been for some time. Access to production systems is heavily limited to our Systems Administrators, and many of our products are deployed with scripts triggered by TeamCity builds. Code and configuration files are not deployed manually to our end-to-end test environment; when a change is needed, it is generally the auto-deployment script that gets touched to alter the outcome rather than a human touching a file. Don’t get me wrong, there are still blunders that occur from one environment to another (“oh, I forgot that we needed to add that file, so I didn’t add it to the production release… oops”). However, my main concern is what to do when an auto-deployment script behaves badly.

First, let us discuss what auto deployment is to our teams. When code is ready to release to a test or production environment, an auto deployment script will be executed to place the code on the correct server(s). In the case where code is going to production, a Systems Administrator is assigned a ticket to take a few preparation steps, then run the script to deploy the code. Depending on the environment and product, one or all of the following things could happen during an auto-deployment run:

  • Disable/enable monitors
  • Disable/enable nodes in a cluster
  • Run database scripts
  • Svn checkout or switch a directory or file
  • Copy files from one location to another
  • Start/stop services
  • Start/stop web containers

The first time I ran an auto-deployment release, I felt naked. I was so used to knowing every piece of what was actually required for a product to become live that pressing a button felt completely foreign. I worried that something would be missed, and I just wouldn’t know. So what should you do in a case like this? How do you mitigate the risk of the system being compromised? You’re running a script that you know little about, other than that you just typed several passwords into a config file on a secure server and it is executing with your permissions. However, because you have done this manually before, you know what the behavior and outcome of the script should be, so verify it.

The first few times we were vigilant. Check to make sure the node is out, review all the database scripts prior to release, check log files during and after the release, etc. Over time though, we started to become more comfortable with the release runs, and this is when the problems started to arise.

Auto-deployment scripts in recent months have been known to produce any one of the following symptoms:

  • Didn’t run the database scripts, or ran the incorrect database scripts
  • Didn’t remove the node from the cluster or didn’t put it back in
  • Deployed the wrong version
  • Deployed the wrong application
  • Failed because of a spelling mistake
  • Showed “all green” in our monitoring system, but still caused a serious underlying problem

All of these problems are compound; both the development and systems teams are responsible for recognizing and mitigating them. The real root of the solution is to treat the auto-deployment script as exactly what it is: code. What do you do with code? You test it, you unit test it, and you make sure that it throws exceptions and failures when it doesn’t do what it is supposed to do. Most of the above problems could easily be avoided if the script were intelligent enough to know when it has made a mistake.

So, this is my list of remedies for saving an operations team from feeling naked when learning to run auto deployment scripts.

  • Get involved, have the script demoed to you and understand what it does before you run it. Ensure that every time it changes, you receive a new demo.
  • Don’t be naive: yes, maybe one day this script will run without someone watching over it, but before that happens, sysadmins and developers should work out the kinks.
  • Review database scripts with a developer before the script is executed, then check afterwards to ensure that the intended operations occurred.
  • Any time a problem is encountered post-release that is found to be the fault of the script, have the script altered to create a FAILURE state if it ever happens again. Additionally, review the script for other possible failures and have those adjusted too.
  • Check log files, review the script output and review the logs of your application containers.
  • Ask developers to check log files too; you will not always catch an error.
  • Consider having the script pause at critical stages (e.g. between nodes) so that things can be inspected before they go too far.
  • Improve your monitoring systems. Create monitoring situations for the errors that get caused by the releases.

One day, after you have executed a script 10 times and it has not made a mistake, you will trust it. The important thing to remember is that the scripts will continue to evolve, so stay involved and help test every change. After all, as Systems Administrators or Operations staff, it is your job to ensure that the system is operational and take responsibility or alert the appropriate team when it is not.

Melanie Cey
Systems Administration Team Lead

OSGi Adventures, Part 1 – A Vaadin front-end to OSGi Services

December 1, 2009 1 comment

Extending my recent experiments with the Vaadin framework, I decided I wanted to have a Vaadin front-end talking to a set of OSGi services on the back end. Initially, these will be running within the same OSGi container, which in this case is FUSE 4, the commercially supported variant of Apache ServiceMix.

One of my goals was to achieve a loose coupling between the Vaadin webapp and the backing services, so that new services can readily be added, started, stopped, and updated, all without any impact on the running Vaadin app. I also wanted to maintain the convenience of being able to run and tinker with the UI portion of my app by just doing a “mvn jetty:run”, so the app needed to be able to start even if it wasn’t inside the OSGi container.

Fortunately, doing all this is pretty easy, and in the next series of articles I’ll describe how I went about it, and where the good parts and bad parts of such an approach became obvious.

In this part, we’ll start by describing the Vaadin app, and how it calls the back-end services. In later parts, I’ll describe the evolution of the back-end services themselves, as I experimented with more sophisticated techniques.

Vaadin Dependency
I’m building all my apps with Apache Maven, so the first step was to create a POM file suitable for Vaadin. Fortunately, Vaadin is a single jar file, and trivial to add to the classpath. My POM needed this dependency:


<dependency>
  <groupId>vaadin</groupId>
  <artifactId>vaadin</artifactId>
  <version>6.1.3</version>
  <scope>system</scope>
  <systemPath>${basedir}/src/main/webapp/WEB-INF/lib/vaadin-6.1.3.jar</systemPath>
</dependency>

Here I’m using the trick of specifying a systemPath for my jar, instead of retrieving it on demand from a local Nexus repository or from the internet, but that’s just one way of doing it – the main thing is to get this one Vaadin jar on your path.

web.xml
Next I proceeded to tweak my web.xml to have my top-level Vaadin application object available on a URL. The main Vaadin object is an extension of a Servlet, so this is also very easy – here’s my web.xml in its entirety:

     
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    id="WebApp_ID" version="2.5">
  <display-name>Admin</display-name>
  <servlet>
    <servlet-name>Admin</servlet-name>
    <servlet-class>com.vaadin.terminal.gwt.server.ApplicationServlet</servlet-class>
    <init-param>
      <param-name>application</param-name>
      <param-value>Admin</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>Admin</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>

In this situation my “Admin” class is in the default package, which is not generally a good practice, but you get the idea.

Menu and MenuDispatcher
I then used the “Tree” class in Vaadin to build myself a nice tree-style menu, and added that to the layout on my main page. My main page has some chrome regions for a top banner and other assorted visual aids, then a left-side area where the menu lives, and a “main” area, where all the real action in the application will happen.

A class I called MenuDispatcher “listens” for events on the menu (e.g. the user clicking something), and does the appropriate action when a certain menu item is clicked.
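The wiring itself is only a few lines. As a rough sketch (the item captions, the mainArea variable, and the usual com.vaadin.ui imports are assumptions for illustration, not the actual application code):

// Sketch of wiring a Vaadin 6 Tree menu to the dispatcher -- the item
// captions and the mainArea layout are illustrative assumptions.
Tree menu = new Tree();
menu.addItem("create dealer");
menu.addItem("edit dealers");
menu.setChildrenAllowed("create dealer", false);
menu.setChildrenAllowed("edit dealers", false);
menu.setSelectable(true);
menu.setImmediate(true);

VerticalLayout mainArea = new VerticalLayout();
menu.addListener(new MenuDispatcher(mainArea));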

Here are the interesting bits from the MenuDispatcher – as you can see, it’s constructed with a reference to the “mainArea” layout when it’s first initialized.

     
public class MenuDispatcher implements ItemClickEvent.ItemClickListener {

    private VerticalLayout mainArea;

    public MenuDispatcher(VerticalLayout mainArea) {
        this.mainArea = mainArea;
    }

    public void itemClick(ItemClickEvent event) {
        if (event.getButton() == ItemClickEvent.BUTTON_LEFT) {
            String selected = event.getItemId().toString();

            if (selected.equals("create dealer")) {
                createDealer();
            } else if (selected.equals("edit dealers")) {
                editDealer();
            }
            // ... further menu items handled here ...

            System.err.println("Selected " + event.getItemId());
        }
    }

    private void createDealer() {
        mainArea.removeAllComponents();
        Component form = new CreateDealerForm();
        mainArea.addComponent(form);
        mainArea.setComponentAlignment(form, Alignment.MIDDLE_CENTER);
        mainArea.requestRepaint();
    }

    private void editDealer() {
        // ...
    }

    // ...
}

Again, this code can be made more sophisticated – I’m thinking a little Spring magic could make adding new forms and such very convenient, but this gets us started.

Submitting the Form
The “CreateDealerForm” object referred to in the Dispatcher then builds a Vaadin “Form” class, much like the example form built in the “Book of Vaadin”. The only interesting part of my form was that I chose not to back it with a Bean class, which is an option with Vaadin forms. If you back with a bean, then you essentially bind the form to the bean, and the form fields are generated for you from the bean.

If I wanted to then send the corresponding bean to the back-end service, then binding the bean to the form would be a good way to go. Instead, however, I don’t want my back-end services to be sharing beans with my UI application at all. I’ll explain why and how later on in more detail.
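For context, building an unbound form by hand in Vaadin 6 looks roughly like the following sketch; the property ids and captions are invented for illustration and aren’t the actual CreateDealerForm:

// Sketch of a Vaadin 6 Form built without a backing bean -- property ids
// and captions here are invented for illustration.
Form dealerForm = new Form();
dealerForm.setCaption("Create Dealer");

TextField name = new TextField("Dealer name");
name.setRequired(true);
dealerForm.addField("name", name);
dealerForm.addField("active", new CheckBox("Active"));
dealerForm.addField("since", new DateField("Dealer since"));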

The interesting part of my form, then, is how I handle the submission of the form:

Assuming I have a button on my form, defined like so:

     
Button okbutton = new Button("Submit",
    dealerForm, "commit");

I can add a listener to this button (again, using Vaadin’s magic ability to route the Ajax events to my Java code) like so:

     
okbutton.addListener(new Button.ClickListener() {
    public void buttonClick(Button.ClickEvent event) {
        Map parameters = new HashMap();
        for (Object id : dealerForm.getItemPropertyIds()) {
            Field field = dealerForm.getField(id);
            parameters.put(id.toString(), field.getValue());
        }
        ServiceClient.call("dealer", "save", parameters, null, null);
        getWindow().showNotification("Dealer Saved");
    }
});

I’m using an anonymous inner class to listen to the event, and the “buttonClick” method gets called when the user says “Submit”.

The next steps are where the form meets the back-end service: First, I iterate over the form and build a map containing all the field values. The map is keyed with a string, for the name of the field or property, and the value in the map is the value the user entered. Note that these values are already typed – e.g. a checkbox can have a boolean, a TextField can have a string, a calendar field can have a java.util.Date. We retain these types, and wrap everything up in a map.

Now (after a quick println so I can see what’s going on), I call a static method on a class called ServiceClient, sending along the name of the service I want to call, the operation on that service, and the parameter map I just built from my form.

The last line just shows a nice “fade away” non-modal notification to the user that the dealer he entered has been saved (assuming the call to ServiceClient didn’t throw an exception, which we’re assuming for the moment for simplicity).

So, now we have our “call” method to consider, where the map of values we built in our Vaadin front-end gets handed off to the appropriate back-end service.

Calling the Service

The only job of the ServiceClient object is to route a call from somewhere in our Vaadin app (in this case, the submission of a form) to the proper back-end service, and potentially return a response.

We identify our back-end services by a simple string, the “service name” (our first argument). The second argument to call tells us the “operation” we want from this service, as a single service can normally perform several different operations. For instance, our dealer service might have the ability to save a dealer, get a list of dealers, find a specific dealer, and so forth.

In Java parlance, a service operation might correspond to a method on the service interface, but we’re not coupling that tightly at this point – our service, after all, might not be written in Java for all we know at this point, and I prefer to keep it that way.

This is the “loose joint” in the mechanism between the UI and the back-end. To keep the joint loose, we don’t send a predefined Bean class to the back-end service to define the parameters to the service operation, we send a map, where that map contains a set of key/value pairs. The keys are always Strings, but the values can be any type – possibly even another map, for instance, which would allow us to express quite complex structures if required, in a type-independent fashion.
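Here’s a sketch of the shape such a dispatch interface can take, based on the call signature ServiceClient uses below; the real DispatchService is the subject of the next post, so treat the details as assumptions:

import java.util.List;
import java.util.Map;

// Sketch only: a dispatch interface matching the call signature used by
// ServiceClient below. The real DispatchService is described in the next post.
public interface DispatchService {
    List<Map> call(String serviceName,
                   String operation,
                   Map parameters,
                   String versionPattern,
                   String securityToken) throws Exception;
}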

Let’s have a look at ServiceClient:

     
public class ServiceClient implements BundleActivator {

    private static DispatchService dispatchService = null;

    public void start(BundleContext context) throws Exception {
        ServiceReference reference =
            context.getServiceReference(DispatchService.class.getName());
        if (reference == null)
            throw new RuntimeException("Cannot find the dispatch service");
        dispatchService = (DispatchService) context.getService(reference);
        if (dispatchService == null)
            throw new RuntimeException("Didn't find dispatch service");
    }

    public void stop(BundleContext context) throws Exception {
        System.out.println("Stopping bundle");
    }

    public static List<Map> call(String serviceName, String operation, Map params,
                                 String versionPattern, String securityToken)
            throws Exception {
        if (dispatchService == null) {
            System.out.println("No OSGi dispatch service available"
                + " - using dummy static data");
            return StaticDataService.call(serviceName, operation, params,
                versionPattern, securityToken);
        }
        return dispatchService.call(serviceName, operation, params,
            versionPattern, securityToken);
    }
}

Let’s go through that class piece by piece. First, you’ll notice that the class implements the BundleActivator interface – this tells the OSGi container that it is possible to call this class when the OSGi bundle containing it is started and stopped. During the start process, you can have the class receive a reference to the BundleContext. This is somewhat analogous to the Spring ApplicationContext, in that it gives our class access to the other services also deployed in OSGi. The Spring DM framework lets you do this kind of thing more declaratively, but it’s good to know how the low-level machinery works before engaging the autopilot, I always find.

In order to allow BundleActivator to be found, we need another couple of things on our classpath, so we add this to our POM:

     
<dependency>
  <groupId>org.osgi</groupId>
  <artifactId>osgi_R4_core</artifactId>
  <version>1.0</version>
</dependency>

<dependency>
  <groupId>org.osgi</groupId>
  <artifactId>osgi_R4_compendium</artifactId>
  <version>1.0</version>
</dependency>

This defines the BundleActivator interface and other OSGi requirements which we use.

As you can see, we use our reference to the OSGi context to get a ServiceReference to an interface called “DispatchService”. We’ll examine the dispatch service in detail in my next posting, but for now you can see we hold on to our reference in a static field.

When the call to the “call” method happens, our DispatchService reference is already available if we’re running inside an OSGi container, all wired up and ready to go. To give us flexibility, though, we also allow for the case where the dispatch service is null, meaning we’re not running inside an OSGi container.

Instead of crashing and burning, however, we simply redirect our call to a “StaticDataService” class, which does just what you might expect. For every call it understands, it returns a static “canned” response. This allows us to build and test our UI without actually having written any of the back-end services, and to continue to run our Vaadin app with a simple “mvn jetty:run”, when all we’re working on is look and feel, or logic that only affects the UI.
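StaticDataService itself isn’t shown in this post; as a rough illustration (the service names and canned data below are assumptions), it amounts to something like this:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only -- the real StaticDataService isn't shown in the
// post. It returns canned responses keyed by service name and operation so
// the UI can be exercised with "mvn jetty:run" and no back-end running.
public class StaticDataService {

    public static List<Map> call(String serviceName, String operation, Map params,
                                 String versionPattern, String securityToken) {
        List<Map> response = new ArrayList<Map>();
        if ("dealer".equals(serviceName) && "save".equals(operation)) {
            Map result = new HashMap();
            result.put("status", "saved");
            response.add(result);
        }
        return response;
    }
}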

This means my “cycle time” to test a change in the UI code is a matter of seconds – when I do a “mvn jetty:run” my code is compiled and up and running in my browser in about 5 seconds, and that’s on my 5 year-old macbook laptop, so there’s no penalty for not having a dynamic language in the mix here, from my point of view.

If dispatchService is not null, however, then we’re running “for real” inside an OSGi container, so we use our stored reference to the dispatch service to “forward on” our call. The dispatch service works its magic, which we’ll examine in a later post, and returns a List of Map objects with our response. This list might only contain one Map, of course, if the service is a simple one.

The response is then returned to the caller elsewhere in the Vaadin application, to do whatever is necessary from a UI point of view – perhaps populate a form or table with the response data.

As we’ll see in detail in my next post, the dispatch service in this case acts as a “barrier” between the UI-oriented code in our Vaadin app and our domain-specific application logic contained in our services. It is responsible for mapping our generic map of parameters into whatever domain beans are used by our back-end services, and for figuring out which of those services should be called, and which operation on that service is required. Those services then return whatever domain-specific objects they return, and the dispatcher grinds them into non-type-bound maps, or lists of maps, if there is a whole series of returns.

This means our Vaadin app only ever has to talk to one thing: the dispatcher. We only change our Vaadin code for one reason: to change the UI, never in response to a change of service code, even if the beans and classes of that service change significantly.

Next time we’ll look at the dispatch service and the first of our application-specific domain-aware services, and see how they come together.

By: Mike Nash

First Look at Vaadin

November 22, 2009 8 comments

I’ve had the chance over a recent weekend to have a first crack at a web framework called Vaadin.

I was originally browsing for news about the latest release of Google’s GWT framework when I stumbled on a reference to Vaadin, and decided to go take a look. What I found intrigued me, and I decided to take it for a test drive, as I was down sick for a couple of days with a laptop nearby…. My back became annoyed with me, but it was worth it, I think.

First Look
First off, the practicalities: Vaadin is open source, and with a reasonable license, the Apache License. The essential bits of Vaadin are contained in a single JAR, and it’s both Ant and Maven friendly right out of the box.

The next thing that struck me about Vaadin was the documentation. The first unusual thing about its documentation was the fact of its existence, as open source projects are downright notorious for poor documentation. Vaadin is a pleasant exception, with tons of examples, a well-organized API doc in the usual JavaDoc format, and even the “Book of Vaadin”, an entire PDF book (also available in hardcopy) that takes you through Vaadin in enough detail to be immediately productive.

Given that surprisingly pleasant start, I dug deeper, creating a little app of my own.

Just the Java, Ma’am
The main thing that kept me interested in Vaadin once I started digging further was that it’s pure Java. Many frameworks, such as Wicket, talk about writing your UI in Java, but there’s still a corresponding template and some wiring that happens to put the code and the template together. Not so with Vaadin.

When they say “just Java”, they mean it – your entire UI layer is coded in Java, plain and simple. No templates, no tag libraries, no Javascript, no ‘nuthin. It’s reminiscent of the Echo framework, except in Vaadin’s case the Javascript library that your code automatically produces is Google’s GWT, instead of Echo’s own Core.JS library.
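To make that concrete, here is a bare-bones Vaadin 6 application (a minimal sketch for illustration, not the sample app described below), consisting of nothing but a single Java class:

import com.vaadin.Application;
import com.vaadin.ui.Button;
import com.vaadin.ui.Label;
import com.vaadin.ui.Window;

// Minimal sketch of a Vaadin 6 UI written entirely in Java: one application
// class, no templates, tag libraries or hand-written JavaScript.
public class HelloApplication extends Application {

    @Override
    public void init() {
        final Window main = new Window("Hello Vaadin");
        main.addComponent(new Label("The whole UI is plain Java."));
        main.addComponent(new Button("Click me", new Button.ClickListener() {
            public void buttonClick(Button.ClickEvent event) {
                // Runs server-side; Vaadin handles the Ajax plumbing.
                main.showNotification("Button clicked");
            }
        }));
        setMainWindow(main);
    }
}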

Unlike GWT, though, the Vaadin approach doesn’t bind you to any specific source code: it’s just a binary jar you put on your classpath.

The only things in my sample app, other than two Java files, were a web.xml and a CSS stylesheet, both of which were only a few lines long. And this was no “Hello, World”, either, but a rich Ajax webapp with a tree menu, fancy non-modal “fading” notifications, images, complex layouts, and a form with built-in validation. And it took maybe 4 hours of total work to produce – and that was from a standing start, as I’d never heard of Vaadin before last Thursday. Not bad, not bad at all.

I found I was able to get a very capable little webapp up and running with no need to invent my own components, even though I had trees and sliders and menus and other assorted goodies on the page. It worked in every browser I was able to try it in, which is certainly not the case for my own hand-rolled JavaScript most of the time 🙂

I haven’t yet tried creating my own custom components, but it certainly looks straightforward enough.

I did try linking to external resources, and included non-Vaadin pages in my app, with no difficulties, so it appears that Vaadin plays well with others, and can be introduced into an existing project that uses, for instance, a whack of JSP’s that one might want to obsolete.

Webapps
I think Vaadin warrants more exploration, and I intend to put it further through its paces in the next few weeks. It appears extremely well-suited to web applications, as opposed to websites with a tiny bit of dynamic stuff in them.

It offers an interesting alternative to some of the patterns I’ve seen for advanced dynamic webapp development so far.

One approach I’ve seen a lot is to divide the duties of creating an app into the “back end” services and the “UI”. Generally the UI is written in either JavaScript, or uses Flex or some other semi-proprietary approach. The “back end” stuff is frequently written to expose its services as REST, then the two are bolted together. The pain point here happens when the two meet, as it’s common and easy to have minor (or major!) misunderstandings between the two teams. This usually results in a lot of to-and-fro to work out the differences before the app comes all the way to life.

The other approach, more common on smaller or resource-strapped teams, is to have the same group responsible for both UI and back-end services. This reduces the thrash in the joints a bit, but doesn’t eliminate it, because the two technologies on the two sides of the app aren’t the same. You can’t test JavaScript the same way you test Java, for instance, and they’re two different languages – one of which (Java) has far better tooling support than the other. IDE support, for instance, is superb for Java, and spotty at best for JavaScript.

With Vaadin, both of these approaches become unnecessary, as it’s the same technology all the way through (at least, what you write is – technically it’s still using JavaScript, but because that’s generated, I don’t count it).

You get to use all of the tools you know and love for the back-end services to write the code for the UI, which you can then unit and functional test to your heart’s content.

The temptation to mix concerns between UI code and back-end service code must still be resisted, of course, but at least that code isn’t buried somewhere in the middle of a JSP page, ready to leap out and bite you later.

Because you’re using dynamic layouts, the app always fits properly on the screen without any extra work, addressing a pet peeve of mine, the “skinny” webapp, restraining itself to the least common denominator of screen size, thus rendering impotent my nice wide monitors.

Scala
Just because Vaadin is a Java library doesn’t restrict you to using Java to drive it, however. I made another little webapp where the whole UI was defined in Scala, calling the Vaadin APIs, and it worked like a charm. In some ways, Scala is an even better fit for Vaadin than straight Java, I suspect. I haven’t tried any other JVM compatible language, but I see no reason they wouldn’t work equally well.

Deployment and Development Cycle
As I was building the app with Maven, I added a couple of lines to my POM and was able to say “mvn jetty:run” to get my Vaadin app up and running on my local box in a few seconds. My development cycle was only a few seconds between compile and interactive tests, as I was experimenting with the trial-and-error method.

TDD would be not only possible, but easy in this situation.

I successfully deployed my little Vaadin app to ServiceMix, my OSGi container of choice, without a hitch.

Performance appeared excellent overall, although I haven’t formally tested it with a load-testing tool (yet).

Summary
So far, I’m impressed with Vaadin; more impressed, in fact, than I’ve been with any web framework I’ve worked with in a number of years. I’m sure there are some warts in there somewhere, but for the benefits it brings to the table, I suspect they’re easily worth it. I think the advantages for teams that already speak fluent Java are hard to overstate, and the productivity with which you can produce good-looking, functioning webapps is quite remarkable.

Over the next few weeks I’ll push it a bit harder with a complex example application, and see how it stacks up against other web app technologies I’ve worked with in a more realistic scenario.

By: Mike Nash

Specs and Selenium Together

August 25, 2009 1 comment

I recently had the chance to dive into a new project, this one with a rich web interface. In order to create acceptance tests around the (large and mostly untested) existing code, we’ve started writing acceptance tests with specs.

Once we have our specs written to express what the existing functionality is, we can refactor and work on the codebase in more safety, our tests acting as a “motion detector” to let us know if we’ve broken something, while we write more detailed low-level tests (unit tests) to allow easier refactoring of smaller pieces of the application.

What’s interesting about our latest batch of specs is that they are written to express behaviours as experienced through a web browser – e.g. “when a user goes to this link and clicks this button on the page, he sees something happen”. In order to make this work we’ve paired up specs with Selenium, a well-known web testing framework.

By abstracting out the connection to Selenium into a parent Scala object, we can build a DSL-ish testing language that lets us say things like this:


object AUserChangesLanguages extends BaseSpecification {

  "a public user who visits the site" should beAbleTo {
    "Change their language to French" in {
      open("/")
      select("languageSelect", "value=fr")
      waitForPage
      location must include("/fr/")
    }
    "Change their language to German" in {
      select("languageSelect", "value=de")
      waitForPage
      location must include("/de/")
    }
    "Change their language to Polish" in {
      select("languageSelect", "value=pl")
      waitForPage
      location must include("/pl/")
    }
  }
}

This code simply expresses that as a user selects a language from a drop-down of languages, the page should refresh (via some Javascript on the page) and redirect them to a new URL. The new URL contains the language code, so we can tell we’ve arrived at the right page by the “location must include…” line.

Simple and expressive, these tests can be run with your choice of browsers (e.g. Firefox, Safari, or, if you insist, Internet Explorer).

Of course, there’s lots more to testing web pages, and we’re fleshing out our DSL day by day as it needs to express more sophisticated interactions with the application.

We can get elements of the page (via Xpath), make assertions about their values, click on things, type things into fields and submit forms, basically all the operations a user might want to do with a web application.

There are some frustrations, of course. The XPath implementation in different browsers works a bit differently – well, ok, to be fair, it works on all browsers except Internet Exploder, where it fails in various frustrating ways. We’re working on ways to overcome this that don’t involve having any “if browser == ” kind of logic.

It’s also necessary to start the Selenium RC server before running the specs, but a bit of Ant magic fixes this.

We’ve got these specs running on our TeamCity continuous integration server, using the TeamCity runner supplied with Specs, where we get nicely formatted reports as to what’s pending (e.g. not finished being written yet), what’s passing, and what’s failing.

The specs written with Selenium this way are also a bit slow, as they must actually wait in some cases for the browser (and the underlying app!) to catch up. When run with IE as the browser, they’re more than just a bit slow, in fact…

They are, however, gratifyingly black-box, as they don’t have any connection to the code of the running application at all. For that matter, the application under test can be written in any language at all, and in this case is a combination of J2EE/JSP and some .NET.

There’s a lot of promise in this type of testing, even with its occasional frustrations and limitations, and I suspect we’ll be doing a lot more of it.

By: Mike Nash

Scala Continuous Testing with sbt

July 27, 2009 Comments off

I’ve recently had occasion to start an open source project, and the correct tool for the job appears to be Scala.

So far the project is going well, but the pain has been around the build and IDE support for rapid and convenient development in Scala. Although all three of the major IDEs I’ve worked with recently (Eclipse, IntelliJ IDEA and Netbeans) have plugins for Scala, they are all early releases, and have various degrees of pain associated with them.

I ended up using Netbeans for editing and as a subversion client, then building with Maven when I wanted to compile and/or run tests. Calling Maven from within Netbeans to build a Scala project is still a bit creaky, so I was doing it from a terminal window directly.

This is very inconvenient, for a number of reasons. First, I’m working in a Behaviour-Driven Development mode, using specs as my BDD framework. This means I first write a specification in specs, see it fail, then write the code necessary to make it pass, then write (or extend) the next specification for the next behaviour I want, and so forth.

When I wanted to run a test, I had to flip to a command window and issue a Maven command to build and run the specified test, something like this:


mvn -Dtest=foo test

In order to make this work I had to declare my specs as JUnit tests (with the @Test annotation), even though they don’t use anything else from JUnit. This felt like a bit of a hack, albeit a useful one. Another pain point was the startup time for Maven (although I understand there’s a “console” plugin for Maven as well that can perhaps reduce this particular pain).

As I like to tinker with new stuff, I thought I’d make a departure from Maven and give sbt a try. sbt is a build tool written in Scala that supports building both Scala and Java (and mixed) projects in a very simple way. Unlike Ant, though, there’s no up-front pain of writing a build script, as sbt makes reasonable assumptions (which you can override) about where to find your classes and libraries, so you hit the ground running.

In literally seconds I was up and running after following the install instructions on the sbt site. After a bit of experimenting I found the “console” mode in sbt, where you launch sbt and leave it running.

Once in console mode you can either just type “test” every time you want to build and run all tests, or be more selective, and run only the tests that failed last time, or just a single specified test, if you’re working on just one feature. Any of these operations are fast – mostly because sbt is already loaded and running, but also because sbt does a bit less work then Maven does on every build.

Although sbt can be configured to work in conjunction with Ivy or Maven repositories, you can also just drop your dependency libs into the “lib” directory in your project. For open source this is rather nice, as it saves the user of the project the trouble of trying to find them. Even supplying a Maven POM that specifies the repositories from which to download your dependencies is not a guarantee, as repositories change over time. Many is the time I’ve gone to download a dependency (or rather, Maven has gone to do it for me), only to find it’s not where it used to be, has a different name or version, or fails my build for some other reason. Like Ant, sbt can avoid this problem by keeping dependencies locally. Unlike Ant, it can also go get the dependencies the first time for you from the same repositories Maven uses – perhaps giving you the best of both worlds in some situations.

Even more interesting was the command


~ test

Which runs all the tests, then waits for any source code to change (test or main code). When it sees a change, it runs all the tests again (after compiling the changes, of course). Poor man’s continuous testing 🙂

Wait, it gets even awesomer! When you say


~ test SomeTest

sbt will wait for any changes, then run just the specified test. This is ideal when you know you’re only working on a specific set of functionality (and therefore affecting only a single test). When sbt is waiting, you can just hit any key to return to the interactive mode, so it’s easy to change it from one of these modes to another.

Other commands in sbt are also very familiar and quick, such as “compile”, which does exactly as you’d expect from the name. “Package” is another good one – it produces a jar artifact, just like the Maven command of the same name. I haven’t yet tried its deploy mechanisms properly, but early results look promising.

I also like the “console” command, which runs the Scala command-line console, but with your project on the classpath, along with all its dependencies. This lets you do ad-hoc statements quickly and easily, and see the results right away. When you’re not sure what’s going on with a failing spec, I’ve found this mode very helpful for experimenting. Scala is such an expressive language that I can write a quick experiment in one or two lines of code, see the result (as the Scala console also evaluates expressions by default), and go back to coding and testing, all without restarting sbt. Quite nice, and somewhat reminiscent of the similar functionality in Rails and “irb” (and JRuby’s equivalent, jirb).

There are many other things I’ve found about sbt that I like so far, but those are topics for another post later on….

By: Mike Nash

Levels of Testing

June 24, 2009 Comments off

I’ve had reason recently to do some thinking on the various “levels” of software testing. I think there’s a rough hierarchy here, but there’s some debate about the naming and terminology in some cases. The general principles are pretty well accepted, however, and I’d like to list them here and expound on what I think each level is all about.

An important concern in each of these levels is to achieve as high a level of automation as possible, along with some mechanism to report to the developers (or other stakeholders, as required) when tests are failing, in a way that doesn’t require them to go somewhere and look at something. I’m a big fan of flashing red lights and loud sirens, myself 🙂

Unit
Unit testing is one of the most common, and yet in many ways, misunderstood levels of test. I’ve got a separate rant/discussion in the works about TDD (and BDD), but suffice it to say that unit-level testing is a fundamental of test-driven development.

A unit test should test one class (at most – perhaps only part of a class). All other dependencies should be either mocked or stubbed out. If you are using Spring to autowire classes into your test, it’s definitely not a unit test – it’s at least a functional or integration test. There should be no databases or external storage involved – all of those are external and superfluous to a single class that you’re trying to verify is doing the right thing.
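As an illustration (the classes below are invented for the example rather than taken from any particular project), a unit test at this level looks something like the following, with the single collaborator mocked out:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Sketch of a unit-level test: one made-up class under test, with its single
// collaborator mocked so no database or external service is involved.
public class PriceCalculatorTest {

    interface TaxRateService {
        double rateFor(String region);
    }

    static class PriceCalculator {
        private final TaxRateService taxRates;

        PriceCalculator(TaxRateService taxRates) {
            this.taxRates = taxRates;
        }

        double totalFor(double subtotal, String region) {
            return subtotal * (1 + taxRates.rateFor(region));
        }
    }

    @Test
    public void addsTaxForTheRegion() {
        TaxRateService rates = mock(TaxRateService.class);
        when(rates.rateFor("SK")).thenReturn(0.05);

        PriceCalculator calculator = new PriceCalculator(rates);

        assertEquals(105.0, calculator.totalFor(100.0, "SK"), 0.001);
    }
}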

Another reason to write comprehensive unit tests is that it’s the easiest place to fix a bug: there are fewer moving parts, and when a simple unit tests breaks it should be entirely clear what’s wrong and what needs to be fixed and how to fix it.

As you go up the stack to more and more complex levels of testing, it becomes harder and harder to tell what broke and how to fix it.

Generally unit tests for any given module are executed as part of every build before a developer checks in code – sometimes this will include some functional tests as well, but it’s generally a bad idea for any higher-level tests to be run before each and every check-in (due to the impact on developer cycle time). Instead, you let your CI server handle that, often on a scheduled basis.

Functional
Some people suggest that functional and integration are not two separate types, but I’m separating them here. The key differentiation is that a functional test will likely span a number of classes in a single module, but not involve more than one executable unit. It likely will involve a few classes from within a single classpath space (e.g. from within a single jar or such). In the Java world (or other JVM-hosted languages), this means that a functional test is contained within a single VM instance.

This level might include tests that involve a database layer with an in-memory database, such as hypersonic – but they don’t use an *external* service, like MySQL – that would be an integration test, which we explore next.

Generally in a functional test we are not concerned with the low-level sequence of method or function calls, like we might be in a unit test. Instead, we’re doing more “black box” testing at this level, making sure that when we pour in the right inputs we get the right outputs out, and that when we supply invalid input that an appropriate level of error handling occurs, again, all within a single executable chunk.
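As a rough sketch (the DealerStore class and its schema are invented for illustration), a functional-level test against an in-memory HSQLDB database might look like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Sketch of a functional-level test: a made-up DealerStore is exercised end
// to end within one JVM, against an in-memory HSQLDB database rather than an
// external database server.
public class DealerStoreFunctionalTest {

    static class DealerStore {
        private final Connection connection;

        DealerStore(Connection connection) throws Exception {
            this.connection = connection;
            connection.createStatement().execute(
                "CREATE TABLE dealer (id INTEGER IDENTITY, name VARCHAR(100))");
        }

        void save(String name) throws Exception {
            PreparedStatement insert =
                connection.prepareStatement("INSERT INTO dealer (name) VALUES (?)");
            insert.setString(1, name);
            insert.executeUpdate();
        }

        int count() throws Exception {
            ResultSet rs = connection.createStatement()
                .executeQuery("SELECT COUNT(*) FROM dealer");
            rs.next();
            return rs.getInt(1);
        }
    }

    @Test
    public void savesAndCountsDealers() throws Exception {
        // Older HSQLDB jars need the driver loaded explicitly.
        Class.forName("org.hsqldb.jdbcDriver");
        Connection connection =
            DriverManager.getConnection("jdbc:hsqldb:mem:dealers", "sa", "");

        DealerStore store = new DealerStore(connection);
        store.save("Example Dealer");

        assertEquals(1, store.count());
    }
}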

Integration
As soon as you have a test that requires more than one executable to be running in order to test, it’s an integration test of some sort. This includes all tests that verify API contracts between REST or SOAP services, for instance, or anything that talks to an out-of-process database (as then you’re testing the integration between your app and the database server).

Ideally, this level of test should verify *just* the integration, not repeat the functionality of the unit tests exhaustively, otherwise they are redundant and not DRY.

In other words, you should be checking that the one service connects to the other, makes valid requests and gets valid responses, not comprehensively testing the content of the request or response – that’s what the unit and functional tests are for.

An example of an integration test is one where you fire up a copy of your application with an actual database engine and verify that the operation of your persistence layer is as expected, or where you start the client and server of your REST service and ensure that they exchange messages the way you wanted.

Acceptance
Acceptance tests often take the same form as a functional or integration test, but the author and audience are usually different: in this case an acceptance test should be authored by the story originator (the customer proxy, sometimes a business analyst), and should represent a narrative sequence of exercising various application functionality.

They are again not exhaustive in the way that unit tests attempt to be in that they don’t necessarily need to exercise all of the code, just the code required to support the narrative defined by a series of stories.

Fitnesse, Specs, Easyb, RSpec and Green Pepper are all tools designed to assist with this kind of testing.

Concurrency
If your application or service is designed to be used by more than one client or user, then it should be tested for concurrency. This is a test that simulates simultaneous concurrent load over a short period of time, and ensures that the replies from the service remain successful under that load.

For a concurrency test, we might verify just that the response contains some valid information, and not an error, as opposed to validating every element of the response as being correct (as this would again be an overlap with other layers of testing, and hence be redundant).
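A minimal sketch of that idea (the service, thread counts, and assertion are assumptions for illustration), using plain java.util.concurrent to generate the simultaneous load:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Sketch of a concurrency test: fire a burst of simultaneous calls at a
// made-up service and check that every reply is a non-error response,
// without validating every field of each reply.
public class GreetingServiceConcurrencyTest {

    static class GreetingService {
        String greet(String name) {
            return "Hello, " + name;
        }
    }

    @Test
    public void handlesSimultaneousClients() throws Exception {
        final GreetingService service = new GreetingService();
        ExecutorService pool = Executors.newFixedThreadPool(50);
        List<Future<String>> replies = new ArrayList<Future<String>>();

        for (int i = 0; i < 200; i++) {
            final String name = "client-" + i;
            replies.add(pool.submit(new Callable<String>() {
                public String call() {
                    return service.greet(name);
                }
            }));
        }

        for (Future<String> reply : replies) {
            // Future.get() rethrows any exception from the worker thread,
            // so an error in the service fails the test.
            assertTrue(reply.get().startsWith("Hello"));
        }
        pool.shutdown();
    }
}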

Performance
Performance, not to be confused with load and scalability, is a timing-based test. This is where you load your application with requests (either concurrent or sequential, depending on its intended purpose) and ensure that the requests receive a response within a specified time frame (for interactive apps, a common rule is the “two second rule”, as it’s thought that users will tolerate a delay up to that level).

It’s important that performance tests be run singly and on an isolated system under a known load, or you will never get consistency from them.

Performance can be measured at various levels, but is most commonly checked at the integration or functional levels.

Load/Scalability
A close relative of, but not identical with concurrency tests are load and/or scalability tests. This is where you (automatically) pound on the app under a simulated user (or client) load, ideally more than it will experience in production, and make sure that it does not break. At this point you’re not concerned with how slow it goes, only that it doesn’t break – e.g. that you *can* scale, not that you can scale linearly or on any other performance curve.

Quality Assurance
Many Agile and Lean teams eschew a formal quality assurance group, and the testing such a group does, in favor of the concept of “built-in” QA. Quality assurance, however, goes far beyond determining whether the software performs as expected. I have a detailed post in the works that talks about how else we can measure the quality of the software we produce, as it’s a topic unto itself.

Alpha/Beta deployments
Not strictly testing at all, the deployment of alpha or beta versions of an application nonetheless relates to testing, even though it is far less formalized and rigorous than mechanized testing.

This is a good place to collect more subjective measures such as usability and perceived responsiveness.

Manual Tests
The bane of every agile project, manual tests should be avoided like the undying plague, IMO. Even the most obscure user interface has an automated tool for scripting the testing of the actual user experience – if nothing else, you should be recording any manual tests with such a tool, so that when it’s done you can “replay” the test without further manual interaction.

At each level of testing here, remember, have confidence in your tests and keep it DRY. Don’t test the same thing over and over again on purpose, let the natural overlap between the layers catch problems at the appropriate level, and when you find a problem, drive the test for it down as low as possible on this stack – ideally right back to the unit test level.

If all of your unit tests were perfect and passing, you’d theoretically never see a failing test at any of the other levels. I’ve never seen that kind of testing nirvana achieved entirely, but I’ve seen projects come close – and those were projects with a defect rate so low it was considered practically unattainable by other teams, yet it was done at a cost that was entirely reasonable.

By: Mike Nash