Archive

Posts Tagged ‘java’

Easy Java toString() methods using Apache commons-lang ToStringBuilder

May 21, 2010 2 comments

One of the disciplines I try to maintain while developing Java code is always overriding the default toString, equals, and hashCode methods. Most modern IDEs have a way to generate these methods. My IDE of choice, IntelliJ IDEA, does have this functionality, but I find its implementation sub-par. Here is an example of an IntelliJ-generated toString method for a class…

@Override
public String toString() {
    return "Person{" +
        "name=" + name +
        ", age=" + age +
        '}';
}

My biggest problems with this implementation are:

  1. Hard-coded, static strings for class and field names make the excellent refactoring features of IntelliJ less reliable.
  2. The chain of string concatenations is harder to read than a builder-style API (and although the compiler turns a single concatenation expression into StringBuilder calls, hand-edited variants of this code often end up less efficient).
  3. Only fields of this class can be used; all inherited fields, even protected ones, are not included.

This was just not good enough. I started to investigate whether anyone had written a library that could reflectively look at an object and produce a complete (private and inherited fields included) and consistent toString representation. Sure enough, I discovered that one of the Apache Commons libraries provides exactly this functionality. The commons-lang project has a class called ToStringBuilder with a static method that uses reflection to generate a string containing all of a class's fields. The string's format is consistent and can be controlled by the ToStringStyle class, which has several useful formats built in and also lets you implement your own. You can now write any object's toString method as follows:

@Override
public String toString() {
    return ToStringBuilder.reflectionToString(this);
}
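To see why this works even for private and inherited fields, here is a rough plain-Java sketch of the idea behind a reflective toString. This is my own illustration, not the actual commons-lang implementation, and its output format is simpler than the library's default style:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class ReflectiveToString {

    // Walks the class hierarchy with reflection and appends every
    // instance field, including private and inherited ones --
    // roughly the idea behind ToStringBuilder.reflectionToString.
    public static String toString(Object obj) {
        StringBuilder sb = new StringBuilder(obj.getClass().getSimpleName());
        sb.append('[');
        boolean first = true;
        for (Class<?> c = obj.getClass(); c != null && c != Object.class;
                c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                // skip statics and compiler-generated fields, much as
                // the real builder does by default
                if (Modifier.isStatic(f.getModifiers()) || f.isSynthetic()) {
                    continue;
                }
                f.setAccessible(true); // reach private fields too
                try {
                    if (!first) {
                        sb.append(',');
                    }
                    sb.append(f.getName()).append('=').append(f.get(obj));
                    first = false;
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return sb.append(']').toString();
    }
}
```

For a Person with name and age fields this produces something like Person[name=Joe,age=30]; the library's ToStringStyle classes control the same kind of output more flexibly.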

If you are using maven as your project management tool, you can add the following dependency into your pom.xml file in order to be able to use the commons-lang library in your project.

<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.5</version>
</dependency>

Normally I would try to avoid reflection in production code when possible due to its reported slower performance; however, I find the only time I actually use toString methods is in debugging or testing, where speed is not my number one concern. If you don't feel reflection is a good solution for you, the ToStringBuilder class also has several instance methods following the traditional 'builder' pattern, which let you manually choose the fields included in the string representation of your object. Here is an example of that usage pattern:

public String toString() {
    return new ToStringBuilder(this).
        append("name", name).
        append("age", age).
        append("smoker", smoker).
        toString();
}

This usage shares some of the problems of the original implementation, such as requiring the field identifiers to be static strings. It does, however, ensure a consistent format for the resulting string, and it can include any superclass's toString result via the appendSuper() method.
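The appendSuper() idea can be shown in plain Java, if we strip away the library: the subclass's toString folds in the superclass's representation. The Animal/Person classes below are invented purely for illustration:

```java
// Hypothetical classes illustrating the appendSuper() idea without the library.
class Animal {
    protected String species = "human";

    @Override
    public String toString() {
        return "species=" + species;
    }
}

class Person extends Animal {
    private String name = "Joe";

    @Override
    public String toString() {
        // equivalent in spirit to:
        //   new ToStringBuilder(this).appendSuper(super.toString())
        //       .append("name", name).toString()
        return "Person[" + super.toString() + ",name=" + name + "]";
    }
}
```

With the real builder, appendSuper(super.toString()) merges the parent's output into the same consistent format instead of concatenating by hand.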

If you are using Eclipse or NetBeans, I understand there are plugins which can generate very good toString representations, but this functionality is unfortunately not enough for me to give up the refactoring support IntelliJ provides. The above solution also helps if the developers on a team use different IDEs; using a library to build toString representations is better than IDE-dependent tools since it maintains consistency between environments.

The ToStringBuilder class is just one of many useful utility classes provided by the commons-lang library and the commons-lang library is just one of the useful libraries provided by the Apache Commons project. Make sure to check it out anytime you find yourself writing some code and asking yourself “Hasn’t someone done this already?”

OSGi Adventures, Part 1 – A Vaadin front-end to OSGi Services

December 1, 2009 1 comment

Extending my recent experiments with the Vaadin framework, I decided I wanted to have a Vaadin front-end talking to a set of OSGi services on the back end. Initially, these will be running within the same OSGi container, which in this case is FUSE 4, the commercially supported variant of Apache ServiceMix.

One of my goals was to achieve a loose coupling between the Vaadin webapp and the backing services, so that new services can readily be added, started, stopped, and updated, all without any impact on the running Vaadin app. I also wanted to maintain the convenience of being able to run and tinker with the UI portion of my app by just doing a “mvn jetty:run”, so the app needed to be able to start even if it wasn’t inside the OSGi container.

Fortunately, doing all this is pretty easy, and in the next series of articles I’ll describe how I went about it, and where the good parts and bad parts of such an approach became obvious.

In this part, we’ll start by describing the Vaadin app, and how it calls the back-end services. In later parts, I’ll describe the evolution of the back-end services themselves, as I experimented with more sophisticated techniques.

Vaadin Dependency
I’m building all my apps with Apache Maven, so the first step was to create a POM file suitable for Vaadin. Fortunately, Vaadin is a single jar file, and trivial to add to the classpath. My POM needed this dependency:


<dependency>
  <groupId>vaadin</groupId>
  <artifactId>vaadin</artifactId>
  <version>6.1.3</version>
  <scope>system</scope>
  <systemPath>${basedir}/src/main/webapp/WEB-INF/lib/vaadin-6.1.3.jar</systemPath>
</dependency>

Here I’m using the trick of specifying a systemPath for my jar, instead of retrieving it on demand from a local Nexus repository or from the internet, but that’s just one way of doing it – the main thing is to get this one Vaadin jar on your path.
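If you would rather resolve the jar from a repository than keep it under WEB-INF/lib, Vaadin 6 is also published as a regular Maven artifact; the coordinates below are from memory, so verify them against your repository before relying on them:

```xml
<dependency>
  <groupId>com.vaadin</groupId>
  <artifactId>vaadin</artifactId>
  <version>6.1.3</version>
</dependency>
```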

web.xml
Next I proceeded to tweak my web.xml to make my top-level Vaadin application object available on a URL. The main Vaadin object is an extension of a Servlet, so this is also very easy – here's my web.xml in its entirety:

     
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    id="WebApp_ID" version="2.5">
  <display-name>Admin</display-name>
  <servlet>
    <servlet-name>Admin</servlet-name>
    <servlet-class>com.vaadin.terminal.gwt.server.ApplicationServlet</servlet-class>
    <init-param>
      <param-name>application</param-name>
      <param-value>Admin</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>Admin</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>

In this situation my “Admin” class is in the default package, which is not generally a good practice, but you get the idea.

Menu and MenuDispatcher
I then used the “Tree” class in Vaadin to build myself a nice tree-style menu, and added that to the layout on my main page. My main page has some chrome regions for a top banner and other assorted visual aids, then a left-side area where the menu lives, and a “main” area, where all the real action in the application will happen.

A class I called MenuDispatcher “listens” for events on the menu (e.g. the user clicking something), and does the appropriate action when a certain menu item is clicked.

Here are the interesting bits from the MenuDispatcher – as you can see, it's constructed with a reference to the “mainArea” layout when it's first initialized.

     
public class MenuDispatcher implements ItemClickEvent.ItemClickListener {

    private VerticalLayout mainArea;

    public MenuDispatcher(VerticalLayout mainArea) {
        this.mainArea = mainArea;
    }

    public void itemClick(ItemClickEvent event) {
        if (event.getButton() == ItemClickEvent.BUTTON_LEFT) {
            String selected = event.getItemId().toString();

            if (selected.equals("create dealer")) {
                createDealer();
            } else if (selected.equals("edit dealers")) {
                editDealer();
            }
            ...

            System.err.println("Selected " + event.getItemId());
        }
    }

    private void createDealer() {
        mainArea.removeAllComponents();
        Component form = new CreateDealerForm();
        mainArea.addComponent(form);
        mainArea.setComponentAlignment(form, Alignment.MIDDLE_CENTER);
        mainArea.requestRepaint();
    }

    private void editDealer() {
        ...
    }

    ...
}

Again, this code can be made more sophisticated – I’m thinking a little Spring magic could make adding new forms and such very convenient, but this gets us started.

Submitting the Form
The “CreateDealerForm” object referred to in the Dispatcher then builds a Vaadin “Form” class, much like the example form built in the “Book of Vaadin”. The only interesting part of my form was that I chose not to back it with a bean class, which is an option with Vaadin forms. If you back the form with a bean, you essentially bind the form to the bean, and the form fields are generated for you from the bean.

If I wanted to then send the corresponding bean to the back-end service, then binding the bean to the form would be a good way to go. Instead, however, I don’t want my back-end services to be sharing beans with my UI application at all. I’ll explain why and how later on in more detail.

The interesting part of my form, then, is how I handle the submission of the form:

Assuming I have a button on my form, defined like so:

     
Button okbutton = new Button("Submit", dealerForm, "commit");

I can add a listener to this button (again, using Vaadin’s magic ability to route the Ajax events to my Java code) like thus:

     
okbutton.addListener(new Button.ClickListener() {
    public void buttonClick(Button.ClickEvent event) {
        Map parameters = new HashMap();
        for (Object id : dealerForm.getItemPropertyIds()) {
            Field field = dealerForm.getField(id);
            parameters.put(id.toString(), field.getValue());
        }
        ServiceClient.call("dealer", "save", parameters, null, null);
        getWindow().showNotification("Dealer Saved");
    }
});

I’m using an anonymous inner class to listen to the event, and the “buttonClick” method gets called when the user says “Submit”.

The next steps are where the form meets the back-end service: First, I iterate over the form and build a map containing all the field values. The map is keyed with a string, for the name of the field or property, and the value in the map is the value the user entered. Note that these values are already typed – e.g. a checkbox can have a boolean, a TextField can have a string, a calendar field can have a java.util.Date. We retain these types, and wrap everything up in a map.

Now (after a quick println so I can see what's going on), I call a static method on a class called ServiceClient, sending along the name of the service I want to call, the operation on that service, and the parameter map I just built from my form.

The last line just shows a nice “fade away” non-modal notification to the user that the dealer he entered has been saved (assuming the call to ServiceClient didn’t throw an exception, which we’re assuming for the moment for simplicity).

So, now we have our “call” method to consider, where the map of values we built in our Vaadin front-end gets handed off to the appropriate back-end service.

Calling the Service

The only job of the ServiceClient object is to route a call from somewhere in our Vaadin app (in this case, the submission of a form) to the proper back-end service, and potentially return a response.

We identify our back-end services by a simple string, the “service name” (our first argument). The second argument to call tells us the “operation” we want from this service, as a single service can normally perform several different operations. For instance, our dealer service might have the ability to save a dealer, get a list of dealers, find a specific dealer, and so forth.

In Java parlance, a service operation might correspond to a method on the service interface, but we’re not coupling that tightly at this point – our service, after all, might not be written in Java for all we know at this point, and I prefer to keep it that way.

This is the “loose joint” in the mechanism between the UI and the back-end. To keep the joint loose, we don’t send a predefined Bean class to the back-end service to define the parameters to the service operation, we send a map, where that map contains a set of key/value pairs. The keys are always Strings, but the values can be any type – possibly even another map, for instance, which would allow us to express quite complex structures if required, in a type-independent fashion.
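As a concrete illustration of this map-based contract, a "save dealer" call might carry parameters like the ones below. The field names and values are invented for the example; only the shape of the structure matters:

```java
import java.util.HashMap;
import java.util.Map;

public class ParameterMapExample {

    // Builds a parameter map of the kind passed to the back-end service:
    // string keys, naturally-typed values, nested maps for structure.
    public static Map<String, Object> dealerParameters() {
        Map<String, Object> address = new HashMap<String, Object>();
        address.put("street", "123 Main St");
        address.put("city", "Saskatoon");

        Map<String, Object> params = new HashMap<String, Object>();
        params.put("name", "Acme Motors");
        params.put("active", Boolean.TRUE);              // e.g. a checkbox value
        params.put("established", new java.util.Date()); // e.g. a calendar field
        params.put("address", address);                  // a nested map for structure
        return params;
    }
}
```

Nothing here depends on a shared bean class, so the back-end is free to map these entries onto whatever domain objects (or non-Java structures) it likes.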

Let’s have a look at ServiceClient:

     
public class ServiceClient implements BundleActivator {

    private static DispatchService dispatchService = null;

    public void start(BundleContext context) throws Exception {
        ServiceReference reference = context.getServiceReference(
                DispatchService.class.getName());
        if (reference == null)
            throw new RuntimeException("Cannot find the dispatch service");
        dispatchService = (DispatchService) context.getService(reference);
        if (dispatchService == null)
            throw new RuntimeException("Didn't find dispatch service");
    }

    public void stop(BundleContext context) throws Exception {
        System.out.println("Stopping bundle");
    }

    public static List<Map> call(String serviceName, String operation,
            Map params, String versionPattern, String securityToken)
            throws Exception {
        if (dispatchService == null) {
            System.out.println("No OSGi dispatch service available"
                    + " - using dummy static data");
            return StaticDataService.call(serviceName, operation,
                    params, versionPattern, securityToken);
        }
        return dispatchService.call(serviceName, operation,
                params, versionPattern, securityToken);
    }
}

Let's go through that class piece by piece. First, you'll notice that the class implements the BundleActivator interface – this tells the OSGi container that it can call this class when the OSGi bundle containing it is started and stopped. During the start process, the class receives a reference to the BundleContext. This is somewhat analogous to the Spring ApplicationContext, in that it gives our class access to the other services deployed in OSGi. The Spring DM framework lets you do this kind of thing more declaratively, but I always find it's good to know how the low level works before engaging the autopilot.

In order to allow BundleActivator to be found, we need another couple of things on our classpath, so we add this to our POM:

     
<dependency>
  <groupId>org.osgi</groupId>
  <artifactId>osgi_R4_core</artifactId>
  <version>1.0</version>
</dependency>

<dependency>
  <groupId>org.osgi</groupId>
  <artifactId>osgi_R4_compendium</artifactId>
  <version>1.0</version>
</dependency>

This defines the BundleActivator interface and other OSGi requirements which we use.

As you can see, we use our reference to the OSGi context to get a ServiceReference to an interface called “DispatchService”. We’ll examine dispatch service in detail in my next posting, but for now you can see we hold on to our reference as an instance variable.

When the call to the "call" method happens, our DispatchService reference is already available if we're running inside an OSGi container, all wired up and ready to go. To give us flexibility, though, we also allow for the case where the dispatch service is null, meaning we're not running inside an OSGi container.

Instead of crashing and burning, however, we simply redirect our call to a “StaticDataService” class, which does just what you might expect. For every call it understands, it returns a static “canned” response. This allows us to build and test our UI without actually having written any of the back-end services, and to continue to run our Vaadin app with a simple “mvn jetty:run”, when all we’re working on is look and feel, or logic that only affects the UI.
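A StaticDataService along these lines is easy to sketch. The class below is my reconstruction of the idea, not the original code: canned responses keyed by service name and operation, with the same signature as the real call:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StaticDataService {

    // Canned responses, keyed by "serviceName/operation".
    private static final Map<String, List<Map>> CANNED =
            new HashMap<String, List<Map>>();

    static {
        // One canned dealer for the hypothetical "dealer/list" call.
        Map dealer = new HashMap();
        dealer.put("name", "Test Dealer");
        CANNED.put("dealer/list", Collections.singletonList(dealer));
    }

    // Same shape as the real dispatch call, so ServiceClient can
    // fall back to it transparently outside the OSGi container.
    public static List<Map> call(String serviceName, String operation,
            Map params, String versionPattern, String securityToken) {
        List<Map> response = CANNED.get(serviceName + "/" + operation);
        if (response == null) {
            return new ArrayList<Map>(); // unknown call: empty result
        }
        return response;
    }
}
```

Because the UI only ever sees lists of maps, the canned data is indistinguishable from a real back-end response.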

This means my “cycle time” to test a change in the UI code is a matter of seconds – when I do a “mvn jetty:run” my code is compiled and up and running in my browser in about 5 seconds, and that's on my five-year-old MacBook laptop, so from my point of view there's no penalty for not having a dynamic language in the mix here.

If the dispatch service is not null, however, then we're running "for real" inside an OSGi container, so we use our stored reference to the dispatch service to forward on our call. The dispatch service works its magic, which we'll examine in a later post, and returns a List of Map objects with our response. This list might contain only one Map, of course, if the service was a simple one.

The response is then returned to the caller elsewhere in the Vaadin application, to do whatever is necessary from a UI point of view – perhaps populate a form or table with the response data.

As we’ll see in detail in my next post, the dispatch service in this case acts as a “barrier” between the UI-oriented code in our Vaadin app and our domain-specific application logic contained in our services. It is responsible for mapping our generic map of parameters into whatever domain beans are used by our back-end services, and for figuring out which of those services should be called, and which operation on that service is required. Those services then return whatever domain-specific objects they return, and the dispatcher grinds them into non-type-bound maps, or lists of maps, if there is a whole series of returns.
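The "grinding" of domain beans into maps can itself be done generically. Here is a sketch using the JavaBeans introspector; this is my own illustration of the technique, not necessarily how the real dispatcher does it:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class BeanGrinder {

    // Turns any JavaBean into a name/value map via its getters,
    // dropping the type binding between service and UI.
    public static Map<String, Object> toMap(Object bean) {
        Map<String, Object> map = new HashMap<String, Object>();
        try {
            // stop at Object so we don't pick up getClass()
            BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                Method getter = pd.getReadMethod();
                if (getter != null) {
                    getter.setAccessible(true); // bean class may not be public
                    map.put(pd.getName(), getter.invoke(bean));
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return map;
    }
}
```

Nested beans would need a recursive pass to become nested maps, but the principle is the same: the caller never sees the bean's class.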

This means our Vaadin app only ever has to talk to one thing: the dispatcher. We only change our Vaadin code for one reason: to change the UI, never in response to a change of service code, even if the beans and classes of that service change significantly.

Next time we’ll look at the dispatch service and the first of our application-specific domain-aware services, and see how they come together.

By: Mike Nash

First Look at Vaadin

November 22, 2009 8 comments

I’ve had the chance over a recent weekend to have a first crack at a web framework called Vaadin.

I was originally browsing for news about the latest release of Google’s GWT framework when I stumbled on a reference to Vaadin, and decided to go take a look. What I found intrigued me, and I decided to take it for a test drive, as I was down sick for a couple of days with a laptop nearby…. My back became annoyed with me, but it was worth it, I think.

First Look
First off, the practicalities: Vaadin is open source, and with a reasonable license, the Apache License. The essential bits of Vaadin are contained in a single JAR, and it’s both Ant and Maven friendly right out of the box.

The next thing that struck me about Vaadin was the documentation. The first unusual thing about its documentation was the fact of its existence, as open source projects are downright notorious for poor documentation. Vaadin is a pleasant exception, with tons of examples, a well-organized API doc in the usual JavaDoc format, and even the "Book of Vaadin", an entire PDF book (also available in hardcopy) that takes you through Vaadin in enough detail to be immediately productive.

Given that surprisingly pleasant start, I dug deeper, creating a little app of my own.

Just the Java, Ma’am
The main thing that kept me interested in Vaadin once I started digging further was that it's pure Java. Many frameworks, such as Wicket, talk about writing your UI in Java, but there's still a corresponding template and some wiring that puts the code and the template together. Not so with Vaadin.

When they say “just Java”, they mean it – your entire UI layer is coded in Java, plain and simple. No templates, no tag libraries, no Javascript, no ‘nuthin. It’s reminiscent of the Echo framework, except in Vaadin’s case the Javascript library that your code automatically produces is Google’s GWT, instead of Echo’s own Core.JS library.

Unlike GWT, though, the Vaadin approach doesn't bind you to any specific source code – it's just a binary jar you put on your classpath.

The only thing in my sample app, other than 2 Java files, was a web.xml and a css stylesheet, both of which were only a few lines long. And this was no "Hello, World", either, but a rich AJAX webapp with a tree menu, fancy non-modal "fading" notifications, images, complex layouts, and a form with built-in validation. And it took maybe 4 hours of total work to produce – and that was from a standing start, as I'd never heard of Vaadin before last Thursday. Not bad, not bad at all.

I found I was able to get a very capable little webapp up and running with no need to invent my own components, even though I had trees and sliders and menus and other assorted goodies on the page. It worked in every browser I was able to try it in, which is certainly not the case for my own hand-rolled JavaScript most of the time 🙂

I haven’t yet tried creating my own custom components, but it certainly looks straightforward enough.

I did try linking to external resources, and included non-Vaadin pages in my app, with no difficulties, so it appears that Vaadin plays well with others, and can be introduced into an existing project that uses, for instance, a whack of JSP’s that one might want to obsolete.

Webapps
I think Vaadin warrants more exploration, and I intend to put it further through its paces in the next few weeks. It appears extremely well-suited to web applications, as opposed to websites with a tiny bit of dynamic stuff in them.

It offers an interesting alternative to some of the patterns I’ve seen for advanced dynamic webapp development so far.

One approach I've seen a lot is to divide the duties of creating an app into the "back end" services and the "UI". Generally the UI is written in either JavaScript, or uses Flex or some other semi-proprietary approach. The "back end" stuff is frequently written to expose its services as REST, then the two are bolted together. The pain point here happens when the two meet, as it's common and easy to have minor (or major!) misunderstandings between the two teams. This usually results in a lot of to-and-fro to work out the differences before the app comes all the way to life.

The other approach, more common on smaller or resource-strapped teams, is to have the same group responsible for both UI and back-end services. This reduces the thrash in the joints a bit, but doesn't eliminate it, because the two technologies on the two sides of the app aren't the same. You can't test JavaScript the same way you test Java, for instance, and they're two different languages – one of which (Java) has far better tooling support than the other. IDE support, for instance, is superb for Java, and spotty at best for JavaScript.

With Vaadin, both of these approaches become unnecessary, as it's the same technology all the way through (at least, what you write is – technically JavaScript is still involved, but because it's generated, I don't count it).

You get to use all of the tools you know and love for the back-end services to write the code for the UI, which you can then unit and functional test to your heart’s content.

The temptation to mix concerns between UI code and back-end service code must still be resisted, of course, but at least that code isn’t buried somewhere in the middle of a JSP page, ready to leap out and bite you later.

Because you’re using dynamic layouts, the app always fits properly on the screen without any extra work, addressing a pet peeve of mine, the “skinny” webapp, restraining itself to the least common denominator of screen size, thus rendering impotent my nice wide monitors.

Scala
Just because Vaadin is a Java library doesn’t restrict you to using Java to drive it, however. I made another little webapp where the whole UI was defined in Scala, calling the Vaadin APIs, and it worked like a charm. In some ways, Scala is an even better fit for Vaadin than straight Java, I suspect. I haven’t tried any other JVM compatible language, but I see no reason they wouldn’t work equally well.

Deployment and Development Cycle
As I was building the app with Maven, I added a couple of lines to my POM and was able to say “mvn jetty:run” to get my Vaadin app up and running on my local box in a few seconds. My development cycle was only a few seconds between compile and interactive tests, as I was experimenting with the trial-and-error method.

TDD would be not only possible, but easy in this situation.

I successfully deployed my little Vaadin app to ServiceMix, my OSGi container of choice, without a hitch.

Performance appeared excellent overall, although I haven’t formally tested it with a load-testing tool (yet).

Summary
So far, I'm impressed with Vaadin – more impressed than I've been with any web framework I've worked with in a number of years, in fact. I'm sure there are some warts in there somewhere, but for the benefits it brings to the table, I suspect they're easily worth it. I think the advantage to a team that already speaks fluent Java is hard to overstate, and the productivity with which good-looking, functional webapps can be produced is quite remarkable.

Over the next few weeks I’ll push it a bit harder with a complex example application, and see how it stacks up against other web app technologies I’ve worked with in a more realistic scenario.

By: Mike Nash

Basic Hibernate @OneToOne @PrimaryKeyJoinColumn Example With Maven and MySQL

November 14, 2009 7 comments

Recently we had a story which involved improving one of our data models. The table for the model had grown quite wide and we wanted to improve normalization and performance. We wanted to move a few columns from our original table (Listing) to a new table with the primary key of the new table (ListingLocation) also being a foreign key to the primary key of the original table, our one-to-one relationship. I will try to detail how we accomplished this change using a simplified example. Source code is linked at the bottom of this post.

Here is the entity relationship diagram of the old, single table structure:
Old ER Diagram
And here is the entity relationship diagram of the new, two table structure:
New ER Diagram

As you can see, it is a very simple example of a very common relationship in a database. However, implementing this in Hibernate was not as simple as I had hoped.

To get started here is the SQL used to represent our new tables:

CREATE TABLE Listing
(
  id BIGINT(20) NOT NULL AUTO_INCREMENT,
  price DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE = INNODB;

CREATE TABLE ListingLocation
(
  listingID BIGINT(20) NOT NULL,
  address VARCHAR(255),
  PRIMARY KEY (listingID),
  INDEX (listingID),
  FOREIGN KEY (listingID) REFERENCES Listing (id)
) ENGINE = INNODB;

I've used MySQL for this example because it is free and easy to set up. InnoDB tables have been used because they support foreign keys, which are crucial to this example.

Our old Listing entity was pretty basic; it looked something like this (imports & getters/setters excluded):

@Entity
@Table
public class Listing implements Serializable {

    @Id
    @GeneratedValue
    private long id;

    @Column(columnDefinition = "DECIMAL(10,2)")
    private double price;

    private String address;
    ...

In this case, creating a new instance of Listing and persisting it is very easy. When we split the entities, it becomes a little more complicated. Here is what our entities looked like after being split:

@Entity
@Table
public class Listing implements Serializable {

    @Id
    @GeneratedValue
    private long id;

    @Column(columnDefinition = "DECIMAL(10,2)")
    private double price;

    @OneToOne
    @PrimaryKeyJoinColumn
    private ListingLocation listingLocation;
    ...

and

@Entity
@Table
public class ListingLocation implements Serializable {

    @Id
    @Column(name = "listingID")
    private Long id;

    private String address;
    ...

The differences are not large, but there are a couple important points to note:

  • Adding ListingLocation to Listing with the @OneToOne & @PrimaryKeyJoinColumn annotations tells Hibernate that Listing has a one-to-one mapping with ListingLocation, using the primary key as the join column.
  • Adding the @Id & @Column(name = "listingID") annotations to the id field in ListingLocation tells Hibernate that id is, well, an ID, but that its column in the DB should be called "listingID" rather than "id", which helps an observer see the relationship quickly without looking closely at the schema. Also note that @GeneratedValue is not on id in ListingLocation as it is in Listing, because we want to specify exactly what goes in that field.

The biggest gripe I have with this one-to-one relationship is that we can no longer save Listing alone. Our real Listing entity is far more complex, with several relationships, but this was our first one-to-one relationship in Hibernate. Previously we could do something like:

Listing listing = new Listing();
listing.setPrice(price);
listing.setAddress(address);
listing.setFoo(foo); // where foo is a @OneToMany annotated entity
...
session.save(listing);

Now, we must save Listing and ListingLocation separately like this:

ListingLocation listingLocation = new ListingLocation();
listingLocation.setAddress(address);
Listing listing = new Listing();
listing.setPrice(price);
listing.setListingLocation(listingLocation);
...
session.save(listing);
if (listing.getListingLocation() != null) {
    listing.getListingLocation().setId(listing.getId());
    session.save(listing.getListingLocation()); // save ListingLocation
}

I guess I'm just a bit spoiled, but I was hoping that this would be a bit more automatic, as it is with the one-to-many/many-to-one relationships.
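The two-step save can at least be hidden behind a small helper so callers don't repeat the id-propagation logic. Below is a self-contained sketch: the Listing, ListingLocation, and Saver types are minimal stand-ins invented for illustration (the real code would use the Hibernate Session and the actual entities):

```java
public class OneToOneSaveHelper {

    // Minimal stand-ins for the entities, invented for this sketch.
    static class Listing {
        Long id;
        ListingLocation listingLocation;
    }

    static class ListingLocation {
        Long id;
    }

    // Minimal stand-in for the Hibernate session's save() call.
    interface Saver {
        void save(Object entity);
    }

    // Encapsulates the two-step save: parent first (so it gets an id),
    // then the child with the parent's id copied across as its primary key.
    public static void save(Listing listing, Saver session) {
        session.save(listing);
        if (listing.listingLocation != null) {
            listing.listingLocation.id = listing.id; // share the primary key
            session.save(listing.listingLocation);
        }
    }
}
```

With a helper like this, the rest of the codebase can keep pretending the save is a single operation.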

I have written a small app that uses these two entities and inserts a row into each table; the link is available at the bottom of this post. The config (hibernate.cfg.xml) expects MySQL running on 127.0.0.1 with a DB named 'OneToOneDemo', a user named 'root', and a password of 'password'; I have included the SQL (Setup-OneToOneDemo.sql) to set up the DB. Just extract the contents of the archive, navigate to the project directory, and run from the command line/terminal:

mvn clean compile exec:java -Dexec.mainClass=com.point2.onetoonedemo.App -e

You should see the following output:

@OneToOne @PrimaryKeyJoinColumn Hibernate Demo
----------------------------------------------
...
Hibernate:
insert into Listing (price) values (?)

Hibernate:
insert into ListingLocation (address, listingID) values (?, ?)
Saved listing ID: 1

Download Source Code (12 KB .ZIP)

If you have any questions, comments, or suggestions on how to better accomplish one-to-one Hibernate mappings, I would love to hear them. If you have any problems getting the code to compile and/or run, please let me know and I will make the necessary changes. Everything should just work, provided you modify the Hibernate config or set up your DB accordingly. I had a difficult time finding complete and recent documentation on this subject, so I hope this post and the Maven project will help.

By: Damien Gabrielson

Maven with included Dependencies

September 4, 2009 1 comment

One of the most common, and, in my opinion, valid criticisms of Apache Maven is the way it handles dependencies.

The default way of doing dependencies in Maven is for a build to look for dependency jars in one of a specified number of places. If you don’t specify a location, it will start with your local Maven repository, then a stock set of online repositories (such as ibiblio).

What this means is that you can quickly and easily add a dependency to your project by just listing it in the POM, and Maven will frequently just go get it for you as required. For dependencies that don’t change rapidly (rapidly changing dependencies are generally a bad idea for entirely separate reasons), the local copy is always used, so after the first download you don’t fetch the jar again unless it changes.
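For comparison, a stock dependency is declared with just its coordinates (no paths, no jars in source control), and Maven resolves it from the local repository or a remote one. Here commons-lang is shown as an example; the version is illustrative:

```xml
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.5</version>
</dependency>
```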

The downside of this (and hence the basis for the criticism) is that your local repository is considered a cache, and is not checked in to your source control system. Maven aficionados believe that source control should be for what the name implies: source, not jars (which are an artifact, not source code). This is good, except in situations where the local repository is removed (as it’s a cache, and not “backed up” by being in source control) and you want to rebuild your project. Normally, no problem: Maven will simply go back out into the wilds of the internet and get your jars for you again. Except when it can’t, which is where the problem starts.

Let’s say your project depends on foo-1.0.jar, for the sake of discussion. Foo-1.0.jar is found in a repository at http://baz.com. No problem. Two months from now, you come back and make a change to your project, and your local repo doesn’t have foo-1.0.jar in it. Maven goes to get it for you, but baz.com has gone offline. Oops. Now you can’t build your project at all. Worse, let’s say you *do* find foo-1.0.jar, but it’s been updated. Granted, this is a gross violation of the version scheme, but as baz.com is not under your control, it *can* happen – and Murphy was an optimist.

The first level of defense against this is to have a local jar proxy that *is* backed up, like Nexus. Nexus not only provides a safe haven for the exact versions of Jars you need to build your project, but also proxies new jars automatically, when set up correctly. That way, once you use that external dependency, it’s available on the local proxy immediately forever more, in precisely the correct version.

This solution is so good, that I always consider Nexus and Maven to be two parts of one solution – as you don’t really have reliable and repeatable builds without Nexus in the mix.

Another way to go, however, is perhaps even more straightforward, and might be appropriate if you have a limited number of “internal” binary dependencies – that is, binary dependencies that you generate within your organization between projects.

This is the option I want to describe in this blog, and it takes a form very similar to the old Ant style of having a “lib” directory in your project, and checking jars into source control.

You simply create a directory (called lib or whatever you wish, but “lib” might be more standard), and put your jar dependencies in it. Then you refer to each dependency in your pom.xml file with a special syntax, like so:


    <dependency>
      <groupId>mygroup</groupId>
      <artifactId>myjar</artifactId>
      <version>1.0.0</version>
      <scope>system</scope>
      <systemPath>${basedir}/src/main/resources/lib/myjar-1.0.0.jar</systemPath>
    </dependency>

Now myjar-1.0.0.jar will be on my classpath during compilation, easy as that. I can then check in that nicely versioned jar file and have a fully self-contained project, while at the same time being able to leverage the power of Maven and its rich ecosystem of plugins. If you’re using the shade plugin, this is an excellent way to create a self-contained executable jar file containing all of your necessary dependencies in one shot, rather than having to worry about classpaths at runtime.
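As a sketch of that shade setup (the plugin version and main class here are illustrative, not from any particular project), the plugin is bound to the package phase so the uber-jar is built on every `mvn package`:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>1.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- hypothetical main class, for an executable jar -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.Main</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```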

In web applications, you can do even better. By placing the dependency in the correct spot in your source tree, you can have it automatically included in the finished .war file, while at the same time finding it in your classpath during compile time, like so:


    <dependency>
      <groupId>xstream</groupId>
      <artifactId>xstream</artifactId>
      <version>1.3.1</version>
      <scope>system</scope>
      <systemPath>${basedir}/src/main/webapp/WEB-INF/lib/xstream-1.3.1.jar</systemPath>
    </dependency>

This includes the xstream jar in my webapp. Every IDE that can digest a POM file (which includes my three main ones: IntelliJ IDEA, Eclipse and NetBeans) also automagically finds the dependencies.

One minor constraint here is that you must use a *versioned* jar file – i.e. it must be named according to the artifact-version.jar pattern. I consider this a feature, not a limitation, as I’m a firm believer that all artifacts should be properly versioned, so I can tell what source code originally created them. Yes, I know that can be done internally, in a manifest or such, but I like to see the label on the outside of the box, so to speak.
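As a sanity check of that naming convention, the pattern can be captured in a few lines; the class and regex below are purely illustrative, not part of Maven itself:

```java
import java.util.regex.Pattern;

// Purely illustrative: checks that a file name follows the
// artifact-version.jar pattern, e.g. "myjar-1.0.0.jar".
public class VersionedJarCheck {

    private static final Pattern VERSIONED_JAR =
            Pattern.compile("^[A-Za-z0-9_.-]+-\\d+(?:\\.\\d+)*\\.jar$");

    public static boolean isVersioned(String fileName) {
        return VERSIONED_JAR.matcher(fileName).matches();
    }

    public static void main(String[] args) {
        System.out.println(isVersioned("myjar-1.0.0.jar"));   // true
        System.out.println(isVersioned("xstream-1.3.1.jar")); // true
        System.out.println(isVersioned("myjar.jar"));         // false
    }
}
```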

As one colleague put it, now you can be assured of coming back to your project months or years later, even if you don’t have internet connectivity, and be able to build the darn thing again, without going off on a wild jar chase all over the internet.

This pattern can overcome a serious and legitimate objection to Maven, and might be appropriate in many situations. Give it a try, let me know what you think!

By: Mike Nash

Specs and Selenium Together

August 25, 2009 1 comment

I recently had the chance to dive into a new project, this one with a rich web interface. In order to create acceptance tests around the (large and mostly untested) existing code, we’ve started writing them with Specs.

Once we have our specs written to express what the existing functionality is, we can refactor and work on the codebase in more safety, our tests acting as a “motion detector” to let us know if we’ve broken something, while we write more detailed low-level tests (unit tests) to allow easier refactoring of smaller pieces of the application.

What’s interesting about our latest batch of specs is that they are written to express behaviours as experienced through a web browser – e.g. “when a user goes to this link and clicks this button on the page, he sees something happen”. In order to make this work we’ve paired up specs with Selenium, a well-known web testing framework.

By abstracting out the connection to Selenium into a parent Scala object, we can build a DSL-ish testing language that lets us say things like this:


object AUserChangesLanguages extends BaseSpecification {

  "a public user who visits the site" should beAbleTo {
    "Change their language to French" in {
      open("/")
      select("languageSelect", "value=fr")
      waitForPage
      location must include("/fr/")
    }
    "Change their language to German" in {
      select("languageSelect", "value=de")
      waitForPage
      location must include("/de/")
    }
    "Change their language to Polish" in {
      select("languageSelect", "value=pl")
      waitForPage
      location must include("/pl/")
    }
  }
}

This code simply expresses that when a user selects a language from a drop-down of languages, the page should refresh (via some JavaScript on the page) and redirect them to a new URL. The new URL contains the language code, so we can tell we’ve arrived at the right page by the “location must include…” line.

Simple and expressive, these tests can be run with any of your choice of browsers (e.g. Firefox, Safari, or, if you insist, Internet Explorer).

Of course, there’s lots more to testing web pages, and we’re fleshing out our DSL day by day as it needs to express more sophisticated interactions with the application.

We can get elements of the page (via Xpath), make assertions about their values, click on things, type things into fields and submit forms, basically all the operations a user might want to do with a web application.

There are some frustrations, of course. The Xpath implementation on different browsers works a bit differently – well, ok, to be fair, it works on all browsers except Internet Exploder, where it fails in various frustrating ways. We’re working on ways to overcome this that don’t involve having any “if browser == ” kind of logic.

It’s also necessary to start the Selenium RC server before running the specs, but a bit of Ant magic fixes this.
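The “Ant magic” can be as small as a target that spawns the server before the specs run. This is a sketch only; the jar path and port are assumptions, not the project’s actual build file:

```xml
<target name="start-selenium">
  <!-- Spawn the Selenium RC server in the background so the
       specs can connect to it; adjust the jar path as needed. -->
  <java jar="lib/selenium-server.jar" fork="true" spawn="true">
    <arg value="-port"/>
    <arg value="4444"/>
  </java>
</target>
```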

We’ve got these specs running on our TeamCity continuous integration server, using the TeamCity runner supplied with Specs, where we get nicely formatted reports as to what’s pending (e.g. not finished being written yet), what’s passing, and what’s failing.

The specs written with Selenium this way are also a bit slow, as they must actually wait in some cases for the browser (and the underlying app!) to catch up. When run with IE as the browser, they’re more than just a bit slow, in fact…

They are, however, gratifyingly black-box, as they don’t have any connection to the code of the running application at all. For that matter, the application under test can be written in any language at all, and in this case is a combination of J2EE/JSP and some .NET.

There’s a lot of promise in this type of testing, even with its occasional frustrations and limitations, and I suspect we’ll be doing a lot more of it.

By: Mike Nash

Scala and Java Together

May 18, 2009 1 comment

Given a need to work towards more scalable systems, and to increase concurrency, many developers working in Java wonder what’s on the other side of the fence.

Languages such as Erlang and Haskell bring the functional programming style to the fore, as they essentially force you to think and write functionally – they simply don’t support (well, at least not easily) doing things in a non-functional fashion.

The mental grinding of gears, however, can be considerable for developers coming from the Java (or C/C++/C#) world, who have become accustomed to the object-oriented paradigm. An excellent solution might be to consider the best of both worlds: Scala.

I had the opportunity this weekend to renew my acquaintance with Scala, and to work on blending it with the old mainstay, Java. Scala runs on the JVM, so as you might expect, it can be combined pretty readily with Java – more cleanly than before, given some new Maven and IDE plugins.

Details and screen shots here…

By: Mike Nash