Archive for November, 2009

First Look at Vaadin

November 22, 2009 8 comments

I’ve had the chance over a recent weekend to have a first crack at a web framework called Vaadin.

I was originally browsing for news about the latest release of Google’s GWT framework when I stumbled on a reference to Vaadin, and decided to go take a look. What I found intrigued me, and I decided to take it for a test drive, as I was down sick for a couple of days with a laptop nearby…. My back became annoyed with me, but it was worth it, I think.

First Look
First off, the practicalities: Vaadin is open source, and with a reasonable license, the Apache License. The essential bits of Vaadin are contained in a single JAR, and it’s both Ant and Maven friendly right out of the box.

The next thing that struck me about Vaadin was the documentation. The first unusual thing about its documentation was the fact of its existence, as open source projects are downright notorious for poor documentation. Vaadin is a pleasant exception, with tons of examples, a well-organized API doc in the usual JavaDoc format, and even the “Book of Vaadin”, an entire PDF book (also available in hardcopy) that takes you through Vaadin in enough detail to be immediately productive.

Given that surprisingly pleasant start, I dug deeper, creating a little app of my own.

Just the Java, Ma’am
The main thing that kept me interested in Vaadin once I started digging further was that it’s pure Java. Many frameworks, such as Wicket, talk about writing your UI in Java, but there’s still a corresponding template and some wiring that happens to put the code and the template together. Not so with Vaadin.

When they say “just Java”, they mean it – your entire UI layer is coded in Java, plain and simple. No templates, no tag libraries, no Javascript, no ‘nuthin. It’s reminiscent of the Echo framework, except that in Vaadin’s case the Javascript your code automatically produces is generated by Google’s GWT, instead of Echo’s own Core.JS library.

Unlike GWT, though, the Vaadin approach doesn’t bind you to any specific source code – it’s just a binary JAR you put on your classpath.

The only things in my sample app, other than two Java files, were a web.xml and a CSS stylesheet, both of which were only a few lines long. And this was no “Hello, World”, either, but a rich AJAX webapp with a tree menu, fancy non-modal “fading” notifications, images, complex layouts, and a form with built-in validation. And it took maybe four hours of total work to produce – and that was from a standing start, as I’d never heard of Vaadin before last Thursday. Not bad, not bad at all.
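To give a feel for what “just Java” looks like, here’s a minimal sketch of a Vaadin application. It assumes the Vaadin 6 API that’s current as I write this (class and method names from memory, so treat them as illustrative rather than gospel):

```java
import com.vaadin.Application;
import com.vaadin.ui.Button;
import com.vaadin.ui.Label;
import com.vaadin.ui.Window;

// A minimal Vaadin app: the whole UI is plain Java, no templates anywhere.
public class HelloVaadin extends Application {
    @Override
    public void init() {
        final Window main = new Window("Hello Vaadin");
        main.addComponent(new Label("The entire UI layer, coded in Java."));
        main.addComponent(new Button("Notify me", new Button.ClickListener() {
            public void buttonClick(Button.ClickEvent event) {
                // One of those non-modal "fading" notifications
                main.showNotification("Hello!",
                        Window.Notification.TYPE_TRAY_NOTIFICATION);
            }
        }));
        setMainWindow(main);
    }
}
```

The only other wiring is the few lines of web.xml pointing Vaadin’s servlet at this application class.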

I found I was able to get a very capable little webapp up and running with no need to invent my own components, even though I had trees and sliders and menus and other assorted goodies on the page. It worked in every browser I was able to try it in, which is certainly not the case for my own hand-rolled JavaScript most of the time 🙂

I haven’t yet tried creating my own custom components, but it certainly looks straightforward enough.

I did try linking to external resources, and included non-Vaadin pages in my app, with no difficulties, so it appears that Vaadin plays well with others, and can be introduced into an existing project that uses, for instance, a whack of JSPs that one might want to obsolete.

I think Vaadin warrants more exploration, and I intend to put it further through its paces in the next few weeks. It appears extremely well-suited to web applications, as opposed to websites with a tiny bit of dynamic stuff in them.

It offers an interesting alternative to some of the patterns I’ve seen for advanced dynamic webapp development so far.

One approach I’ve seen a lot is to divide the duties of creating an app into the “back end” services and the “UI”. Generally the UI is written in either JavaScript, or uses Flex or some other semi-proprietary approach. The “back end” stuff is frequently written to expose its services as REST, then the two are bolted together. The pain point here happens when the two meet, as it’s common and easy to have minor (or major!) misunderstandings between the two teams. This usually results in a lot of to-and-fro to work out the differences before the app comes all the way to life.

The other approach, more common on smaller or resource-strapped teams, is to have the same group responsible for both UI and back-end services. This reduces the thrash in the joints a bit, but doesn’t eliminate it, because the two technologies on the two sides of the app aren’t the same. You can’t test JavaScript the same way you test Java, for instance, and they’re two different languages – one of which (Java) has far better tooling support than the other. IDE support, for instance, is superb for Java, and spotty at best for JavaScript.

With Vaadin, both of these approaches become unnecessary, as it’s the same technology all the way through (at least, what you write is – technically it’s still using JavaScript, but because that’s generated, I don’t count it).

You get to use all of the tools you know and love for the back-end services to write the code for the UI, which you can then unit and functional test to your heart’s content.

The temptation to mix concerns between UI code and back-end service code must still be resisted, of course, but at least that code isn’t buried somewhere in the middle of a JSP page, ready to leap out and bite you later.

Because you’re using dynamic layouts, the app always fits properly on the screen without any extra work, addressing a pet peeve of mine, the “skinny” webapp, restraining itself to the least common denominator of screen size, thus rendering impotent my nice wide monitors.

Just because Vaadin is a Java library doesn’t restrict you to using Java to drive it, however. I made another little webapp where the whole UI was defined in Scala, calling the Vaadin APIs, and it worked like a charm. In some ways, Scala is an even better fit for Vaadin than straight Java, I suspect. I haven’t tried any other JVM compatible language, but I see no reason they wouldn’t work equally well.

Deployment and Development Cycle
As I was building the app with Maven, I added a couple of lines to my POM and was able to say “mvn jetty:run” to get my Vaadin app up and running on my local box in a few seconds. My development cycle was only a few seconds between compile and interactive tests, as I was experimenting with the trial-and-error method.
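For the curious, the “couple of lines” amounted to something like the following POM fragments – one in the dependencies section, one in the build plugins section (the version numbers here are illustrative; check for the current releases):

```xml
<!-- In <dependencies>: the single Vaadin jar -->
<dependency>
  <groupId>com.vaadin</groupId>
  <artifactId>vaadin</artifactId>
  <version>6.1.4</version> <!-- or whatever the current release is -->
</dependency>

<!-- In <build><plugins>: enables "mvn jetty:run" for quick local testing -->
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
</plugin>
```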

TDD would be not only possible, but easy in this situation.

I successfully deployed my little Vaadin app to ServiceMix, my OSGi container of choice, without a hitch.

Performance appeared excellent overall, although I haven’t formally tested it with a load-testing tool (yet).

So far, I’m impressed with Vaadin – more impressed than I’ve been with any web framework I’ve worked with in a number of years, in fact. I’m sure there are some warts in there somewhere, but for the benefits it brings to the table, I suspect they’re easily worth it. I think the advantage to teams that already speak fluent Java is hard to overstate, and the productivity in producing good-looking, functional webapps is quite remarkable.

Over the next few weeks I’ll push it a bit harder with a complex example application, and see how it stacks up against other web app technologies I’ve worked with in a more realistic scenario.

By: Mike Nash


Was Our Velocity Seriously Zero?

November 20, 2009 Comments off

Two Sprints ago something happened for the first time since I have been a Team Lead – my team’s velocity was a big fat zero. As a team we are fairly experienced in Agile and Scrum and quite self-organizing, yet somehow our Scrum board was portraying us as a rookie crew. The first question that may come to your mind is, “did you guys have to deal with a lot of production bugs or injected stories?” Sadly, the answer would be no. The team obviously wanted to get to the bottom of this anomalous Sprint, so performing a root cause analysis seemed like the right thing to do.

After working through a root cause analysis exercise as a team (this is actually something we do on a weekly basis) we had a pretty good idea of what had contributed to us delivering a big fat goose egg. There were no real surprises revealed, but it did help to get some stuff written down and in front of our faces. On a positive note, the team did agree that even though we didn’t burn up any stories, a lot of work actually did get done. So what went wrong?

  • There were a lot of technical unknowns in the new project we were working on, resulting in stories with fuzzy boundaries.
  • The team was unsure when they were done with a story, and as a result stories dragged on longer than they should have.
  • Our stories were obviously poorly broken down.
  • We were not properly representing the work that was actually getting done on our Scrum board.
  • The knowledge gained from earlier spikes may not have been leveraged to its full potential.
  • We had gotten a little too comfortable in our Sprint planning process. Our previous projects (in the recent past) were usually well defined and understood. Because of this we found ourselves spending less time planning because we could get away with a just-in-time mentality. This didn’t work for our current project.

Naturally we want our root cause analysis exercises to result in some sort of action plan. With an understanding of what led to our questionable Sprint, the team was able to put some corrective measures into place for the start of the next Sprint.

  • For each story/task on the Scrum board, represent hurdles and stalls visually. Even though we talk about them in daily stand-ups, seeing them on the board can help show severity if it goes on for a long period of time.
  • During our kick-off meeting break down each story into small, bite-sized tasks and use these on the Scrum board. They should be small enough that we have no cards on the board that have a complexity value over 1 – the smallest size possible on our scale.
  • Do not end the kick-off meeting until all task cards are completed, prioritized based on importance and dependencies, and a commitment has been agreed upon.
  • If we find ourselves doing something that is not represented on the Scrum board,
    1. ask yourself if you should even be doing it.
    2. if you should be doing it, write up a new card and put it on the board to ensure that everything we do is tracked.
  • Continue with one week Sprints.

So how did the team fare after putting their plan into action? They met their commitment. The kick-off meeting resulted in a Sprint plan that was straightforward and well defined. It provided a clear-cut set of tasks that the team could easily base their commitment on. During the Sprint they were more mindful of the work at hand and made decisions as to whether it belonged in the iteration or not. As a whole we re-focused on areas that we had gotten too comfortable with and neglected.

It was very satisfying to see the team face this problem head-on. They calmly analyzed it, devised an action plan, and executed it. They could have easily panicked, but they didn’t. They proved that they were a mature team that could recognize a problem, and use their agility to correct it.

By Hemant J. Naidu

Basic Hibernate @OneToOne @PrimaryKeyJoinColumn Example With Maven and MySQL

November 14, 2009 7 comments

Recently we had a story which involved improving one of our data models. The table for the model had grown quite wide and we wanted to improve normalization and performance. We wanted to move a few columns from our original table (Listing) to a new table with the primary key of the new table (ListingLocation) also being a foreign key to the primary key of the original table, our one-to-one relationship. I will try to detail how we accomplished this change using a simplified example. Source code is linked at the bottom of this post.

Here is the entity relationship diagram of the old, single table structure:
Old ER Diagram
And here is the entity relationship diagram of the new, two table structure:
New ER Diagram

As you can see, it is a very simple example of a very common relationship in the database. However, what we found when implementing this in Hibernate was not as simple as I had hoped.

To get started here is the SQL used to represent our new tables:

CREATE TABLE Listing (
  id BIGINT(20) NOT NULL AUTO_INCREMENT,
  price DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE ListingLocation (
  listingID BIGINT(20) NOT NULL,
  address VARCHAR(255),
  INDEX (listingID),
  FOREIGN KEY (listingID) REFERENCES Listing (id)
) ENGINE=InnoDB;
I’ve used MySQL for this example because it is free and easy to set up. InnoDB table types have been used because they support foreign keys, which is crucial to this example.

Our old Listing entity was pretty basic; it looked something like this (imports & getters/setters excluded):

@Entity
public class Listing implements Serializable {
    @Id @GeneratedValue
    private long id;
    @Column(columnDefinition = "DECIMAL(10,2)")
    private double price;
    private String address;
}

In this case, creating a new instance of Listing and persisting it is very easy. When we split the entities, it becomes a little more complicated. Here is what our entities looked like after being split:

@Entity
public class Listing implements Serializable {
    @Id @GeneratedValue
    private long id;
    @Column(columnDefinition = "DECIMAL(10,2)")
    private double price;
    @OneToOne @PrimaryKeyJoinColumn
    private ListingLocation listingLocation;
}


@Entity
public class ListingLocation implements Serializable {
    @Id
    @Column(name = "listingID")
    private Long id;
    private String address;
}

The differences are not large, but there are a couple important points to note:

  • Adding ListingLocation to Listing with @OneToOne & @PrimaryKeyJoinColumn annotations tells Hibernate the Listing has a one-to-one mapping with ListingLocation by using the Primary Key as the join column.
  • Adding @Id & @Column(name = “listingID”) annotations to our id field in ListingLocation tells Hibernate that id is, well, an ID, but that the column should not be called “id” in the DB, but “listingID”, as that will help an observer see the relationship quickly without looking closely at the schema. Also, it is good to note that @GeneratedValue is not on “id” in ListingLocation as it is in Listing, because we want to specify exactly what goes in that field.

The biggest gripe I have with using the one-to-one relationship is that we can no longer save Listing only. Our REAL Listing entity is far more complex with several relationships, but this was our first one-to-one relationship with Hibernate. Previously we could do something like:

Listing listing = new Listing();
listing.setFoo(foo); // where foo is a @OneToMany annotated entity
session.save(listing); // session is an open Hibernate Session

Now, we must save Listing and ListingLocation separately like this:

ListingLocation listingLocation = new ListingLocation();
Listing listing = new Listing();
listing.setListingLocation(listingLocation);
session.save(listing); // save Listing first so its id is generated
if (listing.getListingLocation() != null) {
    listing.getListingLocation().setId(listing.getId());
    session.save(listing.getListingLocation()); // save ListingLocation
}

I guess I’m just a bit spoiled, but I was hoping that this would be a bit more automatic, as it is with the one-to-many/many-to-one relationships.

I have written a small app that uses these two entities and inserts a row into each table; the link is available at the bottom of this post. The config (hibernate.cfg.xml) expects that you have MySQL running locally with a DB named ‘OneToOneDemo’, a user named ‘root’, and a password of ‘password’; I have included the SQL (Setup-OneToOneDemo.sql) to set up the DB. Just extract the contents of the archive, navigate to the project directory, and from the command line/terminal run:

mvn clean compile exec:java -Dexec.mainClass=com.point2.onetoonedemo.App -e

You should see the following output:

@OneToOne @PrimaryKeyJoinColumn Hibernate Demo
insert into Listing (price) values (?)

insert into ListingLocation (address, listingID) values (?, ?)
Saved listing ID: 1

Download Source Code (12 KB .ZIP)

If you have any questions, comments, or suggestions on how to better accomplish one-to-one Hibernate mappings, I would love to hear about them. If you have any problems getting the code to compile and/or run, please let me know and I will make the necessary changes. Everything should just work, provided you modify the Hibernate config or set up your DB to match. I had a difficult time finding complete and recent documentation on this subject, so I hope this post and the Maven project will help.

By: Damien Gabrielson

Shuhari and the Evolution of the Team

November 13, 2009 Comments off

The concept of Shuhari was mentioned on more than one occasion while I was at Agile 2009 in Chicago this past summer. Alistair Cockburn was the first to mention it in his keynote speech I Come to Bury Agile, Not to Praise It. It then surfaced again when Declan Whelan spoke about Learning is the Key to Agile Success: Building a Learning Culture on Your Agile Team. Shuhari originated in Japan as a martial arts concept describing the stages of learning to mastery. The word is actually three words combined.

Shu – protect or obey. It is based on traditional wisdom where the student learns all they can from their master, and accepts any instruction and criticism they are offered. The master acts as a protector, steering the student and watching out for their best interests.

Ha – detach or digress. Up until this point the student did everything based on fundamental teachings. At this point the student will start to break with tradition and apply more imaginative techniques, building on what they have learned. They will also start to question and start to evolve based on their own personal experience.

Ri – leave or separate. At this point the learner transcends to a point where there are no longer specific techniques being followed, since everything now comes naturally. The student has learned everything they can from their master and leaves. The student bases their learning almost completely on self-discovery, with no actual instruction.  Ideally the student should have surpassed the master, allowing the art to progress.

I found this concept to be very intriguing and it got me thinking about how it applies closer to home.  The development team at Point2 is always in a state of learning. In many instances the student and the teacher roles are obvious. A new college graduate will likely be paired with a veteran developer in a mentoring role, while more experienced developers will pair with each other, hopefully learning from one another.  I really believe that the last point in the Ri description is key. In order for the art to evolve and progress, the student must surpass their teacher.

A dilemma arises when everyone reaches that Ri stage and no one is able to get back into the Shu mindset. If everyone believes there is nothing new to learn, an obvious problem is created – your team will likely fail because they will lose their ability to compete. This is where having a culture of learning is paramount. People have to think of Shuhari as a circular idea that has no beginning and no end. If an entire team is dedicated to making their colleagues the best that they can be, and its members are willing to learn from each other, the possibility of stagnating as a team fades away.

How is Point2 attempting to keep Shuhari a cyclical pattern? We mentor our new recruits, putting trust in our veteran developers to lead them down the correct paths. We pair program nearly 100% of the time, ensuring that developers are always teaching each other new techniques and skills. We dedicate Friday afternoons to personal professional development. People can choose to present or lead a workshop for a group, get together to develop a small application, or just spend some “me time” reading a book or watching a webcast. The point is that our development team understands the value in learning and can see the relationship between the development of their careers and the success of the business.  Shuhari may not be a well known concept in the halls of Point2, but that doesn’t mean it’s not alive and well.

By Hemant J. Naidu

Constant improvement is a Team Sport

November 13, 2009 Comments off

We focus pretty heavily on constant improvement here at Point2.  Improving our company, improving our departments, improving our teams, as well as individual improvement.  There are many things that we do well at the team and department levels, but they don’t always filter all the way down to the individual level.

The first thing to recognize is that while the desire to improve is a great first step, it doesn’t amount to anything unless you translate that desire into action.  There are many different ways to do this, so I’m just going to stick with my favorites at the moment.  I generally hear people give the advice that you should write down your goals because it increases the likelihood that you will follow through.  Writing it down is a good first step; it forces you to accurately articulate what you are trying to do.  However, just writing it down isn’t enough motivation, if you ask me.  If you really want to make the improvement, you need to tell people what you’re trying to do.

Trust your peers. They care about you and your improvement!  (if they don’t, find a new job!)  Telling your peers what you are currently trying to improve on will greatly increase your chances of success for two primary reasons.

The first reason is simply that you won’t want to let your peers down.  When you tell your team that you are trying to achieve a goal, they want to see you achieve it.  If you work on a great team that promotes improvement, your team now has a vested interest in seeing you make your desired improvements and reach your goals.  You’ll be much less likely to give up or procrastinate while they’re watching your progress.

This leads directly to the second reason telling your peers about your desired improvements and goals is beneficial.  As I said, they care about your improvement, and they have a vested interest in seeing you succeed, so they will HELP YOU.  All too often, people try to make improvements and reach goals in a vacuum.  You work for the team, let the team also work for you.

There is no medal or gold star for making an improvement solo without any outside help.

By: Chris Dagenais

Value AND Velocity!

November 13, 2009 Comments off

In last week’s post I talked about how a stakeholder can use Value Points as a useful metric, and how to assign Value Points to epics.  Value Points are good for determining whether the team is working on the right projects but they don’t paint the entire picture.  This is where Velocity points come in.

At Point2, we estimate story sizes based on relative complexity.  The simplest thing we can do is a 1, something about twice as difficult is a 2, and so on.  Some people prefer time estimates; either method of sizing stories is fine here.

Once you have both Value and Size estimates for your stories, you can plot them on a scatter plot.  Here I plotted story size on the X axis, and Value on the Y axis.  Small size, high value stories are in the upper left quadrant, low value, large size stories are in the lower right.  In this example, I would probably get the team to do all but 3 of these stories, those being the ones with size 5, value 2 and 3, and size 3, value 1.


The interaction between value and size is an excellent way to make sure your team is delivering the right product.  High value, low complexity stories are great for delivering some quick value, possibly as a means of generating the capital to tackle the larger projects.  Low value, high complexity stories might be best left on the shelf, or alternate means of solving the problem sought.
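As a rough sketch of the triage the scatter plot supports, you can rank stories by value delivered per point of size. The helper below is purely illustrative – the story names and numbers are invented, and real prioritization would also weigh dependencies and capital constraints:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Ranks stories by value-per-size so high-value, low-complexity work floats up.
public class StoryRanker {
    public static class Story {
        public final String name;
        public final int value; // Value Points assigned by the stakeholder
        public final int size;  // complexity points assigned by the team
        public Story(String name, int value, int size) {
            this.name = name; this.value = value; this.size = size;
        }
        public double ratio() { return (double) value / size; }
    }

    public static List<Story> rank(List<Story> stories) {
        List<Story> sorted = new ArrayList<>(stories);
        // Highest value-per-point first; the "leave on the shelf" candidates sink.
        sorted.sort(Comparator.comparingDouble(Story::ratio).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        for (Story s : rank(Arrays.asList(
                new Story("quick win", 5, 1),
                new Story("solid bet", 3, 3),
                new Story("big slog", 2, 5)))) {
            System.out.println(s.name + " -> " + s.ratio());
        }
    }
}
```

A pure ratio ordering is simplistic, but it makes the upper-left and lower-right quadrants of the plot jump out immediately.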

By: Tefon Obchansky

MS SqlServer Row Value Concatenation

November 6, 2009 Comments off

Recently, a colleague and I found ourselves in the situation of needing to concatenate values from multiple rows of a MS SqlServer database table. We were trying to form a comma-delimited list of phone numbers for every customer in our database. One customer may have many phone numbers. Here is a simplified diagram visualizing the relationship between customer and phone numbers:


Customer - Phone Number Relationship

The format of data we were looking for was:

CustomerId PhoneNumbers
1 “(306) 555-1111”, “555-2222”, “306-555-3333”
2 “3065554444”

We tried various queries to perform the concatenation correctly but none of our solutions seemed to do the job perfectly; either the final ‘PhoneNumbers’ string would have an additional comma at the end or some other undesired effect. We finally came across an article explaining in great detail how to use T-SQL (the SQL dialect used by SqlServer) to perform the operations we needed.

This article has various examples describing AND explaining how to do multiple row value concatenations such as explicit examples for “Concatenating values when the number of items is small and known upfront” and “Concatenating values when the number of items is not known”. The author even walks through a recursive solution or two.

We knew that none of our customers had more than five phone numbers, based on the fact that they were limited to the PhoneType enumeration, which only has five values. This allowed us to use the article’s first example to get our job done efficiently. Here is our final solution:

SELECT CustomerId,
 REPLACE(
  '"' + MAX( CASE seq WHEN 1 THEN phoneNumber ELSE '' END ) + '",' +
  '"' + MAX( CASE seq WHEN 2 THEN phoneNumber ELSE '' END ) + '",' +
  '"' + MAX( CASE seq WHEN 3 THEN phoneNumber ELSE '' END ) + '",' +
  '"' + MAX( CASE seq WHEN 4 THEN phoneNumber ELSE '' END ) + '",' +
  '"' + MAX( CASE seq WHEN 5 THEN phoneNumber ELSE '' END ) + '"',
  ',""', '' ) AS PhoneNumbers
FROM (
 SELECT pn1.CustomerId, pn1.phoneNumber,
  ( SELECT COUNT(*)
    FROM PhoneNumber pn2
    WHERE pn2.CustomerId = pn1.CustomerId
    AND pn2.phoneNumber <= pn1.phoneNumber )
 FROM PhoneNumber pn1 ) PhoneNumbersPerParty ( CustomerId, phoneNumber, seq )
GROUP BY CustomerId

By: Jesse Webb