Posts Tagged ‘Scala’

Removing empty XML elements in Scala

March 21, 2011 1 comment
import xml.transform.{RuleTransformer, RewriteRule}
import xml.{NodeSeq, Node, Elem}
import xml.Utility.trim

object RemoveEmptyElementsSpike {
  val x1 = <xml><empty/><nonEmpty>data</nonEmpty></xml>
  val x2 = <xml><nonEmpty name="legend"></nonEmpty><empty /></xml>
  val x3 = <xml><Empty><empty/></Empty><nonEmpty>data</nonEmpty></xml>
  val x4 = <xml><empty></empty></xml>
  val x5 = <xml><empty c={val s: String = null; s}/><nonEmpty name="f"/></xml>

  class RemoveEmptyTagsRule extends RewriteRule {
    override def transform(n: Node) = n match {
      case e @ Elem(prefix, label, attributes, scope, child @ _*) if
          isEmptyElement(e) => NodeSeq.Empty
      case other => other
    }
  }

  val myRule = new RuleTransformer(new RemoveEmptyTagsRule)

  private def isEmptyElement(n: Node): Boolean = n match {
    case e @ Elem(prefix, label, attributes, scope, child @ _*) if
        (e.text.isEmpty &&
          (e.attributes.isEmpty || e.attributes.forall(_.value == null)) &&
          e.child.isEmpty) => true
    case other => false
  }

  def main(args: Array[String]) = {
    println(myRule(trim(x2)))
    println(myRule(trim(x5)))
  }
}

/* Output
<xml><nonEmpty name="legend"></nonEmpty></xml>
<xml><nonEmpty name="f"></nonEmpty></xml>
*/

Windows 7, IntelliJ 9.0.*, and the Scala plugin

August 5, 2010 Comments off

About a week ago I started having problems running tests in IntelliJ 9.0.1. I didn’t have many plugins installed, in fact, I only had one installed, the Scala plugin from JetBrains. After an update to the plugin and IntelliJ (to 9.0.2) whenever I would run tests the project failed to make, generating 100+ errors and 16 warnings. I contacted our helpdesk team, who installed 9.0.3 Ultimate Edition and still encountered the same errors. After a few hours trying to find the problem myself, I sought help from JetBrains.

As it turns out, I'm not the only person to experience this problem, and according to JetBrains the fix will require a patch for IntelliJ or the plugin itself; however, during my correspondence with JetBrains, I found a fix myself.

First, download the latest version of Scala and extract it. It doesn't matter where you extract it to, because all you need is to have it somewhere on your computer.

In IntelliJ, click the Project Structure icon and select Facets under Project Settings. Click the “+” to expand the Scala Facet, and click to select Scala (importer). Check the box labeled “Use Scala compiler libraries from specified jars” and update the fields to resemble mine in the screenshot below.

Apply the changes, and you should be good to go.

By Chris Powell

OSGi Adventures, Part 1 – A Vaadin front-end to OSGi Services

December 1, 2009 1 comment

Extending my recent experiments with the Vaadin framework, I decided I wanted to have a Vaadin front-end talking to a set of OSGi services on the back end. Initially, these will be running within the same OSGi container, which in this case is FUSE 4, the commercially supported variant of Apache ServiceMix.

One of my goals was to achieve a loose coupling between the Vaadin webapp and the backing services, so that new services can readily be added, started, stopped, and updated, all without any impact on the running Vaadin app. I also wanted to maintain the convenience of being able to run and tinker with the UI portion of my app by just doing a “mvn jetty:run”, so the app needed to be able to start even if it wasn’t inside the OSGi container.

Fortunately, doing all this is pretty easy, and in the next series of articles I’ll describe how I went about it, and where the good parts and bad parts of such an approach became obvious.

In this part, we’ll start by describing the Vaadin app, and how it calls the back-end services. In later parts, I’ll describe the evolution of the back-end services themselves, as I experimented with more sophisticated techniques.

Vaadin Dependency
I’m building all my apps with Apache Maven, so the first step was to create a POM file suitable for Vaadin. Fortunately, Vaadin is a single jar file, and trivial to add to the classpath. My POM needed this dependency:


Here I’m using the trick of specifying a systemPath for my jar, instead of retrieving it on demand from a local Nexus repository or from the internet, but that’s just one way of doing it – the main thing is to get this one Vaadin jar on your path.

Next I proceeded to tweak my web.xml to have my top-level Vaadin application object available on a URL. The main Vaadin object is an extension of a Servlet, so this is also very easy – here's my web.xml in its entirety:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://java.sun.com/xml/ns/javaee"
    id="WebApp_ID" version="2.5">

    <servlet>
        <servlet-name>Admin</servlet-name>
        <servlet-class>com.vaadin.terminal.gwt.server.ApplicationServlet</servlet-class>
        <init-param>
            <param-name>application</param-name>
            <param-value>Admin</param-value>
        </init-param>
    </servlet>

    <servlet-mapping>
        <servlet-name>Admin</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

</web-app>

In this situation my “Admin” class is in the default package, which is not generally a good practice, but you get the idea.

Menu and MenuDispatcher
I then used the “Tree” class in Vaadin to build myself a nice tree-style menu, and added that to the layout on my main page. My main page has some chrome regions for a top banner and other assorted visual aids, then a left-side area where the menu lives, and a “main” area, where all the real action in the application will happen.

A class I called MenuDispatcher “listens” for events on the menu (e.g. the user clicking something), and does the appropriate action when a certain menu item is clicked.

Here’s the interesting bits from the MenuDispatcher – as you can see, it’s constructed with a reference to the “mainArea” layout when it’s first initialized.

public class MenuDispatcher implements
        ItemClickEvent.ItemClickListener {

    private VerticalLayout mainArea;

    public MenuDispatcher(VerticalLayout mainArea) {
        this.mainArea = mainArea;
    }

    public void itemClick(ItemClickEvent event) {
        if (event.getButton() ==
               ItemClickEvent.BUTTON_LEFT) {
            String selected = event.getItemId().toString();

            if (selected.equals("create dealer")) {
                createDealer();
            } else if (selected.equals("edit dealers")) {
                editDealer();
            } else {
                System.err.println("Selected "
                    + event.getItemId());
            }
        }
    }

    private void createDealer() {
        Component form = new CreateDealerForm();
        mainArea.removeAllComponents();
        mainArea.addComponent(form);
    }

    private void editDealer() {
        // not implemented yet
    }
}

Again, this code can be made more sophisticated – I’m thinking a little Spring magic could make adding new forms and such very convenient, but this gets us started.

Submitting the Form
The “CreateDealerForm” object referred to in the Dispatcher then builds a Vaadin “Form” class, much like the example form built in the “Book of Vaadin”. The only interesting part of my form was that I chose not to back it with a Bean class, which is an option with Vaadin forms. If you back with a bean, then you essentially bind the form to the bean, and the form fields are generated for you from the bean.

If I wanted to then send the corresponding bean to the back-end service, then binding the bean to the form would be a good way to go. Instead, however, I don’t want my back-end services to be sharing beans with my UI application at all. I’ll explain why and how later on in more detail.

The interesting part of my form, then, is how I handle the submission of the form:

Assuming I have a button on my form, defined like so:

Button okbutton = new Button("Submit",
    dealerForm, "commit");

I can add a listener to this button (again, using Vaadin's magic ability to route the Ajax events to my Java code) like this:

okbutton.addListener(new Button.ClickListener() {
    public void buttonClick(Button.ClickEvent event) {

        Map parameters = new HashMap();
        for (Object id : dealerForm.getItemPropertyIds()) {
            Field field = dealerForm.getField(id);
            parameters.put(id.toString(), field.getValue());
        }
        System.err.println("Submitting " + parameters);
        ServiceClient.call("dealer", "save", parameters, null, null);
        getWindow().showNotification("Dealer Saved");
    }
});

I’m using an anonymous inner class to listen to the event, and the “buttonClick” method gets called when the user says “Submit”.

The next steps are where the form meets the back-end service: First, I iterate over the form and build a map containing all the field values. The map is keyed with a string, for the name of the field or property, and the value in the map is the value the user entered. Note that these values are already typed – e.g. a checkbox can have a boolean, a TextField can have a string, a calendar field can have a java.util.Date. We retain these types, and wrap everything up in a map.
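For illustration, here is the shape of that map sketched in Scala (the field names and values are invented, not taken from the real form):

```scala
// Invented example values; the point is String keys with type-preserving
// values, exactly what iterating the form's fields produces.
val parameters: Map[String, Any] = Map(
  "dealerName" -> "Acme Motors",          // a TextField yields a String
  "active"     -> true,                   // a CheckBox yields a Boolean
  "signedOn"   -> new java.util.Date(0L)  // a date field yields a java.util.Date
)
println(parameters("active"))   // true
```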

Now (after a quick println so I can see what's going on), I call a static method on a class called ServiceClient, sending along the name of the service I want to call, the operation on that service, and the parameter map I just built from my form.

The last line just shows a nice “fade away” non-modal notification to the user that the dealer he entered has been saved (assuming the call to ServiceClient didn’t throw an exception, which we’re assuming for the moment for simplicity).

So, now we have our “call” method to consider, where the map of values we built in our Vaadin front-end gets handed off to the appropriate back-end service.

Calling the Service

The only job of the ServiceClient object is to route a call from somewhere in our Vaadin app (in this case, the submission of a form) to the proper back-end service, and potentially return a response.

We identify our back-end services by a simple string, the “service name” (our first argument). The second argument to call tells us the “operation” we want from this service, as a single service can normally perform several different operations. For instance, our dealer service might have the ability to save a dealer, get a list of dealers, find a specific dealer, and so forth.

In Java parlance, a service operation might correspond to a method on the service interface, but we’re not coupling that tightly at this point – our service, after all, might not be written in Java for all we know at this point, and I prefer to keep it that way.

This is the “loose joint” in the mechanism between the UI and the back-end. To keep the joint loose, we don’t send a predefined Bean class to the back-end service to define the parameters to the service operation, we send a map, where that map contains a set of key/value pairs. The keys are always Strings, but the values can be any type – possibly even another map, for instance, which would allow us to express quite complex structures if required, in a type-independent fashion.
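A minimal Scala sketch of this dispatch-by-name idea (service and operation names resolved from Strings, parameters as a type-independent map; all names below are assumptions, not the project's real interfaces):

```scala
type Params = Map[String, Any]

// One "operation" is just a name mapped to a function over a parameter map.
trait Service {
  def call(operation: String, params: Params): List[Params]
}

// A dummy "dealer" service standing in for a real back-end bundle.
val services: Map[String, Service] = Map(
  "dealer" -> new Service {
    def call(operation: String, params: Params): List[Params] = operation match {
      case "save" => List(Map("status" -> "saved", "name" -> params("name")))
      case other  => sys.error("Unknown operation: " + other)
    }
  }
)

// The "loose joint": the caller knows only Strings and maps, never bean types.
def dispatch(serviceName: String, operation: String, params: Params): List[Params] =
  services(serviceName).call(operation, params)

val response = dispatch("dealer", "save", Map("name" -> "Acme", "active" -> true))
```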

Let’s have a look at ServiceClient:

public class ServiceClient implements
      BundleActivator {

    private static DispatchService
       dispatchService = null;

    public void start(BundleContext context)
        throws Exception {
            ServiceReference reference =
                context.getServiceReference(
                    DispatchService.class.getName());
            if (reference == null)
                throw new RuntimeException(
                  "Cannot find the dispatch service");
            dispatchService = (DispatchService)
                context.getService(reference);
            if (dispatchService == null)
                throw new RuntimeException(
                    "Didn't find dispatch service");
    }

    public void stop(BundleContext context)
        throws Exception {
        System.out.println("Stopping bundle");
    }

    public static List<Map> call(String serviceName,
           String operation, Map params, String versionPattern,
           String securityToken) throws Exception {
        if (dispatchService == null) {
            System.out.println("No OSGi dispatch service available"
                + " - using dummy static data");
            return StaticDataService.call(serviceName, operation,
                params, versionPattern, securityToken);
        }
        return dispatchService.call(serviceName, operation,
            params, versionPattern, securityToken);
    }
}
Let’s go through that class piece by piece. First, you’ll notice that the class implements the BundleActivator interface – this tells the OSGi container that it is possible to call this class when the OSGi bundle containing it is started and stopped. During the start process, you can have the class receive a reference to the BundleContext. This is somewhat analogous to the Spring ApplicationContext, in that it gives our class access to the other services also deployed in OSGi. The Spring DM framework lets you do this kind of thing more declaratively, but it’s good to know how the low-level works before engaging the autopilot, I always find.

In order to allow BundleActivator to be found, we need another couple of things on our classpath, so we add this to our POM:
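One plausible form of that dependency (the groupId and version are assumptions; any artifact providing the org.osgi.framework classes will do):

```xml
<!-- groupId/version are illustrative; this artifact supplies BundleActivator -->
<dependency>
    <groupId>org.apache.felix</groupId>
    <artifactId>org.osgi.core</artifactId>
    <version>1.2.0</version>
    <scope>provided</scope>
</dependency>
```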



This defines the BundleActivator interface and other OSGi requirements which we use.

As you can see, we use our reference to the OSGi context to get a ServiceReference to an interface called “DispatchService”. We’ll examine dispatch service in detail in my next posting, but for now you can see we hold on to our reference as an instance variable.

When the call to the “call” method happens, our DispatchService reference is already available if we’re running inside an OSGi container, all wired up and ready to go. To give us flexibility, though, we also allow for the case where the dispatch service is null, meaning we’re not running inside an OSGi container.

Instead of crashing and burning, however, we simply redirect our call to a “StaticDataService” class, which does just what you might expect. For every call it understands, it returns a static “canned” response. This allows us to build and test our UI without actually having written any of the back-end services, and to continue to run our Vaadin app with a simple “mvn jetty:run”, when all we’re working on is look and feel, or logic that only affects the UI.
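The fallback can be sketched like this (a simplified Scala stand-in, not the article's actual classes):

```scala
// None here plays the role of "no DispatchService was injected by OSGi".
var liveDispatch: Option[(String, String) => String] = None

// Stand-in for StaticDataService: a canned answer for every understood call.
def staticData(service: String, operation: String): String =
  "canned response for " + service + "." + operation

def call(service: String, operation: String): String = liveDispatch match {
  case Some(dispatch) => dispatch(service, operation)
  case None           => staticData(service, operation)
}
```

Under "mvn jetty:run" the live reference stays None and every call answers from static data; inside the container, the activator would fill it in.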

This means my “cycle time” to test a change in the UI code is a matter of seconds – when I do a “mvn jetty:run” my code is compiled and up and running in my browser in about 5 seconds, and that’s on my five-year-old MacBook laptop, so there’s no penalty for not having a dynamic language in the mix here, from my point of view.

If DispatchService is not null, however, then we’re running “for real” inside an OSGi container, so we use our stored reference to the dispatch service to “forward on” our call. The dispatch service works its magic, which we’ll examine in a later post, and returns a List of Map objects with our response. This list might only contain one Map, of course, if the service was a simple one.

The response is then returned to the caller elsewhere in the Vaadin application, to do whatever is necessary from a UI point of view – perhaps populate a form or table with the response data.

As we’ll see in detail in my next post, the dispatch service in this case acts as a “barrier” between the UI-oriented code in our Vaadin app and our domain-specific application logic contained in our services. It is responsible for mapping our generic map of parameters into whatever domain beans are used by our back-end services, and for figuring out which of those services should be called, and which operation on that service is required. Those services then return whatever domain-specific objects they return, and the dispatcher grinds them into non-type-bound maps, or lists of maps, if there is a whole series of returns.
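The "grinding" step can be pictured like this (Dealer and its fields are invented for illustration):

```scala
// A stand-in domain bean of the kind a back-end service might return.
case class Dealer(name: String, active: Boolean)

// The dispatcher flattens each bean into a non-type-bound map for the UI.
def toMap(d: Dealer): Map[String, Any] =
  Map("name" -> d.name, "active" -> d.active)

val rows: List[Map[String, Any]] =
  List(Dealer("Acme", true), Dealer("Bravo", false)).map(toMap)
```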

This means our Vaadin app only ever has to talk to one thing: the dispatcher. We only change our Vaadin code for one reason: to change the UI, never in response to a change of service code, even if the beans and classes of that service change significantly.

Next time we’ll look at the dispatch service and the first of our application-specific domain-aware services, and see how they come together.

By: Mike Nash

First Look at Vaadin

November 22, 2009 8 comments

I’ve had the chance over a recent weekend to have a first crack at a web framework called Vaadin.

I was originally browsing for news about the latest release of Google’s GWT framework when I stumbled on a reference to Vaadin, and decided to go take a look. What I found intrigued me, and I decided to take it for a test drive, as I was down sick for a couple of days with a laptop nearby… My back became annoyed with me, but it was worth it, I think.

First Look
First off, the practicalities: Vaadin is open source, and with a reasonable license, the Apache License. The essential bits of Vaadin are contained in a single JAR, and it’s both Ant and Maven friendly right out of the box.

The next thing that struck me about Vaadin was the documentation. The first unusual thing about its documentation was the fact of its existence, as open source projects are downright notorious for poor documentation. Vaadin is a pleasant exception, with tons of examples, a well-organized API doc, in the usual JavaDoc format, and even the “Book of Vaadin”, an entire PDF book (also available in hardcopy) that takes you through Vaadin in enough detail to be immediately productive.

Given that surprisingly pleasant start, I dug deeper, creating a little app of my own.

Just the Java, Ma’am
The main thing that kept me interested in Vaadin once I started digging further was that it’s pure Java. Many frameworks talk about writing your UI work in Java, such as Wicket, but there’s still a corresponding template and some wiring that happens to put the code and the template together. Not so with Vaadin.

When they say “just Java”, they mean it – your entire UI layer is coded in Java, plain and simple. No templates, no tag libraries, no Javascript, no ‘nuthin. It’s reminiscent of the Echo framework, except in Vaadin’s case the Javascript library that your code automatically produces is Google’s GWT, instead of Echo’s own Core.JS library.

Unlike GWT, though, the Vaadin approach doesn’t bind you to any specific source code – it’s just a binary jar you put on your classpath.

The only thing in my sample app, other than 2 Java files, was a web.xml and a css stylesheet, both of which were only a few lines long. And this was no “Hello, World”, either, but a rich AJAX webapp with a tree menu, fancy non-modal “fading” notifications, images, complex layouts, and a form with built-in validation. And it took maybe 4 hours of total work to produce – and that was from a standing start, as I’d never heard of Vaadin before last Thursday. Not bad, not bad at all.

I found I was able to get a very capable little webapp up and running with no need to invent my own components, even though I had trees and sliders and menus and other assorted goodies on the page. It worked in every browser I was able to try it in, which is certainly not the case for my own hand-rolled JavaScript most of the time 🙂

I haven’t yet tried creating my own custom components, but it certainly looks straightforward enough.

I did try linking to external resources, and included non-Vaadin pages in my app, with no difficulties, so it appears that Vaadin plays well with others, and can be introduced into an existing project that uses, for instance, a whack of JSP’s that one might want to obsolete.

I think Vaadin warrants more exploration, and I intend to put it further through its paces in the next few weeks. It appears extremely well-suited to web applications, as opposed to websites with a tiny bit of dynamic stuff in them.

It offers an interesting alternative to some of the patterns I’ve seen for advanced dynamic webapp development so far.

One approach I’ve seen a lot is to divide the duties of creating an app into the “back end” services and the “UI”. Generally the UI is written in either JavaScript, or uses Flex or some other semi-proprietary approach. The “back end” stuff is frequently written to expose its services as REST, then the two are bolted together. The pain point here happens when the two meet, as it’s common and easy to have minor (or major!) misunderstandings between the two teams. This usually results in a lot of to-and-fro to work out the differences before the app comes all the way to life.

The other approach, more common on smaller or resource-strapped teams, is to have the same group responsible for both UI and back-end services. This reduces the thrash in the joints a bit, but doesn’t eliminate it, because the two technologies on the two sides of the app aren’t the same. You can’t test JavaScript the same way you write Java, for instance, and they’re two different languages – one of which (Java) has far better tooling support than the other. IDE support, for instance, is superb for Java, and spotty at best for JavaScript.

With Vaadin, both of these approaches become unnecessary, as it’s the same technology all the way through (at least, what you write is – technically it’s still using JavaScript, but because that’s generated, I don’t count it).

You get to use all of the tools you know and love for the back-end services to write the code for the UI, which you can then unit and functional test to your heart’s content.

The temptation to mix concerns between UI code and back-end service code must still be resisted, of course, but at least that code isn’t buried somewhere in the middle of a JSP page, ready to leap out and bite you later.

Because you’re using dynamic layouts, the app always fits properly on the screen without any extra work, addressing a pet peeve of mine, the “skinny” webapp, restraining itself to the least common denominator of screen size, thus rendering impotent my nice wide monitors.

Just because Vaadin is a Java library doesn’t restrict you to using Java to drive it, however. I made another little webapp where the whole UI was defined in Scala, calling the Vaadin APIs, and it worked like a charm. In some ways, Scala is an even better fit for Vaadin than straight Java, I suspect. I haven’t tried any other JVM compatible language, but I see no reason they wouldn’t work equally well.

Deployment and Development Cycle
As I was building the app with Maven, I added a couple of lines to my POM and was able to say “mvn jetty:run” to get my Vaadin app up and running on my local box in a few seconds. My development cycle was only a few seconds between compile and interactive tests, as I was experimenting with the trial-and-error method.

TDD would be not only possible, but easy in this situation.

I successfully deployed my little Vaadin app to ServiceMix, my OSGi container of choice, without a hitch.

Performance appeared excellent overall, although I haven’t formally tested it with a load-testing tool (yet).

So far, I’m impressed with Vaadin – more impressed than with any web framework I’ve worked with in a number of years, in fact. I’m sure there are some warts in there somewhere, but for the benefits it brings to the table, I suspect they’re easily worth it. I think the advantage to teams that already speak fluent Java is hard to overstate, and the productivity to produce good-looking and functioning webapps is quite remarkable.

Over the next few weeks I’ll push it a bit harder with a complex example application, and see how it stacks up against other web app technologies I’ve worked with in a more realistic scenario.

By: Mike Nash

Specs and Selenium Together

August 25, 2009 1 comment

I recently had the chance to dive into a new project, this one with a rich web interface. In order to put a safety net around the (large and mostly untested) existing code, we’ve started writing acceptance tests with specs.

Once we have our specs written to express what the existing functionality is, we can refactor and work on the codebase in more safety, our tests acting as a “motion detector” to let us know if we’ve broken something, while we write more detailed low-level tests (unit tests) to allow easier refactoring of smaller pieces of the application.

What’s interesting about our latest batch of specs is that they are written to express behaviours as experienced through a web browser – e.g. “when a user goes to this link and clicks this button on the page, he sees something happen”. In order to make this work we’ve paired up specs with Selenium, a well-known web testing framework.

By abstracting out the connection to Selenium into a parent Scala object, we can build a DSL-ish testing language that lets us say things like this:

object AUserChangesLanguages extends BaseSpecification {

  "a public user who visits the site" should beAbleTo {
    "Change their language to French" in {
      select("languageSelect", "value=fr")
      location must include("/fr/")
    }
    "Change their language to German" in {
      select("languageSelect", "value=de")
      location must include("/de/")
    }
    "Change their language to Polish" in {
      select("languageSelect", "value=pl")
      location must include("/pl/")
    }
  }
}
This code simply expresses that as a user selects a language from a drop-down of languages, the page should refresh (via some Javascript on the page) and redirect them to a new URL. The new URL contains the language code, so we can tell we’ve arrived at the right page by the “location must include…” line.
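The verbs in that snippet (select, location) come from the parent specification object. Here is a purely hypothetical, Selenium-free sketch of how such a DSL layer can be shaped (all names are assumptions; the real version delegates to a running Selenium client):

```scala
// Hypothetical DSL base, for illustration only. A real parent specification
// would forward select() to Selenium and read location from the browser;
// here the "browser" is simulated so the shape of the DSL is visible.
trait BrowserDsl {
  private var currentUrl = "http://example.com/en/home"

  // In the real helper this would call the Selenium client and wait for
  // the page's JavaScript to finish redirecting.
  def select(locator: String, optionSpec: String): Unit = {
    val lang = optionSpec.stripPrefix("value=")
    currentUrl = "http://example.com/" + lang + "/home"
  }

  def location: String = currentUrl
}

val browser = new BrowserDsl {}
browser.select("languageSelect", "value=fr")
println(browser.location)   // http://example.com/fr/home
```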

Simple and expressive, these tests can be run with any of your choice of browsers (e.g. Firefox, Safari, or, if you insist, Internet Explorer).

Of course, there’s lots more to testing web pages, and we’re fleshing out our DSL day by day as it needs to express more sophisticated interactions with the application.

We can get elements of the page (via Xpath), make assertions about their values, click on things, type things into fields and submit forms, basically all the operations a user might want to do with a web application.

There are some frustrations, of course. The Xpath implementation on different browsers works a bit differently – well, ok, to be fair, it works on all browsers except Internet Exploder, where it fails in various frustrating ways. We’re working on ways to overcome this that don’t involve having any “if browser == ” kind of logic.

It’s also necessary to start the Selenium RC server before running the specs, but a bit of Ant magic fixes this.

We’ve got these specs running on our TeamCity continuous integration server, using the TeamCity runner supplied with Specs, where we get nicely formatted reports as to what’s pending (e.g. not finished being written yet), what’s passing, and what’s failing.

The specs written with Selenium this way are also a bit slow, as they must actually wait in some cases for the browser (and the underlying app!) to catch up. When run with IE as the browser, they’re more than just a bit slow, in fact…

They are, however, gratifyingly black-box, as they don’t have any connection to the code of the running application at all. For that matter, the application under test can be written in any language at all, and in this case is a combination of J2EE/JSP and some .NET.

There’s a lot of promise in this type of testing, even with its occasional frustrations and limitations, and I suspect we’ll be doing a lot more of it.

By: Mike Nash

Scala Continuous Testing with sbt

July 27, 2009 Comments off

I’ve recently had occasion to start an open source project, and the correct tool for the job appears to be Scala.

So far the project is going well, but the pain has been around the build and IDE support for rapid and convenient development in Scala. Although all three of the major IDEs I’ve worked with recently (Eclipse, IntelliJ IDEA and Netbeans) have plugins for Scala, they are all early releases, and have various degrees of pain associated with them.

I ended up using Netbeans for editing and as a subversion client, then building with Maven when I wanted to compile and/or run tests. Calling Maven from within Netbeans to build a Scala project is still a bit creaky, so I was doing it from a terminal window directly.

This is very inconvenient, for a number of reasons. First, I’m working in a Behaviour-Driven Development mode, using specs as my BDD framework. This means I first write a specification in specs, see it fail, then write the code necessary to make it pass, then write (or extend) the next specification for the next behaviour I want, and so forth.

When I wanted to run a test, I had to flip to a command window and issue a Maven command to build and select the specified test to run, something like this:

mvn -Dtest=foo test

In order to make this work I had to declare my specs as JUnit tests (with the @Test annotation), even though they don’t use anything else from JUnit. This felt like a bit of a hack, albeit a useful one. Another pain point was the startup time for Maven (although I understand there’s a “console” plugin for Maven as well that can perhaps reduce this particular pain).

As I like to tinker with new stuff, I thought I’d make a departure from Maven and give sbt a try. Sbt is a build tool written in Scala that supports building both Scala and Java (and mixed) projects in a very simple way. Unlike Ant, there’s no up-front pain to write a build script, though, as sbt can make reasonable assumptions (which you can override) about where to find your classes and libraries, so you hit the ground running.

In literally seconds I was up and running after following the install instructions on the sbt site. After a bit of experimenting I found the “console” mode in sbt, where you launch sbt and leave it running.

Once in console mode you can either just type “test” every time you want to build and run all tests, or be more selective, and run only the tests that failed last time, or just a single specified test, if you’re working on just one feature. Any of these operations are fast – mostly because sbt is already loaded and running, but also because sbt does a bit less work than Maven does on every build.

Although sbt can be configured to work in conjunction with Ivy or Maven repositories, you can also just drop your dependency libs into the “lib” directory in your project. For open source this is rather nice, as it saves the user of the project the trouble of trying to find them. Even supplying a Maven pom that specifies the repositories from which to download your dependencies is not a guarantee, as repositories change over time. Many is the time I’ve gone to download a dependency (or rather, Maven has gone to do it for me), only to find it’s not where it used to be, is a different name or version, or some other problem causes my build to fail. Like Ant, sbt can avoid this problem by keeping dependencies locally. Unlike Ant, it can also go get the dependencies the first time for you from the same repositories Maven uses – perhaps giving you the best of both worlds in some situations.

Even more interesting was the command

~ test

Which runs all the tests, then waits for any source code to change (test or main code). When it sees a change, it runs all the tests again (after compiling the changes, of course). Poor man’s continuous testing 🙂

Wait, it gets even awesomer! When you say

~ test SomeTest

sbt will wait for any changes, then run just the specified test. This is ideal when you know you’re only working on a specific set of functionality (and therefore affecting only a single test). When sbt is waiting, you can just hit any key to return to the interactive mode, so it’s easy to change it from one of these modes to another.

Other commands in sbt are also very familiar and quick, such as “compile”, which does exactly as you’d expect from the name. “Package” is another good one – it produces a jar artifact, just like the Maven command of the same name. I haven’t yet tried its deploy mechanisms properly, but early results look promising.

I also like the “console” command, which runs the Scala command-line console, but with your project on the classpath, along with all its dependencies. This lets you do ad-hoc statements quickly and easily, and see the results right away. When you’re not sure what’s going on with a failing spec, I’ve found this mode very helpful to experiment. Scala is such an expressive language, I can write a quick experiment in one or two lines of code, see the result (as the Scala console also evaluates expressions by default), and go back to coding and testing, all without re-starting sbt. Quite nice, and somewhat reminiscent of the similar functionality in Rails and “irb” (and JRuby’s equivalent, jirb).
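For instance, a couple of throwaway experiments of the kind the console makes cheap:

```scala
// The console evaluates each expression and echoes its value immediately.
val doubled = List(1, 2, 3).map(_ * 2)   // List(2, 4, 6)
val backwards = "specs".reverse          // "sceps"
```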

There are many other things I’ve found about sbt that I like so far, but those are topics for another post later on….

By: Mike Nash

Scala and Java Together

May 18, 2009 1 comment

Given a need to work towards more scalable systems, and to increase concurrency, many developers working in Java wonder what’s on the other side of the fence.

Languages such as Erlang and Haskell bring the functional programming style to the fore, as they essentially force you to think and write functionally – they simply don’t support (well, at least not easily) doing things in a non-functional fashion.

The mental grinding of gears, however, can be considerable for developers coming from the Java (or C/C++/C#) worlds in many cases, who have become familiar with the Object-oriented paradigm. An excellent solution might be to consider the best of both worlds: Scala.

I had the opportunity this weekend to renew my acquaintance with Scala, and to work on blending it with the old mainstay, Java. Scala runs on the JVM, so as you might expect, it can be combined pretty readily with Java – more cleanly than before, given some new Maven and IDE plugins.

Details and screen shots here…

By: Mike Nash