Spring Basics: Wiring and Injecting Beans with Java Configuration

For new projects, Java configuration is preferred over XML-based configuration. For XML-based configuration, see a future blog post.

The code

You can find the code from this blog post on GitLab.

Dependencies

The only dependency you need to get a Spring container running is spring-context. Add in Maven:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>5.0.5.RELEASE</version>
</dependency>
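With that single dependency on the classpath, a minimal smoke test of the claim looks like this (a sketch; the ContextDemo class and the greeting bean are mine, not part of the example project):

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class ContextDemo {

    @Configuration
    static class Config {
        // A trivial String bean; its id defaults to the method name.
        @Bean
        String greeting() { return "hello from the Spring container"; }
    }

    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                     new AnnotationConfigApplicationContext(Config.class)) {
            System.out.println(ctx.getBean("greeting", String.class));
        }
    }
}
```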

Detecting beans by scanning for components

Create a class (which can have any name, here I chose AppConfig) and annotate it with @Configuration and @ComponentScan:

package com.relentlesscoding.wirebeans;

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan
public class AppConfig {}

By default, @ComponentScan will recursively scan the package in which the AppConfig class is declared. To change this, you can add a package to the value element (@ComponentScan("com.relentlesscoding.wirebeans.beans")) to scan only that package. If it troubles you that this is not type safe and hinders refactoring (it is a simple string, after all), you can also pass Class objects to the basePackageClasses element. Spring will then scan the packages those classes are part of. Some people even recommend creating marker interfaces in each package for this purpose, but I think this clutters up the source code too much.

Three ways to declare beans

Now Spring is able to detect our beans. We can declare beans in three ways:

  • By annotating a class with @Component.
  • By annotating a method with a non-void return type with @Bean in a class annotated with @Configuration.
  • By annotating a method with a non-void return type with @Bean outside of a class annotated with @Configuration (a “lite” bean, see below).

Automatic configuration with @Component

The simplest way to declare a bean is by annotating a class with @Component:

@Component
public class Running implements Habit {
    private final String name;
    private final String description;
    private final List<Streak> streaks;

    // accessors omitted for brevity
}

If we do not specify the value element of @Component, the id of the bean will be the name of the class with its first letter lowercased, in this case running. We can use this id elsewhere to refer to this particular bean, should ambiguities arise.

Explicit configuration in class annotated with @Configuration

If you have control over the beans you are creating, i.e. you are writing the source code, you would always go with automatic configuration by annotating your bean classes with @Component. If you are creating a bean for a class from a library, you can define your beans in your AppConfig class:

@Configuration
@ComponentScan
public class AppConfig {

    @Bean
    public List<Streak> streaks() {
        List<Streak> streaks = new ArrayList<>();
        streaks.add(new PositiveStreak(LocalDate.now()));
        return streaks;
    }

}

Here, we defined a bean of type List<Streak> that we can now inject into any other bean by using the @Autowired annotation (see below).

Lite Beans

Actually, we can declare any method with a non-void return type to be a @Bean. If we declare a bean outside of a configuration class, it will become a “lite bean”. Spring will still manage its lifecycle and scope and we can still autowire the bean into other beans, but when invoking the method directly, it will just be a plain-old Java method invocation without Spring magic. (Normally, Spring would create a proxy around the bean and all invocations would go through the Spring container. This would mean that by default only a single instance of the bean would exist, for example. In “lite” mode, however, the annotated method is just a factory method, and will happily instantiate a new object every time it is called.)

Read more about lite beans here.
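The difference is easy to demonstrate (a sketch, assuming spring-context is on the classpath; all class and bean names here are mine):

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

public class LiteBeanDemo {

    @Configuration
    static class FullMode {
        @Bean
        Object widget() { return new Object(); }

        // Calls to widget() go through the CGLIB proxy,
        // so both invocations return the same singleton bean.
        @Bean
        Boolean sameInstance() { return widget() == widget(); }
    }

    @Component
    static class LiteMode {
        @Bean
        Object widget() { return new Object(); }

        // "Lite" mode: plain Java method calls, a new Object each time.
        @Bean
        Boolean sameInstance() { return widget() == widget(); }
    }

    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext full =
                     new AnnotationConfigApplicationContext(FullMode.class);
             AnnotationConfigApplicationContext lite =
                     new AnnotationConfigApplicationContext(LiteMode.class)) {
            System.out.println("full: " + full.getBean("sameInstance")); // full: true
            System.out.println("lite: " + lite.getBean("sameInstance")); // lite: false
        }
    }
}
```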

Using the declared beans

To use Spring’s dependency injection, you have a couple of options, all of which involve annotating a method or field with @Autowired. By default, a matching bean of the specified type needs to exist in the Spring context or else Spring will throw an exception. To make the injection optional, set the required element of @Autowired to false.

Constructor injection

For mandatory dependencies, you should use constructor injection. “Mandatory” means the bean would not make sense without the bean on which it depends. For example, a HabitService persists Habits to the database. So a DAO or repository would be a mandatory dependency.

@Component
public class HabitService {
    private final HabitRepository habitRepository;

    @Autowired
    public HabitService(HabitRepository habitRepository) {
        this.habitRepository = habitRepository;
    }
    ...
}

Field injection

Field injection is best avoided, because it makes testing (e.g. with mocks) harder to pull off. In the following example we inject a dependency into a private field. If we wanted to mock that dependency in a test, we would have to deal with the access restriction.

@Component
public class HabitService {
    @Autowired private HabitRepository habitRepository;
    ...
}

Setter injection

A third way to inject dependencies is through setter injection. Putting @Autowired on any method with one or more parameters will make Spring look for appropriate bean candidates in the Spring context.

@Component
public class Running implements Habit {
    private final String name = "Running";
    private final String description = "Run 10 km every day";
    private List<Streak> streaks;

    @Autowired
    public void setStreaks(List<Streak> streaks) {
        this.streaks = streaks;
    }
    ...
}

Taking the application for a test run

package com.relentlesscoding.wirebeans;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = AppConfig.class)
public class HabitTest {

    @Autowired
    Habit runningHabit;

    @Test
    public void runningHabitIsNotNull() {
        Assert.assertNotNull(runningHabit);
    }

    @Test
    public void runningHabitHasSingleStreak() {
        Assert.assertEquals(1, runningHabit.getStreaks().size());
    }
}

We can specify the application context by using the @ContextConfiguration annotation and filling in the classes element. The JUnit 4 annotation @RunWith specifies the SpringRunner.class (which is a convenience extension of the longer SpringJUnit4ClassRunner).

@ContextConfiguration is part of the spring-test library:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>5.0.5.RELEASE</version>
    <scope>test</scope>
</dependency>

Use vidir to quickly edit filenames in your editor

If you have installed moreutils (see below), you can type vidir to open up the current working directory in your $EDITOR. You can use all the power of your editor to edit and/or delete filenames and directories. Editing a line will rename the file or directory, deleting a line will remove the file or directory.

The following will list all your JPEG pictures in the current directory in your editor:

$ vidir *.jpeg

vidir is not recursive by default: if you want to recursively edit filenames, you can do:

$ find -type f -name '*.jpeg' | vidir -  # take note of the trailing dash -

Deleting non-empty directories

When trying to delete a non-empty directory, vidir will complain:

/usr/bin/vidir: failed to remove ./non-empty-directory: Directory not empty

We can use find again:

$ ls -1 non-empty-dir
file1.txt
file2.txt
$ find | vidir -
1   ./non-empty-dir
2   ./non-empty-dir/file1.txt
3   ./non-empty-dir/file2.txt

When we delete the lines for all the files in the directory as well as the line for the directory itself, the directory will be removed.

To see what vidir is actually doing, you can pass it the -v or --verbose flag:

$ find | vidir -v -
removed './non-empty-dir/file2.txt'
removed './non-empty-dir/file1.txt'
removed './non-empty-dir'

How to install

On Arch Linux, you can install the moreutils package with sudo pacman -S moreutils. On Debian-based distros, you can run sudo apt install moreutils.

Use pandoc with Pygments to highlight source code

I am someone who has JavaScript disabled by default in his browser (I use uMatrix in Firefox for that). Only when I trust a site and I need to use functionality that truly depends on JavaScript will I turn it on. This hopefully protects me from most of the known and unknown bad stuff out there on the internet. It also makes me appreciate people who go through the trouble of making their webpages work without JavaScript.

Until recently, I used a JavaScript plugin on this blog to format source code. This bothered me, since using JavaScript just to display some source code seems like overkill and makes people have to turn on JavaScript in their browsers just to see the source code formatted nicely. I wanted to do better than that.

The way I normally write my blog posts is this: I start with a Markdown article and then use pandoc to convert it to HTML, which I then copy and paste into WordPress (if there is a better way to do this, please contact me). I noticed pandoc provides a --filter switch where you can specify an executable that transforms the pandoc output. The only problem is, you have to write such a filter yourself. Luckily, I found a GitHub gist that has already figured out how to write one. Here is some Haskell for you:

import Text.Pandoc.Definition
import Text.Pandoc.JSON (toJSONFilter)
import Text.Pandoc.Shared
import Data.Char (toLower)
import System.Process (readProcess)
import System.IO.Unsafe

main = toJSONFilter highlight

highlight :: Block -> Block
highlight (CodeBlock (_, options, _) code) = RawBlock (Format "html") (pygments code options)
highlight x = x

pygments :: String -> [String] -> String
pygments code options
    | length options == 1 = unsafePerformIO $ readProcess "pygmentize" ["-l", map toLower (head options), "-f", "html"] code
    | length options == 2 = unsafePerformIO $ readProcess "pygmentize" ["-l", map toLower (head options), "-O", "linenos=inline", "-f", "html"] code
    | otherwise = "<div class=\"highlight\"><pre>" ++ code ++ "</pre></div>"

Note that this program invokes another program, pygmentize, to do the actual highlighting (pygmentize is part of the Pygments project). So install pygmentize with your favorite package manager, install Haskell if you have not done so already, and then compile pygments.hs with:

$ ghc -dynamic pygments.hs

That’s it! Putting it all together, to create a blog post, I can now do:

$ pandoc -F pygments -f markdown -t html5 -o blogpost.html blogpost.md

I added some CSS that makes use of the Pygments classes and voilà: you can now view this blog without having to worry about a JavaScript cryptocurrency miner hijacking your CPU. You’re welcome.

Remove all files except a few in Bash

$ ls -1
153390909910_first
15339090991_second
15339090992_third
15339090993_fourth
15339090994_fifth
15339090995_sixth
15339090996_seventh
15339090997_eighth
15339090998_nineth
15339090999_tenth
15339091628_do_not_delete
root
root.sql

We want to delete all files that start with a timestamp (seconds since the epoch), except the newest file (15339091628_do_not_delete) and the files root and root.sql. The easiest way to do this is to enable the shell option extglob (“extended globbing”), which allows us to use patterns to include or exclude files from operations:

$ shopt -s extglob
$ rm !(*do_not_delete|root*)

The last command tells Bash to remove all files except the ones that match either of the patterns (everything ending in do_not_delete and everything starting with root). We delimit the patterns with a pipe character |.

Other patterns that are supported by extglob include:

?(pattern-list)
      Matches zero or one occurrence of the given patterns

*(pattern-list)
      Matches zero or more occurrences of the given patterns

+(pattern-list)
      Matches one or more occurrences of the given patterns

@(pattern-list)
      Matches one of the given patterns

!(pattern-list)
      Matches anything except one of the given patterns
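A quick demonstration of a couple of these patterns, in a throwaway directory (the file names are made up):

```shell
cd "$(mktemp -d)"          # scratch directory, so no real files are harmed
shopt -s extglob
touch file.txt file.txt.bak notes.md

echo @(*.txt|*.md)   # matches exactly one pattern each: file.txt notes.md
echo !(*.bak)        # everything except the backup: file.txt notes.md
```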

To disable the extended globbing again:

$ shopt -u extglob

References

To read about all the options that extglob gives you, refer to man bash (search for Pathname Expansion). Searching for shopt in the same manual page will turn up all shell options. To see which shell options are currently enabled for your shell, type shopt -p at the prompt.

tomcat7-maven-plugin: Invalid byte tag in constant pool: 19

I use tomcat7-maven-plugin to spin up a Tomcat 7 container where I can run my web application. When I added dependencies for log4j2 (version 2.11.0) to my project, I got the error:

org.apache.tomcat.util.bcel.classfile.ClassFormatException:
Invalid byte tag in constant pool: 19

Apparently, log4j2 2.11.0 is a multi-release jar, and older versions of Tomcat cannot handle those. So I needed to upgrade the Tomcat version that my Maven plugin runs.
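For context: a multi-release jar (JEP 238) ships alternate class files for newer JVMs under META-INF/versions/ and advertises this in its manifest. Constant-pool tag 19 is CONSTANT_Module, a Java 9 class-file construct, which is exactly what Tomcat's old bundled class parser chokes on. The layout looks roughly like this:

```
META-INF/MANIFEST.MF        →  contains the line: Multi-Release: true
org/apache/logging/...      →  baseline (Java 8) class files
META-INF/versions/9/...     →  Java 9 variants, incl. module-info.class
```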

Solution: Update your Tomcat

But how do you update Tomcat? The plugin's information page shows that it has not been updated for a while: the latest version is 2.2, which runs Tomcat 7.0.47 by default. Maven Central, on the other hand, shows that the latest Tomcat 7 version at the time of writing is 7.0.86. That is the version we want.

Change your pom.xml in the following way:

<project>
  <properties>
    <tomcat.version>7.0.86</tomcat.version>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.tomcat.maven</groupId>
        <artifactId>tomcat7-maven-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <path>/</path>
          <port>7777</port>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>org.apache.tomcat.embed</groupId>
            <artifactId>tomcat-embed-core</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-util</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-coyote</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jdbc</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-dbcp</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-servlet-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jsp-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jasper</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jasper-el</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-el-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-catalina</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-tribes</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-catalina-ha</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-annotations-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-juli</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat.embed</groupId>
            <artifactId>tomcat-embed-logging-juli</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat.embed</groupId>
            <artifactId>tomcat-embed-logging-log4j</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</project>

How to write a custom appender in log4j2?

/* package declaration, imports... */

@Plugin(name = "CustomListAppender",
        category = Core.CATEGORY_NAME,
        elementType = Appender.ELEMENT_TYPE,
        printObject = true)
public final class CustomListAppender extends AbstractAppender {

    // for storing the log events
    private List<LogEvent> events = new ArrayList<>();

    protected CustomListAppender(
            String name,
            Filter filter,
            Layout<? extends Serializable> layout,
            boolean ignoreExceptions) {
        super(name, filter, layout, ignoreExceptions);
    }

    @Override
    public void append(LogEvent event) {
        if (event instanceof MutableLogEvent) {
            events.add(((MutableLogEvent) event).createMemento());
        } else {
            events.add(event);
        }
    }

    public List<LogEvent> getEvents() {
        return events;
    }

    @PluginFactory
    public static CustomListAppender createAppender(
            @PluginAttribute("name") String name,
            @PluginElement("Layout") Layout<? extends Serializable> layout,
            @PluginElement("Filter") Filter filter) {
        if (name == null) {
            LOGGER.error("No name provided for CustomListAppender");
            return null;
        }

        if (layout == null) layout = PatternLayout.createDefaultLayout();

        return new CustomListAppender(name, filter, layout, true);
    }
}

Our CustomListAppender extends AbstractAppender, because that implements a lot of the methods from the Appender interface for us that we would otherwise have to implement ourselves.

The @Plugin annotation identifies this class as a plugin that should be picked up by the PluginManager:

  • The name attribute defines the name of the appender that can be used in the configuration.
  • The category attribute should be "Core", because “Core plugins are those that are directly represented by an element in a configuration file, such as an Appender, Layout, Logger or Filter” (source). And we are creating an appender.
  • The elementType attribute defines which type of element in the Core category this plugin should be. In our case, "appender".
  • The printObject attribute defines whether our custom plugin class defines a useful toString() method. We do, because the AbstractAppender class we’re extending is taking care of that for us.

We implement the Appender#append(LogEvent) method to add each event to our events list. If the LogEvent happens to be mutable, we must take care to create an immutable copy of the event, otherwise subsequent log events will overwrite it (we will get a list of, say, three log events that are all referencing the same object). We also add a simple getter method to retrieve all log events.

For the PluginManager to create our custom plugin, it needs a way to instantiate it. log4j2 uses a factory method for that, indicated by the annotation @PluginFactory. An appender contains attributes, such as a name, and other elements, such as layouts and filters. To allow for these, we use the corresponding annotations @PluginAttribute to indicate that a parameter represents an attribute, and @PluginElement to indicate that a parameter represents an element.

To log errors that might occur during this setup, we can make use of the StatusLogger. This logger is available as LOGGER, and is defined in one of the parents of our custom plugin, AbstractLifeCycle. (The level of log messages that should be visible can be adjusted in the <Configuration status="warn" ...> element.)

Configuration

Configuration:
  packages: com.relentlesscoding.logging.plugins
  status: warn
  appenders:
    Console:
      name: STDOUT
    CustomListAppender:
      name: MyVeryOwnListAppender

  Loggers:
    logger:
      -
        name: com.relentlesscoding.logging
        level: info
        AppenderRef:
          ref: MyVeryOwnListAppender
    Root:
      level: error
      AppenderRef:
        ref: STDOUT

The packages attribute on the Configuration element indicates the package that should be scanned by the PluginManager for custom plugins during initialization.

How to use our custom list appender?

private CustomListAppender appender;

@Before
public void setupLogging() {
    LoggerContext context = LoggerContext.getContext(false);
    Configuration configuration = context.getConfiguration();
    appender = configuration.getAppender("MyVeryOwnListAppender");
    appender.getEvents().clear();
}

When we run tests now, we are able to see all logged events by calling appender.getEvents(). Before each test, we take care to clear the list of the previous log statements.

Unit test log4j2 log output

Sometimes you want to test if certain log output gets generated when certain events happen in your application. Here is how I unit test that using log4j2 (version 2.11.0).

Use LoggerContextRule to get to your ListAppender quickly

If you are using JUnit 4, then the quickest solution would be one that is used by log4j2 itself:

import org.apache.logging.log4j.junit.LoggerContextRule;
/* other imports */

public class LogEventTest {
    private static ListAppender appender;

    @ClassRule
    public static LoggerContextRule init = new LoggerContextRule("log4j2-test.yaml");

    @BeforeClass
    public static void setupLogging() {
        appender = init.getListAppender("List");
    }

    @Before
    public void clearAppender() {
        appender.clear();
    }

    @Test
    public void someMethodShouldLogAnError() {
        // setup test and invoke logic
        List<LogEvent> logEvents = appender.getEvents();
        List<String> errors = logEvents.stream()
                .filter(event -> event.getLevel().equals(Level.ERROR))
                .map(event -> event.getMessage().getFormattedMessage())
                .collect(Collectors.toList());

        // we logged at least one event of level error
        assertThat(errors.size(), is(greaterThanOrEqualTo(1)));

        // log event message should contain "wrong" for example
        assertThat(errors, everyItem(containsString("wrong")));
    }
}

The LoggerContextRule provides methods that come in handy while testing. Here we use the getListAppender(...) method to get access to an appender that stores all log events in a list. Before each test, we clear the list, so we have a clean slate for new log events. The test invokes the code under test, requests the log events from the appender, and filters them so that only the error events are left. Then we assert that at least one error log message was captured and that it contains the word “wrong”.

Use the LoggerContext

Instead of using the class rule (which lets you conveniently pass it the file name of the configuration), you could also use the LoggerContext:

@BeforeClass
public static void setupLogging() {
    LoggerContext context = LoggerContext.getContext(false);
    Logger logger = context.getLogger("com.relentlesscoding");
    appender = (ListAppender) logger.getAppenders().get("List");
}

This might be your only option if you are working with JUnit 5 (and eventually you will want to migrate to that). In JUnit 5 we can no longer use LoggerContextRule, because @Rules no longer exist (they were replaced with an extension mechanism that works differently, and log4j2 does not currently provide such an extension).

Create a working configuration

To get the examples working, we need to define an appender called “List” and a logger in our log4j2 configuration.

log4j2-test.yaml

Configuration:
  status: warn
  name: TestConfig
  appenders:
    Console:
      name: STDOUT
    List:
      name: List

  Loggers:
    logger:
      -
        name: com.relentlesscoding
        AppenderRef:
          ref: List
    Root:
      level: info
      AppenderRef:
        ref: STDOUT

This configuration will send log events occurring in the package com.relentlesscoding and sub-packages to the appender with the name “List” (which is of type ListAppender). This configuration is defined in YAML, but you can use XML, JSON or the properties format as well.
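For reference, a sketch of (roughly) the same configuration in XML, following the conventions of the log4j2 manual (untested here):

```xml
<Configuration status="warn" name="TestConfig">
  <Appenders>
    <Console name="STDOUT"/>
    <List name="List"/>
  </Appenders>
  <Loggers>
    <Logger name="com.relentlesscoding">
      <AppenderRef ref="List"/>
    </Logger>
    <Root level="info">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>
```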

Maven dependencies

To get the LoggerContextRule in JUnit 4 working, you need the following:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.0</version>
    <type>test-jar</type>
</dependency>

To get the YAML log4j2 configuration working, you need:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.5</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
    <version>2.9.5</version>
</dependency>

JMockit fakes

If you are working with microservices, you may find that you have a lot of dependencies on other services. If you want your tests to be autonomous, that is, to run in isolation from those dependencies, you can either use a tool like WireMock, or you can use a mocking framework.

Recently, I came across JMockit, a mocking framework for Java. To mock external dependencies, you can use so-called “fakes”.

Say you use a service that knows the credentials of the customers of your e-store, and you make requests to this service by using a driver provided by this service, say CredentialsHttpService. In your development environment (and Jenkins), you don’t have access to a running service. A solution to this would be a fake implementation of CredentialsHttpService, where we would mock the methods that we actually call from our code.

public class CredentialsHttpService {
    ...
    public Optional<CustomerAccount> getCustomerAccount(String id) {
        // does HTTP request to service
    }
    ...
}

In our test code, we can now implement the fake:

public class ServiceTest {
    @Tested
    Service service;

    @Test
    void test() {
        new MockUp<CredentialsHttpService>() {
            final AtomicInteger counter = new AtomicInteger();

            @Mock
            Optional<CustomerAccount> getCustomerAccount(String id) {
                return Optional.of(CustomerAccount.builder()
                                        .accountNumber(counter.incrementAndGet())
                                        .build());
            }
        };

        service.doFoo();  // eventually invokes our fake
                          // getCustomerAccount() implementation
    }
}

The class to be faked is the type parameter of the MockUp class. The fake CredentialsHttpService will only mock the methods that are annotated with @Mock. All other methods of the faked class will have their real implementation. Mocked methods are not restricted by access modifiers: the method may have a private, protected, package-private or public access modifier, as long as the method name and number of parameters and parameter types are the same.

There are a lot of other things that fakes can do in JMockit; see the documentation.

Bash’ magic space

What does the “magic space” do?

Given the following:

$ find -wholename '*/path/to/file' -print -quit
$ man rm
$ rm -fv !-2:2

In the last line, it would be nice to get immediate feedback that we are indeed going to delete the second argument of the command two entries back. If you enable Bash's so-called “magic space”, history expansion takes place right away when you type a space after !-2:2:

$ rm -fv '*/path/to/file'

How to enable the magic space?

Put the following in your ~/.inputrc:

$if Bash
    Space: magic-space
$endif

Start a new session, or use bind -f ~/.inputrc to put the changes in effect immediately.

Other ways to achieve the same

You could also enable shopt -s histverify, which will perform the history expansion and give you another opportunity to modify the command before executing it. This requires you to press enter, though.

Unit test Grails GORM’s formulas

The code for this post is part of my PomoTimer project and can be found on GitHub.

The domain class

We have a domain class Project that models a project that one is working on when doing a particular work session. It records the name of the project (“Writing a blog post”), a status (“active”, “completed”), a creation time, the total time spent on this project and the user the project belongs to.

The total time is a derived field: it calculates the time spent on the work sessions that belong to this project. It should only calculate the work sessions that have status “done”. Derived fields can be defined in Grails by so-called formulas.

package com.relentlesscoding.pomotimer

class Project {
    String name
    ProjectStatus status
    Date creationtime
    Integer totaltime
    User user

    static constraints = {
        name blank: false, nullable: false, maxSize: 100, unique: true
        status blank: true, nullable: false, display: true
        creationtime blank: false, nullable: false
        totaltime blank: false, nullable: false, min: 0
        user blank: false, nullable: false, display: true
    }

    static mapping = {
        totaltime formula: '(select ifnull(sum(select d.seconds from Duration d where ws.duration_id = d.id), 0) from Work_Session ws where ws.project_id = id and ws.status_id in (select st.id from Work_Session_Status st where st.name = \'done\'))'
    }

    String toString() { return name }
}

Testing the formula

So how do we test a formula? Initially, I tried to write integration tests (grails create-integration-test <className>, which creates a class with the grails.testing.mixin.Integration and grails.transaction.Rollback annotations), but these suffer from the limitation that each feature method starts a new transaction, and formulas are not updated until after the transaction is committed (which never happens, because the transaction is always rolled back). This effectively makes it impossible for an integration test to check whether a formula does what it is supposed to do.

The solution is to write a test specification that extends HibernateSpec, which allows us to fully use Hibernate in our unit tests.

package com.relentlesscoding.pomotimer

import grails.test.hibernate.HibernateSpec

class ProjectTotaltimeSpec extends HibernateSpec {

    def 'adding a work session should increase total time'() {
        given: 'a new project'
        def projectStatus = new ProjectStatus(name: 'foo', description: 'bar')
        def user = new User(firstName: 'foo',
                            lastName: 'bar',
                            userName: 'baz',
                            email: 'q@w.com',
                            password: 'b' * 10)
        def project = new Project(name: 'a new project',
                                  status: projectStatus,
                                  creationtime: new Date(),
                                  totaltime: 0,
                                  user: user)

        expect: 'total time of the new project is 0 seconds'
        project.totaltime == 0

        when: 'adding a completed work session to the new project'
        def duration = new Duration(seconds: 987)
        def done = new WorkSessionStatus(name: 'done', description: 'done')
        new WorkSession(starttime: new Date(),
                        duration: duration,
                        project: project,
                        status: done)
                .save(failOnError: true, flush: true)
        // NOTE: a refresh is required to update the domain object
        project.refresh()

        then: 'total time is equal to duration of completed work session'
        project.totaltime == 987
    }
}