Grokking Time Zones in Oracle’s DATE and TIMESTAMP

Intro

When I was tasked with putting some data in BigQuery, Google Cloud’s data warehouse implementation, I had to look at the data we currently store in our local Oracle database. I found that we were using DATE for fields that store “moments in time”. Normally, I would expect Unix timestamps here, or some other data type that contains the time zone.

Naturally, I complained about this, but people were quick to point out that it doesn’t matter, because you should interpret a DATE in the time zone of the database. Oracle, it turns out, tracks two time zones: the time zone of the database itself:

SELECT dbtimezone
  FROM dual;
-- Europe/Amsterdam

and the time zone of the client session:

SELECT sessiontimezone
  FROM dual;
-- Europe/Moscow

(The former can be changed with ALTER DATABASE SET time_zone, the latter with ALTER SESSION SET time_zone.)

I was warned that Oracle might change query results based on either of these time zones, meaning that when you insert a date- or time-related value, you might get back an altered representation depending on the time zone of the database or of the client’s session.

This did not put my mind at ease at all. So, let’s take a look at what some of the common types relating to date and time are in Oracle, and when we should be careful with the query results.

DATEs

A DATE is a calendar date with a wallclock time without a time zone. If you put 2020-02-20 09:11:43 in the database, you will always retrieve the same exact datetime, no matter whether you’re in Amsterdam or New York. (It is the equivalent of a java.time.LocalDateTime, if that’s your thing.)

SELECT sessiontimezone
  FROM dual;
-- Europe/Amsterdam

SELECT def_time
  FROM users
 WHERE username = 'foo@example.com';
-- 2020-02-20 09:04:50

ALTER SESSION SET time_zone = 'UTC';

SELECT def_time
  FROM users
 WHERE username = 'foo@example.com';
-- 2020-02-20 09:04:50

Plain TIMESTAMPs

A TIMESTAMP (without a time zone) is the same beast as a DATE, except that it also stores fractional seconds (up to nine digits of precision, six by default), whereas a DATE is rounded to the nearest second.

TIMESTAMP WITH LOCAL TIME ZONE

Say, you store the value 2020-02-20 09:11:43 +01:00. A client with a sessiontimezone of Europe/Amsterdam in the winter will retrieve this exact value whether or not they retrieve it from an Amsterdam, Sydney or New York database. Another client with a session time zone that is set to America/New_York, however, will get the value 2020-02-20 03:11:43 -05:00.

So: LOCAL means it adjusts the retrieved value to your session time zone.

ALTER SESSION SET time_zone = 'Europe/Moscow';  -- offset +03:00

SELECT to_char("ttz", 'yyyy-MM-dd HH24:MI:SS')
FROM (-- timestamp with local, New York time zone
      SELECT CAST(
              to_timestamp_tz('2020-02-20 09:11:43 -05:00',
                              'yyyy-MM-dd HH24:MI:SS TZH:TZM')
              AS TIMESTAMP WITH LOCAL TIME ZONE) AS "ttz"
      FROM dual);
-- 2020-02-20 17:11:43

TIMESTAMP WITH TIME ZONE

This data type stores the time zone together with the datetime and displays it independently from the session time zone of the client.

If we store a value 2020-02-20 09:11:43 +01:00 in Amsterdam and copy this to a database in New York, this is the value that is returned to clients in both Amsterdam and New York regardless of their session time zone.

Conclusion

DATEs and plain TIMESTAMPs don’t store a time zone and queries will always return the exact same value that was inserted regardless of dbtimezone or sessiontimezone. This makes them safe choices if you somehow don’t care about time zones.

TIMESTAMP WITH LOCAL TIME ZONE and TIMESTAMP WITH TIME ZONE do store time zones. A TIMESTAMP WITH LOCAL TIME ZONE will be represented differently depending on your session time zone. A TIMESTAMP WITH TIME ZONE will not be represented differently; it will just include an offset.

You might now be tempted to always use TIMESTAMP WITH TIME ZONE, but apparently it has problems of its own.

Falling back to DATEs, however, is problematic because of Daylight Saving Time (DST) in the fall. When you store the current datetime in the fall, and you go through the dance of setting all clocks back from 3:00 to 2:00, you have no way of knowing at what exact point in time the row was inserted. This is bad if you rely on this for anything crucial, such as forensics (“so the hacker copied the entire database at 2:30, but was that before or after we reset our clocks?”).
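This ambiguity is easy to demonstrate with java.time (a sketch of my own, not taken from the Oracle discussion above; Europe/Amsterdam set its clocks back from 03:00 to 02:00 on 25 October 2020):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstOverlap {
    public static void main(String[] args) {
        // 2020-10-25 02:30 happened twice in Europe/Amsterdam:
        // once at offset +02:00 (summer time) and once at +01:00 (winter time)
        LocalDateTime ambiguous = LocalDateTime.of(2020, 10, 25, 2, 30);
        ZonedDateTime zdt = ambiguous.atZone(ZoneId.of("Europe/Amsterdam"));

        Instant first = zdt.withEarlierOffsetAtOverlap().toInstant();
        Instant second = zdt.withLaterOffsetAtOverlap().toInstant();

        System.out.println(first);   // 2020-10-25T00:30:00Z
        System.out.println(second);  // 2020-10-25T01:30:00Z
    }
}
```

The same wallclock value maps to two points on the timeline, one hour apart; a DATE cannot tell you which one was meant.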

Using Oracle DATEs in Java

As for me, having to deal with DATEs without an explicit time zone, this means that the application inserting/retrieving these datetimes will need to decide how to interpret them. In my Java application, I will map them to java.time.LocalDateTime, add a time zone and convert them to java.time.Instants:

Instant defTimeInstant = user.getDefTime()
        .atZone(ZoneId.of("Europe/Moscow"))
        .toInstant();

user.getDefTime() returns a LocalDateTime indicating when the user was created. We can give a LocalDateTime a time zone by invoking atZone(ZoneId) on it, returning a ZonedDateTime. Once we have converted the datetime to an actual point on the timeline (no longer being some kind of abstract datetime without a time zone), we can convert it to an Instant by invoking toInstant() on the ZonedDateTime.
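Put together as a runnable sketch (the LocalDateTime literal stands in for the value user.getDefTime() would return; the class name is made up for this example):

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class DefTimeConversion {
    public static void main(String[] args) {
        // the DATE value from the query example above, without a time zone
        LocalDateTime defTime = LocalDateTime.of(2020, 2, 20, 9, 4, 50);

        // interpret the wallclock time as Moscow time (+03:00, no DST)
        Instant defTimeInstant = defTime
                .atZone(ZoneId.of("Europe/Moscow"))
                .toInstant();

        System.out.println(defTimeInstant);  // 2020-02-20T06:04:50Z
    }
}
```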

Sources

Oracle documentation on datetime data types.

tomcat7-maven-plugin: Invalid byte tag in constant pool: 19

I use tomcat7-maven-plugin to spin up a Tomcat 7 container where I can run my web application. When I added dependencies for log4j2 (version 2.11.0) to my project, I got the error:

org.apache.tomcat.util.bcel.classfile.ClassFormatException:
Invalid byte tag in constant pool: 19

Apparently, log4j2 is packaged as a multi-release jar and older versions of Tomcat can’t handle that. So I needed to upgrade the version of Tomcat that my Maven plugin runs.

Solution: Update your Tomcat

But how do you update your Tomcat? The plugin’s information page shows that it hasn’t been updated in a while: the latest release is 2.2, which runs Tomcat 7.0.47 by default. Maven Central, on the other hand, shows that the latest Tomcat 7 version at the moment of this writing is 7.0.86. That’s the version we want.

Change your pom.xml in the following way:

<project>
  <properties>
    <tomcat.version>7.0.86</tomcat.version>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.tomcat.maven</groupId>
        <artifactId>tomcat7-maven-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <path>/</path>
          <port>7777</port>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>org.apache.tomcat.embed</groupId>
            <artifactId>tomcat-embed-core</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-util</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-coyote</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jdbc</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-dbcp</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-servlet-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jsp-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jasper</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jasper-el</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-el-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-catalina</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-tribes</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-catalina-ha</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-annotations-api</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-juli</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat.embed</groupId>
            <artifactId>tomcat-embed-logging-juli</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
          <dependency>
            <groupId>org.apache.tomcat.embed</groupId>
            <artifactId>tomcat-embed-logging-log4j</artifactId>
            <version>${tomcat.version}</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</project>

How to write a custom appender in log4j2?

/* package declaration, imports... */

@Plugin(name = "CustomListAppender",
        category = Core.CATEGORY_NAME,
        elementType = Appender.ELEMENT_TYPE,
        printObject = true)
public final class CustomListAppender extends AbstractAppender {

    // for storing the log events
    private List<LogEvent> events = new ArrayList<>();

    protected CustomListAppender(
            String name,
            Filter filter,
            Layout<? extends Serializable> layout,
            boolean ignoreExceptions) {
        super(name, filter, layout, ignoreExceptions);
    }

    @Override
    public void append(LogEvent event) {
        if (event instanceof MutableLogEvent) {
            events.add(((MutableLogEvent) event).createMemento());
        } else {
            events.add(event);
        }
    }

    public List<LogEvent> getEvents() {
        return events;
    }

    @PluginFactory
    public static CustomListAppender createAppender(
            @PluginAttribute("name") String name,
            @PluginElement("Layout") Layout<? extends Serializable> layout,
            @PluginElement("Filter") Filter filter) {
        if (name == null) {
            LOGGER.error("No name provided for CustomListAppender");
            return null;
        }

        if (layout == null) layout = PatternLayout.createDefaultLayout();

        return new CustomListAppender(name, filter, layout, true);
    }
}

Our CustomListAppender extends AbstractAppender, because that class already implements many of the methods of the Appender interface that we would otherwise have to implement ourselves.

The @Plugin annotation identifies this class as a plugin that should be picked up by the PluginManager:

  • The name attribute defines the name of the appender that can be used in the configuration.
  • The category attribute should be "Core", because “Core plugins are those that are directly represented by an element in a configuration file, such as an Appender, Layout, Logger or Filter” (source). And we are creating an appender.
  • The elementType attribute defines which type of element in the Core category this plugin should be. In our case, "appender".
  • The printObject attribute defines whether our plugin class has a useful toString() implementation. Ours does, because the AbstractAppender class we’re extending takes care of that for us.

We implement the Appender#append(LogEvent) method to add each event to our events list. If the LogEvent happens to be mutable, we must take care to create an immutable copy of the event, otherwise subsequent log events will overwrite it (we will get a list of, say, three log events that are all referencing the same object). We also add a simple getter method to retrieve all log events.

For the PluginManager to create our custom plugin, it needs a way to instantiate it. log4j2 uses a factory method for that, indicated by the annotation @PluginFactory. An appender contains attributes, such as a name, and other elements, such as layouts and filters. To allow for these, we use the corresponding annotations @PluginAttribute to indicate that a parameter represents an attribute, and @PluginElement to indicate that a parameter represents an element.

To log errors that might occur during this setup, we can make use of the StatusLogger. This logger is available as LOGGER, and is defined in one of the parents of our custom plugin, AbstractLifeCycle. (The level of log messages that should be visible can be adjusted in the <Configuration status="warn" ...> element.)

Configuration

Configuration:
  packages: com.relentlesscoding.logging.plugins
  status: warn
  appenders:
    Console:
      name: STDOUT
    CustomListAppender:
      name: MyVeryOwnListAppender

  Loggers:
    logger:
      -
        name: com.relentlesscoding.logging
        level: info
        AppenderRef:
          ref: MyVeryOwnListAppender
    Root:
      level: error
      AppenderRef:
        ref: STDOUT

The packages attribute on the Configuration element indicates the package that should be scanned by the PluginManager for custom plugins during initialization.

How to use our custom list appender?

private CustomListAppender appender;

@Before
public void setupLogging() {
    LoggerContext context = LoggerContext.getContext(false);
    Configuration configuration = context.getConfiguration();
    appender = configuration.getAppender("MyVeryOwnListAppender");
    appender.getEvents().clear();
}

When we run tests now, we are able to see all logged events by calling appender.getEvents(). Before each test, we take care to clear the list of the previous log statements.

Unit test log4j2 log output

Sometimes you want to test if certain log output gets generated when certain events happen in your application. Here is how I unit test that using log4j2 (version 2.11.0).

Use LoggerContextRule to get to your ListAppender quickly

If you are using JUnit 4, then the quickest solution would be one that is used by log4j2 itself:

import org.apache.logging.log4j.junit.LoggerContextRule;
/* other imports */

public class LogEventTest {
    private static ListAppender appender;

    @ClassRule
    public static LoggerContextRule init = new LoggerContextRule("log4j2-test.yaml");

    @BeforeClass
    public static void setupLogging() {
        appender = init.getListAppender("List");
    }

    @Before
    public void clearAppender() {
        appender.clear();
    }

    @Test
    public void someMethodShouldLogAnError() {
        // setup test and invoke logic
        List<LogEvent> logEvents = appender.getEvents();
        List<String> errors = logEvents.stream()
                .filter(event -> event.getLevel().equals(Level.ERROR))
                .map(event -> event.getMessage().getFormattedMessage())
                .collect(Collectors.toList());

        // we logged at least one event of level error
        assertThat(errors.size(), is(greaterThanOrEqualTo(1)));

        // log event message should contain "wrong" for example
        assertThat(errors, everyItem(containsString("wrong")));
    }
}

The LoggerContextRule provides methods that come in handy while testing. Here we use the getListAppender(...) method to get access to an appender that uses a list to store all log events. Before each test, we clear the list, so we have a clean slate for new log events. The test invokes the code-under-test, requests the log events from the appender and filters them so that only the error-level events are left. Then we assert that at least one error log message was captured and that each captured message contains the word “wrong”.

Use the LoggerContext

Instead of using the class rule (which lets you conveniently pass it the file name of the configuration), you could also use the LoggerContext:

@BeforeClass
public static void setupLogging() {
    LoggerContext context = LoggerContext.getContext(false);
    Logger logger = context.getLogger("com.relentlesscoding");
    appender = (ListAppender) logger.getAppenders().get("List");
}

This might be your only option if you are working with JUnit 5 (and eventually you will want to migrate to that). In JUnit 5 we can’t use LoggerContextRule anymore, because @Rules no longer exist: they were replaced by an extension mechanism that works differently, and log4j2 currently doesn’t provide such an extension.

Create a working configuration

To get the examples working, we need to define an appender called “List” and a logger in our log4j2 configuration.

log4j2-test.yaml

Configuration:
  status: warn
  name: TestConfig
  appenders:
    Console:
      name: STDOUT
    List:
      name: List

  Loggers:
    logger:
      -
        name: com.relentlesscoding
        AppenderRef:
          ref: List
    Root:
      level: info
      AppenderRef:
        ref: STDOUT

This configuration will send log events occurring in the package com.relentlesscoding and sub-packages to the appender with the name “List” (which is of type ListAppender). This configuration is defined in YAML, but you can use XML, JSON or the properties format as well.

Maven dependencies

To get the LoggerContextRule in JUnit 4 working, you need the following:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.0</version>
    <type>test-jar</type>
</dependency>

To get the YAML log4j2 configuration working, you need:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.5</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
    <version>2.9.5</version>
</dependency>

JMockit fakes

If you are working with microservices, you may find that you have a lot of dependencies on other services. If you want your tests to be autonomous, that is, to run in isolation from those dependencies, you can either use a tool like WireMock, or you can use a mocking framework.

Recently, I came across JMockit, a mocking framework for Java. To mock external dependencies, you can use so-called “fakes”.

Say you use a service that knows the credentials of the customers of your e-store, and you make requests to this service by using a driver provided by this service, say CredentialsHttpService. In your development environment (and Jenkins), you don’t have access to a running service. A solution to this would be a fake implementation of CredentialsHttpService, where we would mock the methods that we actually call from our code.

public class CredentialsHttpService {
    ...
    public Optional<CustomerAccount> getCustomerAccount(String id) {
        // does HTTP request to service
    }
    ...
}

In our test code, we can now implement the fake:

public class ServiceTest {
    @Tested
    Service service;

    @Test
    void test() {
        new MockUp<CredentialsHttpService>() {
            final AtomicInteger counter = new AtomicInteger();

            @Mock
            Optional<CustomerAccount> getCustomerAccount(String id) {
                return Optional.of(CustomerAccount.builder()
                                        .accountNumber(counter.incrementAndGet())
                                        .build());
            }
        };

        service.doFoo();  // eventually invokes our fake
                          // getCustomerAccount() implementation
    }
}

The class to be faked is the type parameter of the MockUp class. The fake CredentialsHttpService will only mock the methods that are annotated with @Mock. All other methods of the faked class will keep their real implementation. Mocked methods are not restricted by access modifiers: the method may be private, protected, package-private or public, as long as the method name, the number of parameters and the parameter types are the same.

There are a lot of other things fakes can do in JMockit; see the documentation.

Import contacts (vCards) into Nextcloud

TL;DR

Export your contacts from Google in vCard version 3 format, split the contacts file and use cadaver to upload all files individually to your address book.

The struggle

Last week, I did a fresh install of Lineage OS 14.1 on my OnePlus X and decided not to install any GApps. I have been slowly moving away from using Google services and, having found replacements in the form of open-source apps or web interfaces, I felt confident I would be able to use my phone without the Google Play Store or Play Services. (F-Droid is now my sole source of apps.)

To tackle the problem of storing contacts and a calendar that could be synced, I installed a Nextcloud instance on a Raspberry Pi 3. Having installed DAVdroid, I got my phone to sync contacts with Nextcloud, but not all of them: it would stop synchronizing after some 120 contacts, while I had more than 400.

I decided to try a different approach, so I exported the contacts on my phone in vCard format and tried to upload them to Nextcloud using the aptly named contacts app. However, this also failed: I’m using Nextcloud version 12.0.3 with version 2.0.1 of the contacts app, and it refuses to accept vCard version 2.1 (HTTP response code 415: Unsupported Media Type). This, naturally, is exactly the version Android 6 uses to export contacts.

After some searching, I found out that if you go to contacts.google.com, you can download your contacts in vCard version 3 format. Problem fixed? Well, not so fast: importing 400+ contacts into Nextcloud through the web interface on a Raspberry Pi 3 with an SD card for storage takes a long time. In fact, it never finished over the course of a couple of hours (!), so I needed yet another approach.

Fortunately, you can approach your Nextcloud instance through the WebDAV protocol using tools such as cadaver:

$ cadaver https://192.168.1.14/nextcloud/remote.php/dav

Storing your credentials in a .netrc file in your home directory will enable cadaver to verify your identity without prompting, making it suitable for scripting:

machine 192.168.1.14
login foo
password correcthorsebatterystaple

cadaver allows you to traverse the directories of the remote file system over WebDAV. To put a single local contacts file (from your working machine) to the remote Raspberry Pi, you could tell it to:

dav:/nextcloud/remote.php/dav/> cd addressbooks/users/{username}/{addressbookname}
dav:/nextcloud/remote.php/dav/addressbooks/users/foo/Contacts/> put /home/foo/all.vcf all.vcf

I had a single vcf file with 400+ contacts in it, but after uploading it this way, only a single contact was displayed. Apparently, Nextcloud’s contacts app assumes a single vcf file contains only a single contact. New challenge: we need to split this single vcf file containing multiple contacts into separate files that we can then upload to Nextcloud.

To split the contacts, we can use awk:

BEGIN {
    RS="END:VCARD\r?\n"
    FS="\n"
}
{
    command = "echo -n $(pwgen 20 1).vcf"
    command | getline filename
    close(command)
    print $0 "END:VCARD" > filename
}

This separates the contacts on the record separator END:VCARD and generates a random filename to store the individual contact in. (I also wrote a Java program to do the same thing, which is faster when splitting large files).
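That Java program isn’t included in this post; a minimal sketch of the same idea (the class name is mine, and I use java.util.UUID instead of pwgen for the random file names) might look like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class VcfSplitter {
    // splits the contents of a multi-contact vcf into individual cards,
    // using END:VCARD as the record separator (like the awk script above)
    static List<String> split(String allContacts) {
        List<String> cards = new ArrayList<>();
        for (String part : allContacts.split("END:VCARD\\r?\\n")) {
            if (part.trim().isEmpty()) continue;
            cards.add(part + "END:VCARD\n");
        }
        return cards;
    }

    public static void main(String[] args) throws IOException {
        String all = new String(Files.readAllBytes(Paths.get(args[0])));
        for (String card : split(all)) {
            // random file name instead of pwgen
            Path target = Paths.get(UUID.randomUUID() + ".vcf");
            Files.write(target, card.getBytes());
        }
    }
}
```

Compile and run it with the combined vcf file as its only argument; it writes one .vcf file per contact into the current directory.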

Obviously, it would be convenient now if we could upload all these files in one go. cadaver does provide the mput action to do so, but I did not get it to work with wildcards. So instead, I created a file with put commands:

for file in *.vcf; do
    echo "put $(pwd)/$file addressbooks/users/foo/Contacts/$file" >> commands
done

And then provided this as input to cadaver:

$ cadaver http://192.168.1.14/nextcloud/remote.php/dav <<< $(cat commands)

This may take a while (it took around an hour for 400+ contacts), but at least you get to see the progress as each request is made and processed. And voilà, all the contacts are displayed correctly in Nextcloud.

How to ignore an invalid SSL certificate in Java?

Sometimes during development it is useful to use a certificate whose CN (Common Name) does not match the host name in the URL, for example localhost. In these cases Java will throw an SSLHandshakeException. How can we easily disable certificate checking for localhost and other domains of our choosing?

Your own certificate checking

The easiest way is to create your own class that implements the interface HostnameVerifier:

WhitelistHostnameVerifier.java

package com.relentlesscoding.https;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLSession;
import java.util.HashSet;
import java.util.Set;

// singleton
enum WhitelistHostnameVerifier implements HostnameVerifier {
    // these hosts get whitelisted
    INSTANCE("localhost", "sub.domain.com");

    private final Set<String> whitelist = new HashSet<>();
    private HostnameVerifier defaultHostnameVerifier =
            HttpsURLConnection.getDefaultHostnameVerifier();

    WhitelistHostnameVerifier(String... hostnames) {
        for (String hostname : hostnames) {
            whitelist.add(hostname);
        }
    }

    @Override
    public boolean verify(String host, SSLSession session) {
        if (whitelist.contains(host)) {
            return true;
        }
        // important: use default verifier for all other hosts
        return defaultHostnameVerifier.verify(host, session);
    }
}

Main.java

// use our HostnameVerifier for all future connections
HttpsURLConnection.setDefaultHostnameVerifier(
        WhitelistHostnameVerifier.INSTANCE);
final String url = "https://relentlesscoding.com";
HttpsURLConnection conn =
        (HttpsURLConnection) new URL(url).openConnection();
System.out.println(conn.getResponseCode());

HttpsURLConnection.DefaultHostnameVerifier#verify always returns false, so we might as well return false ourselves, but this way the code will keep working when the implementation of HttpsURLConnection changes.

Note: you can set the default HostnameVerifier globally by using the static method setDefaultHostnameVerifier, or you can set it per instance, using conn.setHostnameVerifier.

Problems with Let’s Encrypt certificates

One thing I found out while testing this, was that OpenJDK version 1.8.0_131 won’t correctly handle Let’s Encrypt certificates:

java.io.IOException: HTTPS hostname wrong: should be <relentlesscoding.com>

This is due to the fact that the Let’s Encrypt certificates are not present in the trust store of that Java version. Upgrading to version 1.8.0_141 resolves this (Oracle JDK versions >= 8u101 or >= 7u111 include them).

Whitelisting, NOT disabling certificate checking

The best approach here is whitelisting. This keeps certificate checking in place for most sites and disables it only for pre-approved hosts. I saw quite a few examples online that disable certificate checking completely, by writing the verify method as follows:

public boolean verify(String hostname, SSLSession session) {
    return true;
}

This is not secure, as it completely ignores all invalid certificates.

Java stateful sessions or: how to properly send cookies with each redirect request

The problem

I needed to login to some webpage programmatically. It happened to be one of those pages where, when the login succeeds, the user is redirected to another page, and then to another (something along the lines of ‘Please login’ -> ‘You are successfully logged in’ -> ‘Admin panel’). So I needed to write something that would store the cookies that come along with each response, and send those cookies out with each subsequent request. With curl, I would have used:

$ curl --location --cookie-jar logincookie 'https://login.securepage.com'

So how can we mock this behavior in Java?

Java SE’s CookieHandler

A simple DuckDuckGo search will turn up Java SE’s own java.net.CookieHandler: pretty neat, a built-in solution! The CookieHandler, as the name suggests, will handle all your floating-point arithmetic. Just kidding: it serves as the object where the HTTP protocol handler (used by objects such as URL) goes to check for relevant cookies. CookieHandler is a global abstract class that handles the state management of all requests.

CookieManager cookieManager = new CookieManager();
cookieManager.setCookiePolicy(CookiePolicy.ACCEPT_ALL);
CookieHandler.setDefault(cookieManager);

CookieManager’s task is to decide which cookies to accept and which to reject. CookiePolicy.ACCEPT_ALL will accept all cookies. Other choices include CookiePolicy.ACCEPT_NONE (no cookies will be accepted) and CookiePolicy.ACCEPT_ORIGINAL_SERVER (only cookies from the original server will be accepted).

If you want, you can also create a CookieManager with a specific CookieStore and specify exactly where your cookies will be stored. This is needed, for example, to create persistent cookies that can be restored after a JVM restart. By default, CookieManager will create an in-memory cookie store.
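A quick sketch of the default in-memory store in action (the host name is taken from the curl example above; the cookie name and value are made up):

```java
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.HttpCookie;
import java.net.URI;
import java.util.List;

public class CookieStoreDemo {
    public static void main(String[] args) {
        // passing null for the store means: use the default in-memory CookieStore
        CookieManager manager = new CookieManager(null, CookiePolicy.ACCEPT_ALL);

        // simulate a cookie that came back with a login response
        HttpCookie session = new HttpCookie("JSESSIONID", "abc123");
        session.setPath("/");
        manager.getCookieStore().add(
                URI.create("https://login.securepage.com"), session);

        // the cookie is now available for subsequent requests to that host
        List<HttpCookie> cookies = manager.getCookieStore()
                .get(URI.create("https://login.securepage.com/admin"));
        System.out.println(cookies);
    }
}
```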

To install our own system-wide cookie handler, we invoke CookieHandler.setDefault and pass our CookieManager as an argument.

The tutorial on the Oracle website does a pretty decent job of explaining everything in more detail.

If this works for you, great! If not, keep on reading.

Apache HttpComponents Client to the rescue

Unfortunately, the steps above did not work for me: the server kept complaining that the session was invalid after the redirects. After reading somewhere on Stack Overflow that Java SE’s version of the CookieStore was buggy, and me being on a deadline, I decided to turn my attention to Apache and their HttpComponents Client (version 4.5.3):

import org.apache.http.client.CookieStore;
import org.apache.http.client.config.CookieSpecs;
import org.apache.http.client.config.RequestConfig;
// ... other imports

class Downloader {
    private CookieStore cookieStore = new BasicCookieStore();

    private void doLogin() {
        // CookieSpecs.STANDARD is the RFC 6265 compliant policy
        RequestConfig requestConfig = RequestConfig
                .custom()
                .setCookieSpec(CookieSpecs.STANDARD)
                .build();

        // automatically follow redirects
        CloseableHttpClient client = HttpClients
                .custom()
                .setRedirectStrategy(new LaxRedirectStrategy())
                .setDefaultRequestConfig(requestConfig)
                .setDefaultCookieStore(cookieStore)
                .build();

        String addr = "https://login.securehost.com";
        HttpPost post = new HttpPost(addr);

        // login and password as payload for POST request
        List<NameValuePair> urlParams = new ArrayList<>();
        urlParams.add(new BasicNameValuePair("login", "neftas"));
        urlParams.add(new BasicNameValuePair("password", "letmein"));
        post.setEntity(new UrlEncodedFormEntity(urlParams));

        HttpResponse response = client.execute(post);
        // ... and we are logged in!
    }
}

(To keep the code clean, I’ve not bothered with any exception handling.)

The RequestConfig serves to indicate what kind of cookie policy we want (there are several, unfortunately). If you just want things to work, you can start with CookieSpecs.STANDARD, which is compliant with “a more relaxed profile defined by RFC 6265”.

During the creation of the HttpClient, we specify that we want to follow all redirects (setRedirectStrategy(new LaxRedirectStrategy())), we pass the cookie policy we defined earlier in the RequestConfig, and lastly, we pass the cookie store.