Migrate Nextcloud PostgreSQL 11 to 17 in a Docker Container
This post looks at how to migrate a Postgres Docker container to a newer major version. I will dump all databases contained in the old container and import them into a new one. The process outlined works for any Postgres container, but I discuss it in the context of my Nextcloud setup.
My Setup
I have Nextcloud running in containers, with the composition detailed in a docker-compose.yml file:
- Nextcloud
- PostgreSQL 11
- Redis
- Nginx (reverse proxy)
Postgres 11 has been EOL for over a year, so it was time to migrate, mostly to get security patches again. Newer versions also incorporate new features, so a side effect might be (slightly) better performance.
Why Not Just Start a New Container Using the Old Data Dir?
To get the obvious out of the way, the naive approach of just running a newer Postgres image over the old data files will not work, because the data files are incompatible:
$ docker run --name pg-17 -e POSTGRES_PASSWORD=postgres \
> -v ./pg-11/data:/var/lib/postgresql/data postgres:17-alpine
PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-01-18 07:51:34.219 UTC [1] FATAL: database files are incompatible with server
2025-01-18 07:51:34.219 UTC [1] DETAIL: The data directory was initialized by PostgreSQL version 11, which is not compatible with this version 17.2.
The Plan
So we need to do a data dump and import the data into the new container (but see the pg_upgrade alternative below). The plan is:
- Stop the Nextcloud service and all its dependencies except for the database
- Dump all databases into a file
- Stop the old Postgres container
- Spin up the new Postgres container
- Import the dumped databases into the new container
- Restart the Nextcloud service with its dependencies
1. Stop Nextcloud Service
$ docker compose stop nextcloud nextcloud-redis nextcloud-proxy
2. Dump All Databases Into a File
$ docker exec pg-11 pg_dumpall -c --if-exists > pg-11.dump
The -c, --clean option of pg_dumpall makes the output script emit DROP statements for existing database objects (roles, databases) before recreating them. Combine this with --if-exists to turn them into DROP <object> IF EXISTS statements, reducing the number of errors when importing the data. None of this should be necessary on a fresh database, but I want to make very sure all the required objects are copied, as the stakes are high.
Also note that in a shell, redirection is performed before the command is executed. In this case, the standard output is redirected to the file pg-11.dump before docker exec is executed. So the redirection is set up by the current shell and the file is created on the host, not in the container.
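A quick way to confirm that the dump indeed ended up on the host and contains the expected clean-up statements (a rough spot check; the exact output depends on your roles and databases):
$ ls -lh pg-11.dump
$ grep -E '^DROP (DATABASE|ROLE) IF EXISTS' pg-11.dump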
3. Stop the Running Container
$ docker compose stop nextcloud-db
4. Start New Container
We need to make sure that the new data is stored in a new location. If you run the container as non-root (which you should), create a data directory for the new container and chown it to the UID and GID used inside the container to run the postgres executable:
$ mkdir -p pg-17/data
$ chown -R pguser:pggroup pg-17
$ chmod -R 0700 pg-17
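If you are unsure which UID and GID to use, the ownership of the old data directory is a good hint (a quick check on the host; pguser:pggroup above stands in for whatever user and group your setup uses):
# Numeric owner (UID/GID) of the old data files
$ ls -ln pg-11/data | head -n 3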
If you do run the container as root, you only have to specify the host directory in docker-compose.yml (see below). The container, running as root, will be able to create the data directory anywhere on your host and then chown it to the user that runs the postgres command (by default postgres).
Edit your docker-compose.yml (or adjust the -v, --volume flag of your docker run command). It looked like this before:
services:
  nextcloud-db:
    image: postgres:11-alpine
    container_name: pg-11
    user: pguser:pggroup
    volumes:
      - ./pg-11/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=AtriumRabidAntihero
We will change it to this:
services:
  nextcloud-db:
    image: postgres:17-alpine
    container_name: pg-17
    user: pguser:pggroup
    volumes:
      - ./pg-17/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=AtriumRabidAntihero
Finally, start the new container:
$ docker compose up --force-recreate nextcloud-db
Unless you removed the old container, add --force-recreate so that Compose replaces it with a fresh container based on the new image instead of restarting the old one.
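Before importing anything, it is worth confirming from another terminal that the new server is up and reports the expected version (a quick check; nothing here is specific to my setup):
$ docker exec pg-17 pg_isready -U postgres
$ docker exec pg-17 psql -U postgres -c 'SELECT version();'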
5. Import the Data Into New Container
Import from inside the container:
$ docker exec -i pg-17 psql -U postgres < pg-11.dump
Use a user with superuser privileges to do the import. I do not specify a database name, so the default database, matching the user name, will be used: postgres in this case. Which database we connect to first does not matter, as the output from pg_dumpall drops all databases and then recreates them.
Notice that we can use the redirection in our host shell. When docker exec is executed, it reads from its standard input: the file pg-11.dump. For this input to reach psql, we need to keep standard input open by specifying -i, --interactive. If we don't, psql does not read any of our commands, sees that its standard input is closed, and quits. Since the commands come from a file rather than from typing, we do not need a prompt (pseudo-TTY) assigned (-t, --tty).
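To make that concrete, this is the variant that silently does nothing:
# Without -i, the host's standard input is not attached, so psql gets no input and nothing is imported.
$ docker exec pg-17 psql -U postgres < pg-11.dump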
Consider adding --set ON_ERROR_STOP=1
to the command to terminate the import
script immediately when an error occurs. This is convenient because the source
of the failure is immediately clear: it is the last attempted statement on the
screen.
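Put together, the import command with that safeguard looks like this (the same command as above, with only the extra flag added):
$ docker exec -i pg-17 psql -U postgres --set ON_ERROR_STOP=1 < pg-11.dump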
Note you cannot import the dump from outside the container:
$ psql -h 127.0.0.1 -U postgres < pg-11.dump
This will prompt for your password every time the script needs to connect to a new database. The first attempt will succeed, but after the ALTER ROLE postgres statement in the script is executed, all subsequent authentication attempts will fail, no matter how sure you are that you typed the password correctly. This is explained under "Complications".
Do a sanity check right after the migration:
postgres=# \l+ nextcloud
List of databases
Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges | Size | Tablespace | Description
-----------+-----------+----------+-----------------+------------+------------+--------+-----------+-----------------------+---------+------------+--------------------------------------------
nextcloud | oc_stefan | UTF8 | libc | en_US.utf8 | en_US.utf8 | | | | 206 MB | pg_default |
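Beyond checking that the database exists and has a plausible size, it does not hurt to spot-check that tables and roles made it over (a sketch; the nextcloud database and its owner are from my setup):
$ docker exec pg-17 psql -U postgres -d nextcloud -c '\dt'
$ docker exec pg-17 psql -U postgres -c '\du'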
6. Restart Nextcloud
$ docker compose up
Complications
Inside the Container, You Can Connect to the Database Without Credentials
If you connect to your database from inside the Docker container, either using the Unix socket in /var/run/postgresql or from the localhost IP address 127.0.0.1, you won't have to enter a password. This does not mean your configuration is wrong or that the roles are not password-protected. It is because the Postgres container by default trusts these local connections, by virtue of the following lines in /var/lib/postgresql/data/pg_hba.conf:
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# ...snip...
host    all             all             all                     scram-sha-256
This reads: allow any local user to connect as any PostgreSQL user, including the database superuser. Connections from outside the container, however, should authenticate using the scram-sha-256 authentication method.
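You can verify which file the running server actually uses and what it contains (a quick check; the path assumes the stock image layout with the data directory under /var/lib/postgresql/data):
$ docker exec pg-17 psql -U postgres -c 'SHOW hba_file;'
$ docker exec pg-17 cat /var/lib/postgresql/data/pg_hba.conf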
Connection to the Database Denied Even Though The Credentials Are Correct
After the migration is complete, we can still log in from within the container (because no authentication is required), but connections from outside the container are denied:
$ psql -h 127.0.0.1 -U postgres
Password for user postgres:
psql: error: connection to server at "127.0.0.1", port 5432 failed: FATAL: password authentication failed for user "postgres"
This is because Postgres 11 used the md5 authentication method. This method stores the result of hashing the password with the MD5 algorithm in the database. You can see the hashes in the pg_authid table:
postgres=# select rolname, rolpassword from pg_authid where rolname = 'postgres';
rolname | rolpassword
-----------------------------+-------------------------------------
postgres | md53175bce1d3201d16594cebf9d7eb3f9d
The default authentication mechanism for PostgreSQL 14+ is the (more secure) scram-sha-256. This authentication mechanism is defined in pg_hba.conf (see above).
What happens is that Postgres uses SCRAM-SHA-256 authentication when you authenticate, but it cannot compare the output of that function with the hash in the database, because that was hashed with MD5. (The opposite, when md5 is set in pg_hba.conf but the result of applying SCRAM-SHA-256 is stored in the database, does work, though, to make the transition from the old md5 method to SCRAM-based authentication easier.)
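To see which roles still carry an old MD5 hash after the import (and therefore need a new password), a query along these lines should do; run it from inside the container, where no password is needed:
$ docker exec pg-17 psql -U postgres -c \
    "SELECT rolname, left(rolpassword, 10) FROM pg_authid WHERE rolpassword IS NOT NULL;"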
To work around this, connect to the database as the superuser from within the container (bypassing authentication), verify that the new password encryption method is in use, and set new passwords for all migrated users:
$ docker exec -it pg-17 psql -U postgres
postgres=# show password_encryption;
password_encryption
---------------------
scram-sha-256
(1 row)
postgres=# \password
Enter new password for user "postgres":
Enter it again:
This is Nextcloud-specific, but the application user for the Nextcloud connection and its password can be found in /var/www/html/config/config.php.
postgres=# \password stefan
Enter new password for user "stefan":
Enter it again:
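If you do not remember that user and password, they can be grepped out of the config file (assuming the Nextcloud service from the compose file is running again and is called nextcloud):
$ docker compose exec nextcloud grep -E "'db(user|password)'" /var/www/html/config/config.php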
Obviously, as an alternative, you could also change the authentication method in pg_hba.conf to md5 and be done. This does nothing to improve security, though, and I would not recommend it, since you already need to take some action to get things working anyway.
Make sure you can still connect to the database:
$ psql -h 127.0.0.1 -U postgres -c 'SELECT version();'
pg_upgrade
I am aware that pg_upgrade exists and can be used with Docker containers. I have not tried it, because it feels a bit more error-prone than dumping the data and loading it into a new container.
To continue that tangent, there is a Docker image we could use to run pg_upgrade without having to install any Postgres libraries and executables on our host machines. I have not tried that either, because I am not a big fan of running "random" containers on my hosts.