
Installing ZFS on LUKS on Ubuntu 16.04 on Hetzner Dedicated Server

Hetzner, in Germany, offer dedicated servers at auction.

My goal was to set up an Ubuntu 16.04 server with ECC (error correcting code) memory and two hard drives in a mirror arrangement running ZFS.

I wanted the hard drives to be encrypted using LUKS (Linux unified key setup) (with the exception of the boot partition).

The setup would look like this:

 ____________     _____________    __________
|            |   |             |  |          |
| rpool/ROOT |   | (zvol swap) |  | mounted  |
|   ZFS fs   |   |             |  |   /boot  |
|____________|   |_____________|  |__________|
 _____|_________________|______         |
|                              |        |
|     zpool mirror "rpool"     |        |
|______________________________|        |
 _____|______     _____|______          |
|            |   |            |         |
| LUKS crypt |   | LUKS crypt |         |
| /dev/mapper|   | /dev/mapper|         |
|     /crypt1|   |     /crypt2|         |
|____________|   |____________|         |
 _____|______     _____|______     _____|______
|            |   |            |   |            |
| partition  |   | partition  |   | partition  |
| /dev/sda2  |   | /dev/sdb2  |   | /dev/sda1  |
|____________|   |____________|   |____________|

The Hetzner dedicated server I tried this on did not have a built-in KVM, so it was necessary to find a way to unlock the LUKS encrypted drives before Linux boots. The solution was to put dropbear in the initramfs so that one can SSH in during the boot phase and unlock the partitions before the root ZFS filesystem is mounted.

Before you do this, please note: you will require some files from another Xenial (Ubuntu 16.04) system! See the Bootstrap section for more information.

Start by booting into the Rescue Linux image (which happens to be Debian Jessie as of this article).

Enable Rescue Mode from Hetzner Control Panel

After enabling Rescue Mode you can then select “Reset” from the control panel to boot into the Rescue image.

Health Warning

This process took me 12 hours to iron out the issues I hit. It wasn’t helped by the fact that I didn’t have a KVM (until near the end) to show what was going wrong during boot, and when I finally did get a KVM attached for an hour most keys I typed were duplicated by it, which made any kind of debugging almost impossible. Writing this article has taken me 6 additional hours.

It should be more straightforward now that I’ve documented the process, but you may still find yourself tripped up by something small that is annoyingly difficult to diagnose blind, as you are, across the Internet.

Every Time You Boot Into Rescue Mode

You’re going to have to enable ZFS, which isn’t part of the Rescue image.

This is agonisingly slow, taking around 6-10 minutes. It can be sped up slightly (down to around 3 minutes) by removing whatever kernel image has been installed but isn’t currently in use, e.g.:

-------------------------------------------------------------------

  Welcome to the Hetzner Rescue System.

  This Rescue System is based on Debian 8.0 (jessie) with a newer
  kernel. You can install software as in a normal system.

  To install a new operating system from one of our prebuilt
  images, run 'installimage' and follow the instructions.

  More information at http://wiki.hetzner.de

-------------------------------------------------------------------

Hardware data:

Network data:

root@rescue ~ # cat /proc/version
Linux version 4.10.16 (build@build.rzse.hetzner.de) (gcc version 4.9.2 (Debian 4.9.2-10) ) #25 SMP Tue May 16 12:37:37 CEST 2017

root@rescue ~ # apt-get remove -y linux-image-4.12.4 linux-headers-4.12.4
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  linux-headers-4.12.4 linux-image-4.12.4
0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
After this operation, 121 MB disk space will be freed.
(Reading database ... 76476 files and directories currently installed.)
Removing linux-headers-4.12.4 (4.12.4-6) ...
Removing linux-image-4.12.4 (4.12.4-6) ...

root@rescue ~ # time apt-get install -t jessie-backports zfs-dkms
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  dh-python dkms libmpdec2 libnvpair1linux libpython3-stdlib
  libpython3.4-minimal libpython3.4-stdlib libuutil1linux libzfs2linux
  libzpool2linux python3 python3-minimal python3.4 python3.4-minimal spl
  spl-dkms sudo zfs-zed zfsutils-linux
Suggested packages:
  python3-apport menu python3-doc python3-tk python3-venv python3.4-venv
  python3.4-doc binfmt-support nfs-kernel-server samba-common-bin
  zfs-initramfs zfs-dracut
The following NEW packages will be installed:
  dh-python dkms libmpdec2 libnvpair1linux libpython3-stdlib
  libpython3.4-minimal libpython3.4-stdlib libuutil1linux libzfs2linux
  libzpool2linux python3 python3-minimal python3.4 python3.4-minimal spl
  spl-dkms sudo zfs-dkms zfs-zed zfsutils-linux
0 upgraded, 20 newly installed, 0 to remove and 84 not upgraded.
Need to get 8,085 kB of archives.
After this operation, 37.4 MB of additional disk space will be used.

real    2m48.082s
user    2m56.348s
sys     0m18.100s
root@rescue ~ #

Great! Now you can use ZFS in the rescue system.

Partitioning Hard Drives

I wanted 3 partitions on each hard drive: one for /boot (unencrypted), the largest for my ZFS root (which will be encrypted), and the final one (placed at the front of the disk) for a GRUB BIOS partition.

The disk would be partitioned as follows:

 ___________ ___________ ___________
|           |           |           |
| /dev/sda3 | /dev/sda1 | /dev/sda2 |
| GRUB BIOS |   /boot   |   LUKS    |
|     1 GiB |     4 GiB | remainder |
|___________|___________|___________|

I used parted to create these partitions:

root@rescue ~ # parted /dev/sda mklabel gpt # make disk GPT
root@rescue ~ # parted /dev/sda mkpart myboot 1GiB 5GiB
root@rescue ~ # parted /dev/sda mkpart mycrypt 5GiB 100%
root@rescue ~ # parted /dev/sda mkpart mygrub 2048s 1GiB
root@rescue ~ # parted /dev/sda set 3 bios_grub on
root@rescue ~ # parted /dev/sda align-check opt 1 # ensure optimal
root@rescue ~ # parted /dev/sda align-check opt 2 # ensure optimal
root@rescue ~ # parted /dev/sda print # review

Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 3      1049kB  1074MB  1073MB               mygrub   bios_grub
 1      1074MB  5369MB  4295MB               myboot
 2      5369MB  3001GB  2995GB               mycrypt

root@rescue ~ # parted /dev/sdb mklabel gpt # make disk GPT
root@rescue ~ # parted /dev/sdb mkpart myboot 1GiB 5GiB
root@rescue ~ # parted /dev/sdb mkpart mycrypt 5GiB 100%
root@rescue ~ # parted /dev/sdb mkpart mygrub 2048s 1GiB
root@rescue ~ # parted /dev/sdb set 3 bios_grub on
root@rescue ~ # parted /dev/sdb align-check opt 1 # ensure optimal
root@rescue ~ # parted /dev/sdb align-check opt 2 # ensure optimal
root@rescue ~ # parted /dev/sdb print # review

Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 3      1049kB  1074MB  1073MB               mygrub   bios_grub
 1      1074MB  5369MB  4295MB               myboot
 2      5369MB  3001GB  2995GB               mycrypt

root@rescue ~ #

Encrypting Partitions

The Linux root filesystem will be put on ZFS, mirrored across the /dev/sda2 and /dev/sdb2 partitions, so these partitions need to be encrypted:

root@rescue ~ # cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 /dev/sda2
WARNING!
========
This will overwrite data on /dev/sda2 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase: 
Verify passphrase: 
root@rescue ~ # cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 /dev/sdb2
WARNING!
========
This will overwrite data on /dev/sdb2 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase: 
Verify passphrase: 
root@rescue ~ #

Then open them:

root@rescue ~ # cryptsetup luksOpen /dev/sda2 crypt1
Enter passphrase for /dev/sda2:
root@rescue ~ # cryptsetup luksOpen /dev/sdb2 crypt2
Enter passphrase for /dev/sdb2:
root@rescue ~ #

Setting up ZFS on Encrypted Partitions

First create a ZFS pool named “rpool”. The name is more convention than requirement these days, but be warned that some scripts are hard-coded to expect “rpool” and you might run into trouble if you choose something else, so it’s a lot safer just to call it “rpool”.

It will be a mirror (you can choose something else at your own risk) – so if data is corrupted on one partition ZFS should be able to automatically recover it from the other.

root@rescue ~ # zpool create -o ashift=12 rpool mirror /dev/mapper/crypt1 /dev/mapper/crypt2
root@rescue ~ # zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            crypt1  ONLINE       0     0     0
            crypt2  ONLINE       0     0     0

errors: No known data errors
root@rescue ~ #

Create a swap volume the same size as the memory in your dedicated server (so if you have 16 GB of RAM, add the -V 16G parameter; the example below uses 32G). Format the swap. Then create a root ZFS filesystem and turn compression on for it.

root@rescue ~ # zfs create -V 32G -b 4096 -o compression=off -o primarycache=metadata -o secondarycache=none -o sync=always rpool/SWAP
root@rescue ~ # mkswap /dev/zvol/rpool/SWAP
Setting up swapspace version 1, size = 33554428 KiB
no label, UUID=7e475689-c8c1-2a41-552c-8ff37cec3be1

root@rescue ~ # zfs create rpool/ROOT
root@rescue ~ # zfs set compression=lz4 rpool/ROOT

Unmount all ZFS filesystems and configure the mount point of the root ZFS filesystem. Tell the pool that it should boot from the root ZFS filesystem. Finally, export the pool so we can import it again later at a temporary location.

root@rescue ~ # zfs unmount -a # unmount all ZFS filesystems
root@rescue ~ # zfs set mountpoint=/ rpool/ROOT

root@rescue ~ # zpool set bootfs=rpool/ROOT rpool

root@rescue ~ # zpool export rpool # in preparation for dummy mount
root@rescue ~ # zpool import # confirm pool is available to import
   pool: rpool
     id: 14246658913528246541
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        rpool       ONLINE
          mirror-0  ONLINE
            crypt1  ONLINE
            crypt2  ONLINE

root@rescue ~ # zpool import -R /mnt/rpool rpool # import to different mount

From this point forward the Ubuntu system we build will go into /mnt/rpool/. There is another “rpool” directory underneath it; ignore that, it is the pool’s own dataset. The root filesystem (rpool/ROOT) is mounted at the temporary mount point /mnt/rpool/ itself.

Create and Mount Boot Partition

For completeness, format both /dev/sda1 and /dev/sdb1 (the boot partitions), but we’ll only mount (and install to) one of them. You may want to back up the /dev/sda1 boot partition onto /dev/sdb1 when this is all done (a sketch follows the commands below).

root@rescue ~ # mkfs.ext4 -L "sda_boot" /dev/sda1
root@rescue ~ # mkfs.ext4 -L "sdb_boot" /dev/sdb1

root@rescue ~ # mkdir /mnt/rpool/boot # mount point
root@rescue ~ # mount /dev/sda1 /mnt/rpool/boot
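
Later, once everything is installed and booted, one way to keep the spare boot partition up to date is a quick copy from the running system. This is only a sketch; the mount point /mnt/sdb1 is an arbitrary choice:

root@myhost:/# mkdir -p /mnt/sdb1                  # temporary mount point (arbitrary)
root@myhost:/# mount /dev/sdb1 /mnt/sdb1
root@myhost:/# rsync -a --delete /boot/ /mnt/sdb1/ # mirror the contents of /boot
root@myhost:/# umount /mnt/sdb1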

Bootstrap

This is the fun part! Bootstrapping the initial operating system into the target location (our ZFS root filesystem and mounted /boot partition).

But there’s a problem: Jessie’s debootstrap doesn’t come with support for bootstrapping Ubuntu Xenial (16.04). To work around this you will have to find an existing Xenial system and copy the following files onto your rescue host (a sketch of one way to do that follows the list):

  • /usr/share/keyrings/ubuntu-archive-keyring.gpg
  • /usr/share/debootstrap/scripts/xenial
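
One way to copy them across, assuming you have SSH access to an existing Xenial machine reachable as xenial-host (a placeholder name):

root@rescue ~ # scp root@xenial-host:/usr/share/keyrings/ubuntu-archive-keyring.gpg /usr/share/keyrings/
root@rescue ~ # scp root@xenial-host:/usr/share/debootstrap/scripts/xenial /usr/share/debootstrap/scripts/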

You should now have these files on your rescue system; if you don’t, you cannot proceed (or you’ll have to choose a different distribution).

root@rescue ~ # ls /usr/share/keyrings/ubuntu-archive-keyring.gpg
/usr/share/keyrings/ubuntu-archive-keyring.gpg
root@rescue ~ # ls /usr/share/debootstrap/scripts/xenial
/usr/share/debootstrap/scripts/xenial

Now do the bootstrap! Optionally, after it is complete, take a ZFS snapshot of the filesystem in case we want to roll back later.

root@rescue ~ # time debootstrap --arch=amd64 xenial /mnt/rpool
I: Base system installed successfully.

real    3m13.981s
user    0m30.020s
sys     0m9.184s

root@rescue ~ # zfs snapshot rpool/ROOT@after-base-install # optional

Chroot Into Image

From this point forward we will be in a chroot, which will feel like being inside the newly installed operating system. We aren’t really, but it will feel that way, and it lets us make the changes necessary before the first boot.

Set up the bind mounts (required for the chroot to function properly), copy resolv.conf to the /run directory (required to resolve domain names inside the chroot) and chroot.

root@rescue ~ # mkdir /run/resolvconf
root@rescue ~ # cp /etc/resolv.conf /run/resolvconf/

root@rescue ~ # for i in dev dev/pts proc sys run; do echo ==$i==; mount --bind /$i /mnt/rpool/$i; done
root@rescue ~ # chroot /mnt/rpool /bin/bash --login
root@rescue:/#

Chroot: Hostname

Edit /etc/hostname to contain your desired hostname.

Add a line to /etc/hosts that contains the text “127.0.1.1 bigguns” (where “bigguns” should be your hostname, the same one you put into /etc/hostname in the step above).
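
For example, using the hostname from above (substitute your own):

root@rescue:/# echo "bigguns" > /etc/hostname
root@rescue:/# echo "127.0.1.1 bigguns" >> /etc/hosts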

Chroot: Use Hetzner Mirror

Optional, but if you want to use the local mirror, edit /etc/apt/sources.list and use the Xenial sources specified in the Hetzner Ubuntu Aptitude mirror document:

deb http://mirror.hetzner.de/ubuntu/packages xenial           main restricted universe multiverse
deb http://mirror.hetzner.de/ubuntu/packages xenial-updates   main restricted universe multiverse
deb http://mirror.hetzner.de/ubuntu/packages xenial-backports main restricted universe multiverse
deb http://mirror.hetzner.de/ubuntu/packages xenial-security  main restricted universe multiverse

After doing this, do an apt update.

root@rescue:/# apt-get update
Fetched 16.9 MB in 2s (7286 kB/s)               
Reading package lists... Done

Chroot: Set Up Filesystem Tables

Find out the UUID of the boot device and add it to /etc/fstab:

root@rescue:/# blkid |grep /dev/sda1
/dev/sda1: LABEL="sda_boot" UUID="4049ab3c-a015-46b3-a594-d23ac8218c62" TYPE="ext4" PARTLABEL="myboot" PARTUUID="43150b47-89df-4765-825a-edd22898605d"

root@rescue:/# echo "/dev/disk/by-uuid/4049ab3c-a015-46b3-a594-d23ac8218c62 /boot auto defaults 0 1" >>/etc/fstab

Add more lines to /etc/fstab:

root@rescue:/# echo "/dev/mapper/crypt1 / zfs defaults 0 0" >>/etc/fstab
root@rescue:/# echo "/dev/mapper/crypt2 / zfs defaults 0 0" >>/etc/fstab
root@rescue:/# echo "/dev/zvol/rpool/SWAP none swap defaults 0 0" >>/etc/fstab

If you’re wondering why there are two root directory entries there: they exist so that cryptsetup can find both devices while running the initramfs hooks, which ensures that during boot you are prompted for the password to decrypt both LUKS encrypted partitions before continuing.

Add LUKS encrypted partitions to /etc/crypttab:

root@rescue:/# blkid |grep LUKS
/dev/sda2: UUID="fcc1c632-0a5d-4061-be4e-18d2097e91b0" TYPE="crypto_LUKS" PARTLABEL="mycrypt" PARTUUID="809fb832-db3a-47a6-8e0d-b7d0491011e8"
/dev/sdb2: UUID="51451730-00d5-48e5-a9ed-bd6f9d0260cb" TYPE="crypto_LUKS" PARTLABEL="mycrypt" PARTUUID="9877c3c6-5b21-4b69-bc5a-e346ae1567ca"

root@rescue:/# echo "crypt1 UUID=fcc1c632-0a5d-4061-be4e-18d2097e91b0 none luks" >>/etc/crypttab
root@rescue:/# echo "crypt2 UUID=51451730-00d5-48e5-a9ed-bd6f9d0260cb none luks" >>/etc/crypttab

Symlink the LUKS container devices, or update-grub will complain that it cannot find the canonical path and fail (later during installation).

root@rescue:/# ln -s /dev/mapper/crypt1 /dev/crypt1
root@rescue:/# ln -s /dev/mapper/crypt2 /dev/crypt2

Ensure that future kernel updates will succeed by having udev always create the symbolic links:

root@rescue:/# echo 'ENV{DM_NAME}=="crypt1", SYMLINK+="crypt1"' > /etc/udev/rules.d/99-local-crypt.rules
root@rescue:/# echo 'ENV{DM_NAME}=="crypt2", SYMLINK+="crypt2"' >> /etc/udev/rules.d/99-local-crypt.rules

Chroot: Ensure Apt Packages for Filesystems Present

Generate the locales and install PPA support first.

root@rescue:/# locale-gen en_US.UTF-8 # always add this even if you want another language
root@rescue:/# locale-gen en_GB.UTF-8
root@rescue:/# apt-get install ubuntu-minimal software-properties-common

Get packages related to encryption, ZFS, and GRUB.

root@rescue:/# apt-get install cryptsetup # or may not be able to unlock at boot
root@rescue:/# apt-get install zfs-initramfs zfs-dkms # we need ZFS
root@rescue:/# apt-get install grub2-common grub-pc # add GRUB for booting

As GRUB installs, you may be given an interactive choice of which devices to install to; choose /dev/sda and /dev/sdb.
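
If you miss that prompt, or just want to be explicit, GRUB can also be installed to both disks manually; a minimal sketch:

root@rescue:/# grub-install /dev/sda
root@rescue:/# grub-install /dev/sdb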

Ensure GRUB boots into ZFS by adding “boot=zfs” to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="boot=zfs nosplash"

Calm ZFS down: by default it flushes transaction groups to the disks every 5 seconds; make it do so only once every 30 seconds by editing (or creating) /etc/modprobe.d/zfs.conf and adding the line:

options zfs zfs_txg_timeout=30
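
Once the installed system has booted (the rescue kernel’s ZFS module won’t pick this up), you can confirm the option took effect; a quick check:

root@myhost:/# cat /sys/module/zfs/parameters/zfs_txg_timeout
30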

Update /usr/share/initramfs-tools/hooks/cryptroot and comment out the return in the function get_fs_devices() so that all the encrypted drives will be unlocked during boot (otherwise only the first drive will be, which is no good for a mirror).

get_fs_devices() {
  local device mount type options dump pass
  local wantmount="$1"

  if [ ! -r /etc/fstab ]; then
    return 1
  fi

  grep -s '^[^#]' /etc/fstab | \
    while read device mount type options dump pass; do
      if [ "$mount" = "$wantmount" ]; then
        local devices
        if [ "$type" = "btrfs" ]; then
          for dev in $(btrfs filesystem show $(canonical_device "$device" --no-simplify) 2>/dev/null | sed -r -e 's/.*devid .+ path (.+)/\1/;tx;d;:x') ; do
            devices="$devices $(canonical_device "$dev")"
          done
        else
          devices=$(canonical_device "$device") || return 0
        fi
        echo "$devices"
        #return ## COMMENT OUT THIS LINE
      fi
    done
}
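
If you prefer not to edit the file by hand, a sed one-liner can make the change. This is only a sketch, and it assumes the return flagged above is the only bare return statement inside get_fs_devices():

root@rescue:/# sed -i '/^get_fs_devices()/,/^}/ s/^\(\s*\)return$/\1#return/' /usr/share/initramfs-tools/hooks/cryptroot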

Install a Linux image. This can take a while, so temporarily turning off sync on the filesystem can speed it up, given the sheer number of small files in the headers package.

root@rescue:/# zfs get sync rpool/ROOT
NAME        PROPERTY  VALUE     SOURCE
rpool/ROOT  sync      standard  default

root@rescue:/# zfs set sync=disabled rpool/ROOT

root@rescue:/# time apt-get install linux-image-generic linux-headers-generic
real    4m11.613s
user    2m23.124s
sys     0m27.472s

root@rescue:/# zfs set sync=standard rpool/ROOT

Chroot: Use Dropbear to Allow Disk Unlocking During Boot Via SSH

Thanks go to this article, which describes the process of adding dropbear to the boot process so that disks can be decrypted over SSH without the need for a KVM.

Install packages.

root@rescue:/# apt-get install dropbear busybox

Update /etc/initramfs-tools/initramfs.conf and ensure the following lines are present:

BUSYBOX=y
DROPBEAR=y

Create dropbear keys:

root@rescue:/# mkdir /etc/initramfs-tools/root
root@rescue:/# mkdir /etc/initramfs-tools/root/.ssh
root@rescue:/# dropbearkey -t rsa -f /etc/initramfs-tools/root/.ssh/id_rsa.dropbear

Convert dropbear key to openssh format:

root@rescue:/# /usr/lib/dropbear/dropbearconvert dropbear openssh /etc/initramfs-tools/root/.ssh/id_rsa.dropbear /etc/initramfs-tools/root/.ssh/id_rsa

Extract public key from /etc/initramfs-tools/root/.ssh/id_rsa:

root@rescue:/# dropbearkey -y -f /etc/initramfs-tools/root/.ssh/id_rsa.dropbear | grep "^ssh-rsa " > /etc/initramfs-tools/root/.ssh/id_rsa.pub

Add public key to authorized_keys file:

root@rescue:/# cat /etc/initramfs-tools/root/.ssh/id_rsa.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys
root@rescue:/# chmod 600 /etc/initramfs-tools/root/.ssh/authorized_keys

To enable dropbear to start, add or update this line in /etc/default/dropbear:

NO_START=0

Copy the crypt_unlock.sh script from this link into /etc/initramfs-tools/hooks/crypt_unlock.sh.

root@rescue:/# chmod 755 /etc/initramfs-tools/hooks/crypt_unlock.sh

Disable the dropbear service on boot so that openssh is used once the partitions have been decrypted.

root@rescue:/# update-rc.d dropbear disable

TAKE A COPY OF /etc/initramfs-tools/root/.ssh/id_rsa (the private key)! You’ll need it to ssh into dropbear later!
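
A sketch of grabbing it from your workstation while the server is still in rescue mode (your.server.example is a placeholder for the server’s address; note the path is as seen from outside the chroot):

user@workstation:~$ scp root@your.server.example:/mnt/rpool/etc/initramfs-tools/root/.ssh/id_rsa ~/.ssh/dropbear_unlock_rsa
user@workstation:~$ chmod 600 ~/.ssh/dropbear_unlock_rsa

Later, while the machine is sitting in the initramfs waiting for passphrases, you connect with something like ssh -i ~/.ssh/dropbear_unlock_rsa root@your.server.example (dropbear typically listens on the default SSH port 22, not the port we give OpenSSH below).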

Chroot: Add OpenSSH, Set Root Password, Deal With Ethernet, Rebuild initramfs

Install OpenSSH server and change the port to something obscure.

root@rescue:/# apt-get install openssh-server

root@rescue:/# perl -p -i.bak -e 's/^Port.*$/Port 222/' /etc/ssh/sshd_config

Set the root password (or add your own public key to /root/.ssh/authorized_keys, making sure its permissions are set to 600; a sketch follows the passwd command below).

root@rescue:/# passwd root
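
A sketch of the public-key alternative (the key string is just a placeholder for your own public key):

root@rescue:/# mkdir -p /root/.ssh
root@rescue:/# chmod 700 /root/.ssh
root@rescue:/# echo "ssh-rsa AAAA...your-public-key... you@workstation" >> /root/.ssh/authorized_keys
root@rescue:/# chmod 600 /root/.ssh/authorized_keys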

Now for the very tricky part. Ubuntu has changed the names of Ethernet interfaces from eth0, eth1, etc. to things like eno1, ens1 and enp4s0.

The only way to really know what your Ethernet interface will be named is to reboot, drop into busybox, and type dmesg |grep -i eth, but that requires a KVM attached to your server, which is awkward to arrange and potentially expensive.

In my case I guessed my Ethernet name (enp4s0) from the PCI address in the kernel log:

root@rescue:/# dmesg |grep eth0
[    1.459268] e1000e 0000:04:00.0 eth0: Intel(R) PRO/1000 Network Connection

Under systemd’s predictable naming scheme the e1000e adapter at PCI address 0000:04:00.0 becomes enp4s0 (PCI bus 4, slot 0), but I may still be wrong about that for your hardware.
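
One way to check without rebooting is to ask udev what name it would assign to eth0. This should work from the rescue shell (shown here) and probably from inside the chroot as well, since /sys is bind-mounted; it assumes the udev in use includes the net_id builtin:

root@rescue ~ # udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null | grep ID_NET_NAME_PATH
ID_NET_NAME_PATH=enp4s0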

Why is this important? Because you want to be able to create a file like the following:

root@rescue:/# echo -en "auto enp4s0\niface enp4s0 inet dhcp\n" >/etc/network/interfaces.d/enp4s0

Now update initramfs or you may be missing important services (like cryptsetup) on boot.

root@rescue:/# update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-4.4.0-93-generic
cryptsetup: WARNING: could not determine root device from /etc/fstab

root@rescue:/# update-grub
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.4.0-93-generic
Found initrd image: /boot/initrd.img-4.4.0-93-generic
done
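
To reassure yourself that dropbear and the cryptroot scripts actually made it into the new initramfs, you can list its contents; a quick check (adjust the kernel version to whatever was generated above):

root@rescue:/# lsinitramfs /boot/initrd.img-4.4.0-93-generic | grep -E 'dropbear|cryptroot'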

Chroot: Dist Upgrade and Reboot

root@rescue:/# apt-get update
root@rescue:/# apt-get dist-upgrade

Escape from chroot.

root@rescue:/# exit
root@rescue ~ # for i in run sys proc dev/pts dev; do echo ==$i==; umount /mnt/rpool/$i; done
root@rescue ~ # umount /mnt/rpool/boot
root@rescue ~ # zfs umount -a
root@rescue ~ # zpool export rpool

root@rescue ~ # shutdown -r now

Miscellaneous

Clean Up on Boot

After a successful boot into Linux I had some /lib/cryptsetup/askpass processes lying around. So I added the following lines to my /etc/rc.local file:

/bin/ps ax |/bin/grep /lib/cryptsetup/askpass |/usr/bin/cut -c1-5 |/usr/bin/xargs /bin/kill
sleep 5
/bin/ps ax |/bin/grep /lib/cryptsetup/askpass |/usr/bin/cut -c1-5 |/usr/bin/xargs /bin/kill -9

Setting NTP

Ubuntu 16.04 uses systemd-timesyncd for time synchronisation.

If desired, Hetzner’s own NTP servers can be used.

Edit the file /etc/systemd/timesyncd.conf and add the lines (if missing):

[Time]
NTP=ntp1.hetzner.de ntp2.hetzner.com ntp3.hetzner.net
FallbackNTP=0.de.pool.ntp.org 1.de.pool.ntp.org 2.de.pool.ntp.org 3.de.pool.ntp.org

Check that the service is running, then restart it so it picks up the new servers:

root@myhost:/# service systemd-timesyncd status
 systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: active (running) since Thu 2017-09-07 03:13:33 UTC; 1h 59min ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 1851 (systemd-timesyn)
   Status: "Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com)."
   CGroup: /system.slice/systemd-timesyncd.service
           └─1851 /lib/systemd/systemd-timesyncd

Sep 07 03:13:33 lalor systemd[1]: Starting Network Time Synchronization...
Sep 07 03:13:33 lalor systemd[1]: Started Network Time Synchronization.
Sep 07 03:14:04 lalor systemd-timesyncd[1851]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).

root@myhost:/# service systemd-timesyncd restart
root@myhost:/# service systemd-timesyncd status
 systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: active (running) since Thu 2017-09-07 05:15:18 UTC; 4s ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 2754 (systemd-timesyn)
   Status: "Synchronized to time server 213.239.239.164:123 (ntp1.hetzner.de)."
   CGroup: /system.slice/systemd-timesyncd.service
           └─2754 /lib/systemd/systemd-timesyncd

Sep 07 05:15:18 lalor systemd[1]: Starting Network Time Synchronization...
Sep 07 05:15:18 lalor systemd[1]: Started Network Time Synchronization.
Sep 07 05:15:18 lalor systemd-timesyncd[2754]: Synchronized to time server 213.239.239.164:123 (ntp1.hetzner.de).

You may also want to change the system timezone to Europe/Berlin (or whatever preference you have):

root@myhost:/# dpkg-reconfigure tzdata
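
A non-interactive alternative, once the system is booted and systemd is running (so not from inside the chroot):

root@myhost:/# timedatectl set-timezone Europe/Berlin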

Recovery in Rescue

More likely than not, something will have gone wrong. To get back into your chroot from Rescue mode, here’s the cheatsheet.

Entering Chroot

root@rescue ~ # apt-get remove -y linux-image-4.12.4 linux-headers-4.12.4
root@rescue ~ # time apt-get install -t jessie-backports zfs-dkms

root@rescue ~ # cryptsetup luksOpen /dev/sda2 crypt1
root@rescue ~ # cryptsetup luksOpen /dev/sdb2 crypt2

root@rescue ~ # zfs unmount -a # unmount all ZFS filesystems
root@rescue ~ # zpool export rpool # in preparation for dummy mount
root@rescue ~ # zpool import -R /mnt/rpool rpool # import to different mount
root@rescue ~ # mount /dev/sda1 /mnt/rpool/boot

root@rescue ~ # mkdir /run/resolvconf
root@rescue ~ # cp /etc/resolv.conf /run/resolvconf/

root@rescue ~ # for i in dev dev/pts proc sys run; do echo ==$i==; mount --bind /$i /mnt/rpool/$i; done
root@rescue ~ # chroot /mnt/rpool /bin/bash --login

And inside the chroot:

root@rescue:/# ln -s /dev/mapper/crypt1 /dev/crypt1
root@rescue:/# ln -s /dev/mapper/crypt2 /dev/crypt2

After Leaving Chroot

root@rescue ~ # for i in run sys proc dev/pts dev; do echo ==$i==; umount /mnt/rpool/$i; done
root@rescue ~ # umount /mnt/rpool/boot
root@rescue ~ # zfs umount -a
root@rescue ~ # zpool export rpool
