Documenting Problems That Were Difficult To Find The Answer To

Category Archives: Linux

Copy Timestamp From One File to Another in Linux

Tested on Ubuntu 16.04.

# get last modified time in seconds past the epoch
export SECS=$(/usr/bin/stat -c "%Y" "${FILE_SRC}")

# set modified time to seconds past the epoch
touch --date=@${SECS} "${FILE_DST}"
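As an aside – if the only goal is to copy a timestamp from one file to another, GNU touch can do it in a single step with its -r (reference file) flag, avoiding the stat round-trip. A quick sketch (src.txt and dst.txt are illustrative names):

```shell
# create a source file with a known modification time, plus a destination file
touch --date="2020-01-01 00:00:00 UTC" src.txt
touch dst.txt

# copy the access and modification times of src.txt onto dst.txt
touch -r src.txt dst.txt

# both files now report the same mtime (seconds past the epoch)
stat -c "%Y" src.txt    # 1577836800
stat -c "%Y" dst.txt    # 1577836800
```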

Audacious Only Plays One Song Then Stops

If Audacious plays a single song and then stops – even though you have a large playlist – the cause may be the following:

  • open the menu by clicking on the tiny icon to the left of the word “AUDACIOUS” at the top left of the player window
  • choose Playback > No Playlist Advance (Ctrl+N) to untick it, if it is ticked

Changing Active Titlebar Background on Greybird Theme in Xubuntu/XFCE

Simply edit the file /usr/share/themes/Greybird/xfwm4/title-1-active.png to contain the background colour you want (it is tiled across the width of the title bar of the active window).

Then go to the menu, select Settings > Window Manager, go to the Style tab, choose a different theme (such as Default), and then select the newly edited Greybird theme again to force a reload.

Configuring RAS Daemon for Lenovo ThinkServer TS140

The RAS daemon (rasdaemon) can be installed on Ubuntu Linux 16.04 using the command:

$ sudo apt-get install rasdaemon

Once this is installed the ThinkServer TS140 requires a custom file to be created in /etc/ras/dimm_labels.d/ with the following file format:

# Vendor: <vendor name>
#   Model: <model name>
#     <label>: <location> ...

… although it seems the old Edac label format is also valid:

# Vendor: <vendor name>
#   Model: <model name>
#     <label>: <mc>.<csrow>.<channel> ...

In order to discover the values required one can run the following commands:

$ for i in /sys/devices/system/edac/mc/mc0/rank*; do \
  echo ==$i==; \
  cat "$i/dimm_location"; echo -ne "\n"; \
  done

csrow 0 channel 0 
csrow 0 channel 1 
csrow 1 channel 0 
csrow 1 channel 1 
csrow 2 channel 0 
csrow 2 channel 1 
csrow 3 channel 0 
csrow 3 channel 1 

The board name and vendor can also be found:

$ cat /sys/devices/virtual/dmi/id/board_vendor
LENOVO
$ cat /sys/devices/virtual/dmi/id/board_name
ThinkServer TS140

Then a file (e.g. “lenovo-thinkserver-ts140.txt”) can be created in /etc/ras/dimm_labels.d/:

Vendor: LENOVO
  Model: ThinkServer TS140
    DIMM1_0: 0.0.0
    DIMM1_1: 0.0.1
    DIMM2_0: 0.1.0
    DIMM2_1: 0.1.1
    DIMM3_0: 0.2.0
    DIMM3_1: 0.2.1
    DIMM4_0: 0.3.0
    DIMM4_1: 0.3.1
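The mapping above is mechanical – DIMM<csrow+1>_<channel> on the left, 0.<csrow>.<channel> on the right – so, as a sanity check, a short shell loop reproduces the whole label block:

```shell
# regenerate the eight TS140 label lines (4 csrows x 2 channels)
labels=$(
  for csrow in 0 1 2 3; do
    for channel in 0 1; do
      printf '    DIMM%d_%d: 0.%d.%d\n' "$((csrow + 1))" "$channel" "$csrow" "$channel"
    done
  done
)
echo "$labels"
```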

Then rasdaemon can be checked and restarted:

$ sudo systemctl status rasdaemon
$ sudo systemctl restart rasdaemon

Then the ras-mc-ctl utility can be run:

$ ras-mc-ctl --layout
          |                      mc0                      |
          |  csrow0   |  csrow1   |  csrow2   |  csrow3   |
channel1: |  4096 MB  |  4096 MB  |  4096 MB  |  4096 MB  |
channel0: |  4096 MB  |  4096 MB  |  4096 MB  |  4096 MB  |

$ ras-mc-ctl --print-labels
LOCATION                            CONFIGURED LABEL     SYSFS CONTENTS      
mc0 csrow 0 channel 0               DIMM1_0              DIMM1_0             
mc0 csrow 0 channel 1               DIMM1_1              DIMM1_1             
mc0 csrow 1 channel 0               DIMM2_0              DIMM2_0             
mc0 csrow 1 channel 1               DIMM2_1              DIMM2_1             
mc0 csrow 2 channel 0               DIMM3_0              DIMM3_0             
mc0 csrow 2 channel 1               DIMM3_1              DIMM3_1             
mc0 csrow 3 channel 0               DIMM4_0              DIMM4_0             
mc0 csrow 3 channel 1               DIMM4_1              DIMM4_1    

The first time you do this the new labels may not be registered, e.g. you see the following:

$ ras-mc-ctl --print-labels
LOCATION                            CONFIGURED LABEL     SYSFS CONTENTS      
mc0 csrow 0 channel 0               DIMM1_0              mc#0csrow#0channel#0
mc0 csrow 0 channel 1               DIMM1_1              mc#0csrow#0channel#1
mc0 csrow 1 channel 0               DIMM2_0              mc#0csrow#1channel#0
mc0 csrow 1 channel 1               DIMM2_1              mc#0csrow#1channel#1
mc0 csrow 2 channel 0               DIMM3_0              mc#0csrow#2channel#0
mc0 csrow 2 channel 1               DIMM3_1              mc#0csrow#2channel#1
mc0 csrow 3 channel 0               DIMM4_0              mc#0csrow#3channel#0
mc0 csrow 3 channel 1               DIMM4_1              mc#0csrow#3channel#1

The “SYSFS CONTENTS” values can be read directly from sysfs:

$ for i in /sys/devices/system/edac/mc/mc0/rank*; \
  do echo "$i/dimm_label: $(cat "$i/dimm_label")"; \
  done
/sys/devices/system/edac/mc/mc0/rank0/dimm_label: mc#0csrow#0channel#0
/sys/devices/system/edac/mc/mc0/rank7/dimm_label: mc#0csrow#3channel#1

So you can register the configured labels by running:

$ sudo ras-mc-ctl --register-labels
$ ras-mc-ctl --print-labels
LOCATION                            CONFIGURED LABEL     SYSFS CONTENTS      
mc0 csrow 0 channel 0               DIMM1_0              DIMM1_0

In Ubuntu the rasdaemon service is defined in /etc/systemd/system/ and contains the line:

ExecStart=/usr/sbin/rasdaemon -f -r

This instructs the daemon to start in foreground mode and to record events to a SQLite3 database.

The SQLite3 database is located at /var/lib/rasdaemon/ras-mc_event.db:

$ sqlite3 /var/lib/rasdaemon/ras-mc_event.db .tables
aer_event     extlog_event  mc_event      mce_record  

$ sqlite3 /var/lib/rasdaemon/ras-mc_event.db ".schema mc_event"
CREATE TABLE mc_event (
  timestamp TEXT,
  err_count INTEGER,
  err_type TEXT,
  err_msg TEXT,
  label TEXT,
  top_layer INTEGER,
  middle_layer INTEGER,
  lower_layer INTEGER,
  address INTEGER,
  grain INTEGER,
  syndrome INTEGER,
  driver_detail TEXT
);
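Being a plain SQLite database, the event log can be summarised with ordinary SQL. The sketch below builds a throwaway database with the same mc_event schema and a hypothetical row, then groups error counts per DIMM label – on a real system, run the same SELECT against /var/lib/rasdaemon/ras-mc_event.db:

```shell
# build a scratch database with the mc_event schema shown above
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE mc_event (timestamp TEXT, err_count INTEGER,
  err_type TEXT, err_msg TEXT, label TEXT, top_layer INTEGER,
  middle_layer INTEGER, lower_layer INTEGER, address INTEGER,
  grain INTEGER, syndrome INTEGER, driver_detail TEXT);"

# a hypothetical corrected-error event against DIMM1_0
sqlite3 "$DB" "INSERT INTO mc_event (timestamp, err_count, err_type, label)
  VALUES ('2019-08-26 07:00:00', 2, 'Corrected', 'DIMM1_0');"

# total errors per DIMM label and error type
sqlite3 "$DB" "SELECT label, err_type, SUM(err_count)
  FROM mc_event GROUP BY label, err_type;"
# prints: DIMM1_0|Corrected|2
```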

Add LetsEncrypt to Reverse Proxy in Ubuntu Xenial

A rough picture of what is being attempted here is as follows (although I haven’t split out SSL and non-SSL virtual hosts here):

              ____________          ______________
             |            |        |              |
Internet --> | Rev Proxy  | - / -> |   Backend    |
             |            |        |              |
             |____________|        |______________|
             | port 9876  |
             |            |
             |  certbot   |
             |____________|

First the certbot utility needs to be installed. To do this add the certbot repository to the /etc/apt/sources.list file:

deb xenial main

Attempt to update the apt package lists (the first attempt will fail signature verification):

$ sudo apt-get update
Get:5 xenial InRelease [24.3 kB]
Ign:5 xenial InRelease         
Get:10 xenial/main amd64 Packages [18.6 kB]
Get:11 xenial/main Translation-en [10.9 kB]
Fetched 3,495 kB in 1s (2,800 kB/s)                                
Reading package lists... Done
W: GPG error: xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8C47BE8E75BCA694
W: The repository ' xenial InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.

We need to import the key to be able to install from this repository.

$ sudo apt-key adv --keyserver --recv-keys 8C47BE8E75BCA694
Executing: /tmp/tmp.X7oOFSoctO/ --keyserver
gpg: requesting key 75BCA694 from hkp server
gpg: key 75BCA694: public key "Launchpad PPA for certbot" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

Next, install certbot.

$ sudo apt-get install certbot
0 to upgrade, 31 to newly install, 0 to remove and 3 not to upgrade.
Need to get 2,142 kB of archives.
After this operation, 11.4 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Now, update the Apache site configuration to proxy-pass (reverse proxy) certbot validation requests to a port that certbot will listen to during certificate requests. For example, I put the following into /etc/apache2/sites-enabled/000-default.conf:

  ProxyPass "/.well-known/" ""
  ProxyPassReverse "/.well-known/" ""

If you haven’t already, you’ll need to enable the proxy modules in Apache:

$ cd /etc/apache2/mods-enabled
$ sudo ln -s ../mods-available/proxy.* .
$ sudo ln -s ../mods-available/proxy_http.* .
$ sudo service apache2 restart

Now we can do a dry-run of certbot to test everything is functioning:

$ sudo certbot certonly \
    --preferred-challenges http-01 --http-01-port=9876 \
    --no-verify-ssl --dry-run \
    --domain \

Saving debug log to /var/log/letsencrypt/letsencrypt.log

How would you like to authenticate with the ACME CA?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: Spin up a temporary webserver (standalone)
2: Place files in webroot directory (webroot)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1

 - The dry run was successful.
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.

Now create the certificate (do the same but without a dry run):

$ sudo certbot certonly \
    --preferred-challenges http-01 --http-01-port=9876 \
    --no-verify-ssl \
    --domain \

 - Congratulations! Your certificate and chain have been saved at:
   Your key file has been saved at:
   Your cert will expire on 2020-03-31. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:
   Donating to EFF:          

Next we’ve got to add the created certificate to our Apache site configuration for the SSL virtual server; in my case this is the file /etc/apache2/sites-enabled/001-default-ssl.conf:

    SSLCertificateFile /etc/letsencrypt/live/<domain>/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/<domain>/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/<domain>/chain.pem

As for the reverse proxy part – you can reverse proxy to an SSL website internally using something like the following in your SSL virtual server configuration (within the <VirtualHost> tags):

    # disable SSL certificate checks for back-end server
    SSLProxyEngine on
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off
    SSLProxyVerify none
    SSLProxyCheckPeerCN off

    # set X-Forwarded-For header to track actual IP of incoming request
    ProxyPreserveHost On

    ProxyPass "/" ""
    ProxyPassReverse "/" ""

On the backend server you’ll probably want to log the actual IP of the external request, rather than the IP of the reverse proxy.

To do this, add the mod_remoteip module to your enabled modules:

$ cd /etc/apache2/mods-enabled
$ sudo ln -s ../mods-available/remoteip.* .
$ sudo service apache2 restart

Configure your virtual server to use the remoteip module as well as update the custom logging configuration:

  # note %h is usually present in LogFormat for host, but use %a instead for mod_remoteip
  LogFormat "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined_forwarded

  # use the address from X-Forwarded-For in access log instead of the reverse proxy
  CustomLog /var/log/apache2/access_ssl.log combined_forwarded

  # use the IP address from the X-Forwarded-For from requests from reverse proxy
  RemoteIPHeader X-Forwarded-For
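One loose end: Let's Encrypt certificates expire after 90 days, so renewal needs scheduling. certbot records the options used at issuance under /etc/letsencrypt/renewal/, so a plain "certbot renew" normally repeats the same challenge setup (including the --http-01-port used above). A sketch of a root crontab entry – the schedule is illustrative; twice daily is what Let's Encrypt suggests, and renewal is a no-op until a certificate nears expiry:

```
# attempt renewal twice a day; only renews certificates close to expiry
0 3,15 * * * certbot renew --quiet
```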

USB Disk Geometry Problems

After plugging in my external hard disk attached to a SATA-to-USB interface I got the following messages in my /var/log/syslog:

Nov  4 11:52:48 myserver kernel: [5764041.788001] Buffer I/O error on dev sdg, logical block 1953498352, async page read

At first I had no idea what this meant, so I performed a smartctl -H -a /dev/sdg check which came back clean (no issues with the disk).

So I proceeded to try to import the disk into ZFS, but the single-disk pool displayed as faulted:

me@myserver:~$ sudo zpool import
   pool: mypool
     id: 12345678912345678912
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.

        mypool      FAULTED  corrupted data
          sdg       FAULTED  corrupted data

Then I proceeded to check the ZFS labels on the disk:

me@myserver:~$ sudo zdb -l /dev/sdg
    version: 5000
failed to read label 2
failed to read label 3

Ah, I’d come across this before. The cause? Trying to read my hard drive with a different SATA-to-USB adaptor than the one I originally formatted the disk with. It seems that sometimes different brands of SATA-to-USB adaptors can see different sizes of disk.

Specifically: I was using the SATA-to-USB circuit that I had pulled out of an external Western Digital disk drive. This interface doesn’t appear to check the actual size of the hard disk that is plugged in – the reported size seems to be hard coded.

From dmesg:

me@myserver:~$ dmesg |grep "logical blocks:"
[5764041.758336] sd 11:0:0:0: [sdg] 15627986944 512-byte logical blocks: (8.00 TB/7.28 TiB)

But I’d actually plugged in a 4TB drive, not 8TB. When I tried with a separate SATA-to-USB adaptor it gave the correct number:

me@myserver:~$ dmesg |grep "logical blocks:"
[5764385.029248] sd 12:0:0:0: [sdg] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)

So maybe you have a drive with problems. But maybe you are just using an interface that isn’t correctly recognising the actual size of the drive you’ve plugged in.
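The mismatch is easy to verify by hand: multiply the kernel's logical-block count by the 512-byte block size and compare against the capacity printed on the drive label. A quick sketch using the figures from the second (correct) adaptor:

```shell
# logical blocks reported in dmesg by the correct adaptor
BLOCKS=7814037168
BLOCK_SIZE=512

# total capacity in bytes, and in decimal terabytes (1 TB = 10^12 bytes)
BYTES=$((BLOCKS * BLOCK_SIZE))
echo "$BYTES bytes"                   # 4000787030016 bytes
echo "$((BYTES / 1000000000000)) TB"  # 4 TB
```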

Ubuntu Xenial Booting and Seeing Cryptsetup: LVM Is Not Available

I was in the process of replacing a hard drive in my root pool (rpool) ZFS zpool. I had taken out one of the hard drives my system relied upon at boot – a drive for which cryptsetup expected me to type a password during startup.

For several minutes I saw the following messages during boot (and finally a BusyBox shell):

cryptsetup: lvm is not available
cryptsetup: lvm is not available
cryptsetup: lvm is not available
...
  ALERT! /dev/disk/by-uuid/... does not exist,
        Check cryptopts=source= bootarg: cat /proc/cmdline
        or missing modules, devices: cat /proc/modules; ls /dev
-r Dropping to a shell. Will skip /dev/disk/by-uuid/... if you can't fix.
/scripts/panic/plymouth: line 18: /bin/plymouth: not found

BusyBox v1.22.1 (Ubuntu 1:1.22.0-15ubuntu1.4) built-in shell (ash)
Enter 'help' for a list of built-in commands.


To boot into my rpool I did the following:

(initramfs) # figure out the existing device to decrypt
(initramfs) cat /conf/conf.d/cryptroot

(initramfs) # find out what device has the UUID I'm looking for
(initramfs) blkid
/dev/sda: UUID="..." TYPE="crypto_LUKS"
/dev/sdb: UUID="..." TYPE="crypto_LUKS"

(initramfs) # open the encrypted disk that should still be working
(initramfs) cryptsetup luksOpen /dev/sdb crypt2
Enter passphrase for /dev/sdb: ********

(initramfs) # enable ZFS, and import the root rpool
(initramfs) /sbin/modprobe zfs
(initramfs) zpool import
  pool: rpool
    id: ...
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
       devices and try again.
       rpool      UNAVAIL  missing device
         mirror-0 DEGRADED
           ...    UNAVAIL
           crypt2 ONLINE

(initramfs) zpool import -f -R / -m -N rpool
(initramfs) exit

At this point I was prompted to unlock my other disks as per the usual boot sequence, and the system booted into a degraded root zpool successfully.

Using Dnsmasq as Caching Nameserver on Ubuntu Xenial

Setting up dnsmasq as a local caching nameserver on Ubuntu Xenial (16.04.6 LTS) can speed up your Internet experience: by default, Linux queries a nameserver every time a domain name is looked up, paying the round-trip time to the configured nameserver each time, whereas a locally cached response returns almost immediately.

First, install dnsmasq:

$ sudo apt-get install dnsmasq
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  dns-root-data dnsmasq-base libnetfilter-conntrack3
The following NEW packages will be installed:
  dns-root-data dnsmasq dnsmasq-base libnetfilter-conntrack3
0 upgraded, 4 newly installed, 0 to remove and 7 not upgraded.
Need to get 353 kB of archives.
After this operation, 972 kB of additional disk space will be used.
Do you want to continue? [Y/n] y

You may also want the lookup tool “dig” to test the dnsmasq install:

$ sudo apt-get install dnsutils
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  bind9-host libbind9-140 libdns162 libisc160 libisccc140 libisccfg140
Suggested packages:
The following NEW packages will be installed:
  bind9-host dnsutils libbind9-140 libdns162 libisc160 libisccc140
  libisccfg140 liblwres141
0 upgraded, 8 newly installed, 0 to remove and 7 not upgraded.
Need to get 1,338 kB of archives.
After this operation, 6,059 kB of additional disk space will be used.
Do you want to continue? [Y/n] y

Once dnsmasq has been installed create a custom cache configuration in the /etc/dnsmasq.d/ subdirectory:


# Listen on the given IP address(es).

# Listen on <port> instead of the standard DNS port (53).

# Force dnsmasq to really bind only the interfaces it is listening on.

# Log the results of DNS queries handled by dnsmasq.
# Enable a full cache dump on receipt of SIGUSR1.
# If the argument "extra" is supplied, ie --log-queries=extra then the
# log has extra information at the start of each line. This consists of
# a serial number which ties together the log lines associated with an
# individual query, and the IP address of the requestor.

# If the facility given contains at least one '/' character, it is
# taken to be a filename, and dnsmasq logs to the given file, instead
# of syslog. If the facility is '-' then dnsmasq logs to stderr.

# Tells dnsmasq to never forward A or AAAA queries for plain names,
# without dots or domain parts, to upstream nameservers. If the name is
# not known from /etc/hosts or DHCP then a "not found" answer is
# returned.

# All reverse lookups for private IP ranges (ie 192.168.x.x, etc) which
# are not found in /etc/hosts or the DHCP leases file are answered with
# "no such domain" rather than being forwarded upstream.

# Don't read the hostnames in /etc/hosts.

# Set the maximum number of concurrent DNS queries.

# Set the size of dnsmasq's cache.
# Setting the cache size to zero disables caching.

# Disable negative caching.

# This option gives a default value for time-to-live (in seconds) which
# dnsmasq uses to cache negative replies even in the absence of an SOA
# record.

# Read the IP addresses of the upstream nameservers from <file>,
# instead of /etc/resolv.conf.

# Don't poll /etc/resolv.conf for changes.

# Specify time-to-live for information from /etc/hosts.

# Set a maximum TTL value for entries in the cache.

# Setting this flag forces dnsmasq to try each query with each server
# strictly in the order they appear in /etc/resolv.conf

Next, create a custom resolv.conf file for dnsmasq to use (the dnsmasq log further below shows it being read from /etc/resolv.dnsmasq):

# Google secondary DNS
nameserver 8.8.4.4

# Cloudflare secondary DNS
nameserver 1.0.0.1

We’re not finished! If we want dnsmasq to use our own resolv.conf file then we have to modify the defaults file for dnsmasq, /etc/default/dnsmasq, so that resolvconf-managed nameservers are ignored:

IGNORE_RESOLVCONF=yes
Alright, now we’re ready to start dnsmasq. Well it might already be running:

$ sudo systemctl status dnsmasq
* dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server
   Loaded: loaded (/lib/systemd/system/dnsmasq.service; enabled; vendor preset: enabled)
  Drop-In: /run/systemd/generator/dnsmasq.service.d
           `-50-dnsmasq-$named.conf, 50-insserv.conf-$named.conf
   Active: active (running) since Mon 2019-08-26 06:13:09 UTC; 1h 4min ago
  Process: 7170 ExecStop=/etc/init.d/dnsmasq systemd-stop-resolvconf (code=exited, status=0/SUCCESS)
  Process: 7224 ExecStartPost=/etc/init.d/dnsmasq systemd-start-resolvconf (code=exited, status=0/SUCCESS)
  Process: 7212 ExecStart=/etc/init.d/dnsmasq systemd-exec (code=exited, status=0/SUCCESS)
  Process: 7209 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCCESS)
 Main PID: 7223 (dnsmasq)
   CGroup: /system.slice/dnsmasq.service
           `-7223 /usr/sbin/dnsmasq -x /var/run/dnsmasq/ -u dnsmasq -7 /etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new --local-service

Aug 26 06:13:08 myhost systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
Aug 26 06:13:08 myhost dnsmasq[7209]: dnsmasq: syntax check OK.
Aug 26 06:13:09 myhost systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.

Either way, start or restart the dnsmasq daemon:

$ sudo systemctl stop dnsmasq
$ sudo systemctl start dnsmasq

We can view the dnsmasq log:

$ cat /var/log/dnsmasq.log
Aug 26 07:21:10 dnsmasq[8118]: started, version 2.75 cachesize 250
Aug 26 07:21:10 dnsmasq[8118]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
Aug 26 07:21:10 dnsmasq[8118]: reading /etc/resolv.dnsmasq
Aug 26 07:21:10 dnsmasq[8118]: using nameserver
Aug 26 07:21:10 dnsmasq[8118]: using nameserver
Aug 26 07:21:10 dnsmasq[8118]: read /etc/hosts - 4 addresses

How about testing with a lookup?

$ dig @localhost -p 53
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost -p 53
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29520
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1452

;; QUESTION SECTION:
;             IN      A

;; ANSWER SECTION:
      9864    IN      CNAME
      266     IN      A

;; Query time: 7 msec
;; SERVER: ::1#53(::1)
;; WHEN: Mon Aug 26 07:24:27 UTC 2019
;; MSG SIZE  rcvd: 91

$ tail /var/log/dnsmasq.log
Aug 26 07:24:27 dnsmasq[8118]: query[A] from ::1
Aug 26 07:24:27 dnsmasq[8118]: forwarded to
Aug 26 07:24:27 dnsmasq[8118]: forwarded to
Aug 26 07:24:27 dnsmasq[8118]: reply is 
Aug 26 07:24:27 dnsmasq[8118]: reply is

Looks to be working. Interestingly, by default dnsmasq initially queries all the name servers simultaneously to find which responds quickest, then tends to query just that one for a while, until it eventually tries all the name servers again.

A few more things to finish up. Let’s tell Linux to use localhost to do DNS lookups in future:

# The primary network interface
auto ens5
iface ens5 inet static
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 127.0.0.1

And to make the change by hand until the next reboot you can edit /etc/resolv.conf directly to use 127.0.0.1 (localhost) as the only nameserver.

You may want to also add a logrotate configuration:

/var/log/dnsmasq.log {
        size 20M
        rotate 50
        create 644 dnsmasq root
        prerotate
                if systemctl status dnsmasq >/dev/null; then
                        systemctl stop dnsmasq >/dev/null;
                        touch /tmp/logrotate-dnsmasq-stopped.tmp;
                fi
        endscript
        postrotate
                if [ -e /tmp/logrotate-dnsmasq-stopped.tmp ]; then
                        rm /tmp/logrotate-dnsmasq-stopped.tmp;
                        systemctl start dnsmasq >/dev/null;
                fi
        endscript
}

Getting NVidia Working with LXC Container and Steam Game Client

I’ve written before about creating an LXC container with X11 and sound support.

The process is much the same for the Steam game client (which requires GLX). But I’ll go through the entire process here, along with the Steam-specific actions required.

This is written specifically for Ubuntu Linux 16.04 Xenial. The LXC container created, including libraries and Steam client, consumes around 2.3GB of storage. Download the game Cities Skylines with Mass Transit and that blows out to 9.4GB.

It is assumed that you installed your NVidia drivers on your host (not in an LXC container) by running the driver installer (or similar) as root, outside of any X session, having downloaded it from the NVidia Unix driver archive.

Firstly create a LXC container with the name “mysteam”.

$ sudo lxc-create -n mysteam -t ubuntu -- -r xenial
# The default user is 'ubuntu' with password 'ubuntu'!
# Use the 'sudo' command to run tasks as root in the container.
$ sudo lxc-start -d -n mysteam
$ sudo lxc-ls -f
mysteam       RUNNING 0         - -

Then we need to install a variety of packages. So enter a console session (remember that you will need to press ctrl-A, Q to exit the console when finished):

$ sudo lxc-console -n mysteam
mysteam login: ubuntu
Password: ubuntu

# required for X11 forwarding over SSH
ubuntu@mysteam:~$ sudo apt-get install xauth

# optional install for xclock application (for testing)
ubuntu@mysteam:~$ sudo apt-get install x11-apps

# exit console
ctrl-A, then q

Next to install audio. First confirm you have pulseaudio running on your (non-LXC) host:

$ xprop -root PULSE_SERVER
PULSE_SERVER(STRING) = "{0e1da16b3f5b8cc7f23766efa2f30673}unix:/run/user/1000/pulse/native tcp:localhost:4713 tcp6:localhost:4713"

Also, on your host, run paprefs, select the “Network Server” (2nd) tab, and make sure the first option “Enable network access to local sound devices” is ticked (you will have to do this every time you log into your X Windows). This will allow your container to send audio over a SSH session (more on that later).

Now – we pick a random port number that isn’t being used, say, 54321. In future when we SSH to the container we will have to tell the container that a connection to 54321 in the container should result in a connection to whatever the output of the xprop command was earlier (e.g. “localhost:4713”). That’s in addition to supporting X protocol over SSH. So you would use a SSH command like:

$ ssh -X -R 54321:localhost:4713 ubuntu@
ubuntu@'s password: ubuntu

# add the following line to /home/ubuntu/.bashrc
export PULSE_SERVER="tcp:localhost:54321"

# required for audio from container
ubuntu@mysteam:~$ sudo apt-get install pulseaudio

# logout from SSH session to container
ubuntu@mysteam:~$ exit

We should have working X11 forwarding and audio.
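Since that SSH command with its forwarding flags needs to be typed every time, it can instead be captured once in ~/.ssh/config, making a plain “ssh mysteam” set up both the X11 and audio tunnels. A sketch (the host alias and container IP address are placeholders):

```
Host mysteam
    # the container's IP address (placeholder)
    HostName 10.0.3.2
    User ubuntu
    ForwardX11 yes
    RemoteForward 54321 localhost:4713
```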

Now for the tricky part – NVidia and GLX support!

Edit the LXC container configuration, /var/lib/lxc/mysteam/config, to pass through the various devices used by the NVidia driver from the host to the container (thanks to this article for the information). Add the following (to the bottom of the configuration file, or anywhere):

# GPU Passthrough config
lxc.cgroup.devices.allow = c 195:* rwm
lxc.cgroup.devices.allow = c 243:* rwm
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file

Copy your NVidia driver file into the container, and restart the container to pick up the configuration changes:

$ sudo cp ~/Downloads/ /var/lib/lxc/mysteam/rootfs/home/ubuntu/
$ sudo lxc-stop -n mysteam
$ sudo lxc-start -d -n mysteam

SSH back into the container and we will install the driver and GLX support:

$ ssh -X -R 54321:localhost:4713 ubuntu@
ubuntu@'s password: ubuntu

# Add i386 support; we'll need this when installing the NVidia driver, or else Steam will complain with a "glXChooseVisual failed" error
ubuntu@mysteam:~$ sudo dpkg --add-architecture i386
ubuntu@mysteam:~$ sudo apt-get update
ubuntu@mysteam:~$ sudo apt-get install libc6:i386

# Add pkg-config to minimise warnings during NVidia driver installation
ubuntu@mysteam:~$ sudo apt-get install pkg-config

# Set executable permissions for NVidia driver and execute
# - ignore warning about not installing a kernel module (we don't want it anyway)
# - ignore warning about being forced to guess X library path (we don't care)
# - select YES to install 32-bit compatibility libraries, if this option isn't presented then go back and install libc6:i386 package (Steam client will throw "glXChooseVisual" error if this step is missed)
# - ignore request to get your X config automatically updated (container is not running X client)
ubuntu@mysteam:~$ sudo chmod 755 /home/ubuntu/
ubuntu@mysteam:~$ sudo /home/ubuntu/ --no-kernel-module

# Test to see if NVidia card is found (only works if NVidia driver on host and container are absolutely identical)
ubuntu@mysteam:~$ nvidia-smi
Sun May 26 08:06:54 2019       
| NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  GeForce GTX 750 Ti  Off  | 00000000:01:00.0  On |                  N/A |
| 30%   38C    P8     1W /  38W |    173MiB /  1993MiB |      0%      Default |
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |

# install GLX support
ubuntu@mysteam:~$ sudo apt-get install mesa-utils

# test whether GLX support is working
ubuntu@mysteam:~$ glxinfo
name of display: localhost:10.0
display: localhost:10  screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4

# if the following works you will have a moving image of gears on your display
ubuntu@mysteam:~$ glxgears

Honestly, that’s the hardest part done! Now all that is left is to install the Steam client.

Either download the Debian install package from the Steam website, or:

ubuntu@mysteam:~$ sudo apt-get install wget

# get the Debian steam client package, the URL comes from
ubuntu@mysteam:~$ wget

# install dependencies FIRST
# - if you forget this and run the steam install first and get dependency errors then run "apt-get remove steam-launcher" and then retry this command
ubuntu@mysteam:~$ sudo apt-get install python curl python-apt xterm zenity

# install package
ubuntu@mysteam:~$ sudo dpkg -i steam.deb

# run the Steam client! (ignore warnings)
ubuntu@mysteam:~$ steam &
Setting up Steam content in /home/ubuntu/.local/share/Steam
Steam needs to install these additional packages:
        libgl1-mesa-dri:i386, libgl1-mesa-glx:i386
[sudo] password for ubuntu: ubuntu
Updating Steam...
Downloading update (132,000 of 284,881 KB)...

And that’s it! You should have a running Steam client in Ubuntu Linux.

I haven’t figured out how to get rid of all running processes when I’m finished with Steam – so I just shut down my container.

Remember, if you get the following from the Steam client, then you’ll need to reinstall NVidia driver with 32-bit compatibility libraries:

[2019-05-24 10:45:16] Verifying installation...
[2019-05-24 10:45:16] Performing checksum verification of executable files
[2019-05-24 10:45:18] Verification complete
glXChooseVisual failed
glXChooseVisual failedMain.cpp (332) : Assertion Failed: Fatal Error: glXChooseVisual failed
Main.cpp (332) : Assertion Failed: Fatal Error: glXChooseVisual failed

Listing Drives Opened with CryptSetup

So I’ve opened a drive using cryptsetup, e.g.:

$ sudo cryptsetup luksOpen /dev/sdf mycrypt
Enter passphrase for /dev/sdf: hunter2

Now I want to know what encrypted drives I have opened so I can close them. To do this, run the following command:

$ sudo dmsetup ls
mycrypt (252:0)

I can then use this information to close the mapping:

$ sudo cryptsetup luksClose mycrypt