newspaint

Documenting Problems That Were Difficult To Find The Answer To

Bunnings 12V Garden Spotlight With LED Globe

Bunnings sell an “HPM” brand 12V 5W garden spotlight with a halogen bulb for $5.80 at the time of writing this article.

Bunnings HPM 12V 5W garden light

I bought several of these along with an “HPM” brand 12V 60W transformer. It should be noted that the transformer outputs AC (alternating current), not DC (direct current).

I wanted to replace the halogen bulb with an LED bulb, so I bought a 2-pack from Coles: 12V 6W 500 lumen bulbs of the GU5.3 form factor.

Coles GU5.3 12V 6W 500 lumen bulb 2-pack

First I had to pry off the front clear plastic pane – I used a knife to lever against the black case on all sides until the cover came loose and could be pulled off.

Front clear cover pried off

Next I pulled the halogen bulb from the socket.

Halogen bulb removed from GU5.3 socket

Then I prepared to insert the Coles LED bulb into the socket.

Coles GU5.3 12V bulb ready to be inserted into socket

After inserting the LED bulb into the socket I drilled some holes underneath (not shown here) to improve ventilation, then snapped the clear plastic cover back onto the front.

Clear plastic cover snapped back into place

The light emitted is significantly stronger than that of the original halogen bulb. It is worth considering, however, the risk of operating an LED bulb in an enclosed space, which is generally not recommended.

Dovecot v2.3.2.1 and Solr v7.4.0

I found the instructions for getting Solr full-text searching (FTS) working with Dovecot rather difficult to follow.

I started off by downloading the latest build of Dovecot (v2.3.2.1 as of this article) because the Ubuntu build of dovecot (v2.2.2) does not have the Solr plugin compiled in.

After extracting the dovecot sources I ran the following commands to create a build:

$ sudo apt-get install clang-6.0 libmysqlclient-dev libexpat1-dev libssl-dev libsqlite3-dev

$ ./configure --prefix=/opt/dovecot-2.3.2.1 --with-solr --with-mysql --with-ssl=openssl --with-sqlite
$ nice make
$ sudo make install

For Solr I had to install a Java runtime:

$ sudo apt-get install openjdk-9-jre

After installing Solr and getting it running I ran the following command to create a core specifically for dovecot:

$ sudo -u solr -- bin/solr create_core -c dovecot

I had installed Solr in /opt/solr, so the next thing I did was delete /opt/solr/server/solr/dovecot/conf/managed-schema, copy the dovecot-2.3.2.1/doc/solr-schema.xml file from the dovecot source to /opt/solr/server/solr/dovecot/conf/schema.xml, and change its owner to solr:solr.
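
In command form that amounts to something like the following (a sketch, assuming Solr is installed in /opt/solr and the extracted dovecot source tree sits in the current directory):

$ sudo rm /opt/solr/server/solr/dovecot/conf/managed-schema
$ sudo cp dovecot-2.3.2.1/doc/solr-schema.xml /opt/solr/server/solr/dovecot/conf/schema.xml
$ sudo chown solr:solr /opt/solr/server/solr/dovecot/conf/schema.xml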

Then I had to force a reload of the Solr core, but there were problems.

The following substitutions were necessary in managed-schema (which is what the schema.xml file gets converted to on a reload of the core in Solr):

  • s/"text"/"text_general"/
  • s/"boolean"/"booleans"/
  • s/"plong"/"plongs"/
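
If you want to script those substitutions, a sed one-liner along the following lines should do it (a sketch only – the path assumes the dovecot core created above), followed by a reload of the core through the Solr cores admin API:

$ sudo sed -i 's/"text"/"text_general"/g; s/"boolean"/"booleans"/g; s/"plong"/"plongs"/g' /opt/solr/server/solr/dovecot/conf/managed-schema
$ curl 'http://127.0.0.1:8983/solr/admin/cores?action=RELOAD&core=dovecot'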

Then I had to comment out the following blocks in solrconfig.xml:

<!--
    <lst name="typeMapping">
      <str name="valueClass">java.util.Date</str>
      <str name="fieldType">pdates</str>
    </lst>
-->

<!--
    <lst name="typeMapping">
      <str name="valueClass">java.lang.Number</str>
      <str name="fieldType">pdoubles</str>
    </lst>
-->

I updated my /etc/dovecot/dovecot.conf:

mail_plugins = $mail_plugins fts fts_solr zlib

protocol imap {
  mail_plugins = $mail_plugins imap_zlib
}

plugin {
  fts = solr
  fts_solr = url=http://127.0.0.1:8983/solr/dovecot/ break-imap-search
  fts_autoindex=yes
  fts_autoindex_max_recent_msgs=5000
  fts_index_timeout=120
}
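
After restarting dovecot you can exercise the fts plugin with a doveadm text search (the user name here is only an example) – if Solr is misconfigured the errors will show up in the dovecot log:

$ sudo /opt/dovecot-2.3.2.1/bin/doveadm -c /etc/dovecot/dovecot.conf search -u someuser mailbox INBOX text "hello"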

Finally I set up some cron jobs to ensure that Solr commits were conducted on a regular basis, an “optimize” was run every so often, and a re-index was done every so often:

*/5 * * * * ubuntu wget -O - 'http://127.0.0.1:8983/solr/dovecot/update?commit=true'
43 1,13 * * * ubuntu wget -O - 'http://127.0.0.1:8983/solr/dovecot/update?optimize=true'

37 * * * * root /opt/dovecot-2.3.2.1/bin/doveadm -c /etc/dovecot/dovecot.conf -v index -u "*" -n 20000 "*"

Hairpin for LXC Containers Using IPTables

Only recently, when getting involved with Kubernetes, did I discover the term “hairpinning”. It describes something I had always wanted to do but did not know how.

Let’s say you have a host that has two LXC containers running on it, one of those LXC containers is your e-mail server, the other is a web server:

Example Network That Needs Hairpinning

You might have an IPTables configuration file like the following (if you’re using Ubuntu and have the netfilter-persistent and iptables-persistent packages installed):

###############################################################################
# iptables IPv4 rules for reload on start-up
#
# To Reload:   
#   service netfilter-persistent restart
#
# To Test:
#   iptables-restore -t </etc/iptables/rules.v4
###############################################################################

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]

# predefine chains
:nat_incoming_tcp - [0:0]

# redirect incoming TCP traffic to chain to NAT to LXC container
-A PREROUTING -i eth+ -p tcp -j nat_incoming_tcp

# NAT for LXC container "email" at 10.0.5.14
-A nat_incoming_tcp -p tcp --dport 25 -j DNAT --to-destination 10.0.5.14:25 -m comment --comment "SMTP"

# NAT for LXC container "web" at 10.0.5.15
-A nat_incoming_tcp -p tcp --dport 80 -j DNAT --to-destination 10.0.5.15:80 -m comment --comment "HTTP"

COMMIT

So far so good – if traffic from the Internet hits your host on 203.0.113.44 port 25 it will be DNAT’d (destination NAT’d) to the email LXC container. And if traffic from the Internet hits your host on 203.0.113.44 port 80 it will be DNAT’d to the web container.

Now – what happens if your web server wants to send an e-mail to yourself? You could program your web server to send all e-mails to 10.0.5.14, the IP address of the LXC container.

However what if your web server wants to send you an e-mail at your external Internet IP address (203.0.113.44)? It might want to do this because it automatically looked up your domain name and was told to contact your external IP address. It’s a problem. If your container tried to contact your external IP address it would send the packet out the lxcbr0 interface (default gateway) and then it might be sent out to the Internet by the server’s default gateway (eth0) – but it would more likely be gobbled as a Martian packet.

The answer is “hairpinning”. The following diagram illustrates the packet flow from a LXC container to another via the external IP address:

Using Hairpin to DNAT Internal Packet to External IP

By adding some IPTables rules to the NAT table we can ensure that not only are packets destined for the external IP address NAT’d but they are also masqueraded so that replies follow a healthy path. Note that masquerading can only occur at the POSTROUTING step – so we have to mark the packets coming in from lxcbr0 destined for the external IP address in the PREROUTING step.

You might ask – why do hairpinned packets need to be masqueraded as they go back through the lxcbr0 interface? Consider the “from” address of a packet that gets hairpinned. A packet from the web server would have a “from” address of 10.0.5.15 and a “to” address of 203.0.113.44 initially. After the hairpin the packet would still have a “from” address of 10.0.5.15 and a “to” address of 10.0.5.14. The mail server replies “to” 10.0.5.15 “from” 10.0.5.14. This packet correctly returns to the web server – but the web server doesn’t know anything about a packet “from” 10.0.5.14 – it is expecting to receive a reply from the external IP address of 203.0.113.44! Thus we need to masquerade and force the reply to return to the lxcbr0 interface so the host can use connection tracking to rewrite the addresses and return the reply back from where it came.

# Hairpin
# http://ipset.netfilter.org/iptables-extensions.man.html
-A PREROUTING -i lxcbr+ -p tcp -d 203.0.113.44 -j MARK --set-mark 0x200/0x200
-A PREROUTING -i lxcbr+ -p tcp -d 203.0.113.44 -j nat_incoming_tcp
-A POSTROUTING -o lxcbr+ -m mark --mark 0x200 -j MASQUERADE

Note that because we had a custom chain (“nat_incoming_tcp”) to DNAT packets destined for web or email from the Internet – we can re-use this exact chain for traffic coming in from the LXC bridge interface (lxcbr0) as well.

The rules all together look like:

###############################################################################
# iptables IPv4 rules for reload on start-up
#
# To Reload:   
#   service netfilter-persistent restart
#
# To Test:
#   iptables-restore -t </etc/iptables/rules.v4
###############################################################################

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]

# predefine chains
:nat_incoming_tcp - [0:0]

# redirect Internet TCP traffic to chain to NAT to appropriate LXC container
-A PREROUTING -i eth+ -p tcp -j nat_incoming_tcp

##################
# Hairpin from LXC
##################
# http://ipset.netfilter.org/iptables-extensions.man.html
-A PREROUTING -i lxcbr+ -p tcp -d 203.0.113.44 -j MARK --set-mark 0x200/0x200
-A PREROUTING -i lxcbr+ -p tcp -d 203.0.113.44 -j nat_incoming_tcp
-A POSTROUTING -o lxcbr+ -m mark --mark 0x200 -j MASQUERADE

###############
# LXC NAT rules
###############
# NAT for LXC container "email" at 10.0.5.14
-A nat_incoming_tcp -p tcp --dport 25 -j DNAT --to-destination 10.0.5.14:25 -m comment --comment "SMTP"

# NAT for LXC container "web" at 10.0.5.15
-A nat_incoming_tcp -p tcp --dport 80 -j DNAT --to-destination 10.0.5.15:80 -m comment --comment "HTTP"

COMMIT
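
A quick way to check that the hairpin works is to connect to the external address from inside one of the containers, for example (assuming the web container is named "web" and has netcat installed):

me@host:~$ sudo lxc-attach -n web -- nc -vz 203.0.113.44 25

If the DNAT and masquerade rules are doing their job the connection should succeed.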

Android ICS Calendar Appointment Import – Unable to Launch Event

This error, “Unable to launch event”, is often encountered when attempting to open an .ics calendar appointment file from Firefox (e.g. from the “Your Downloads” screen).

Typically you would select an .ics file from the list of files you’ve downloaded:

Select .ics file to open and choose to “Open with Calendar”

and this would give you the option to “Open with Calendar” to which you might choose “JUST ONCE”.

This would result in the error “Unable to launch event” as per the following screenshot:

Firefox warns “Unable to launch event”

The Workaround

You can still import your .ics file into Calendar!

Go to your applications list, and choose “Files” (on LineageOS), “My Files” (on Samsung), or whatever your generic file explorer is on your phone. Navigate to your “Downloads” folder. Then open the .ics file from there.

Kubernetes Services clusterIP and externalIPs with IPTables

A basic ClusterIP type service might have the following elements to its description:

me@myhost:~$ kubectl get service my-service -o json
{
    "apiVersion": "v1",
    "kind": "Service",
    ...
    "spec": {
        "clusterIP": "10.41.0.123",
        "ports": [
            {
                "name": "my-service",
                "port": 6556,
                "protocol": "TCP",
                "targetPort": 6556
            }
        ],
        ...
        "type": "ClusterIP"
    },
    ...
}

How does this clusterIP actually work with iptables?

Here’s the secret: the clusterIP (in this case 10.41.0.123) does not belong to any interface! It is a fiction, an illusion.

Let’s say we have 3 worker nodes (a node being a server, real or virtual, that hosts pods) and each worker node has a pod for this service running on it. That might look a little like the following:

Nodes and pods for service

Nowhere on this picture is the clusterIP (10.41.0.123).

So how does a packet destined for 10.41.0.123 end up at a pod for this service?

The answer is that every single node has a set of iptables NAT table rules that intercept packets destined for the clusterIP and re-write the destination address to that of a pod assigned to the service.

Let’s begin by inspecting the NAT (network address translation) table of iptables on a Kubernetes worker node (only those parts that are of interest will be shown here):

me@myhost:~$ sudo iptables -L -v -n -t nat |less
Chain PREROUTING (policy ACCEPT 19 packets, 1524 bytes)
 pkts bytes target     prot opt in     out     source               destination 
 377M   30G KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT 4 packets, 285 bytes)
 pkts bytes target     prot opt in     out     source               destination
  30M 1834M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination 
    0     0 KUBE-SVC-W3DXGYHAQT4XZCGH  tcp  --  *      *       0.0.0.0/0            10.41.0.123          /* my-service: cluster IP */ tcp dpt:6556

Let’s explain what this does. When a packet first arrives at a node on any interface it enters the PREROUTING chain. If a packet originates from the node itself it enters the OUTPUT chain instead. In both cases the packet has not yet been processed by the routing table (see also diagram at this link).

Regardless of whether the packet enters the PREROUTING or OUTPUT chains it gets sent to the KUBE-SERVICES chain. And this chain looks out for service IPs (and ports).

In the case of a TCP packet destined to IP address 10.41.0.123 and port 6556 it then gets sent to the KUBE-SVC-W3DXGYHAQT4XZCGH chain.

What does this KUBE-SVC-W3DXGYHAQT4XZCGH chain look like?

Chain KUBE-SVC-W3DXGYHAQT4XZCGH (1 references)
 pkts bytes target     prot opt in     out     source               destination 
    0     0 KUBE-SEP-TYCQ62MBFETG3WXG  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* my-service:my-service */ statistic mode random probability 0.33332999982
    0     0 KUBE-SEP-GWC5HBFAKQJPM7YT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* my-service:my-service */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-PM3FH7DWZGHX3JK2  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* my-service:my-service */

This chain distributes incoming packets to different pods based on chance. Each rule matches with probability 1/n, where n is the number of pods remaining at that point (including the one the rule points to). So if there are 4 pods the first rule matches with probability 1/4, the second rule with 1/3, the third rule with 1/2, and the last rule always matches (1/1).
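
For example, with the three pods above a new connection takes the first rule with probability 0.333, the second with (1 - 0.333) × 0.5 ≈ 0.333, and falls through to the last rule with (1 - 0.333) × (1 - 0.5) ≈ 0.333 – so each pod ends up receiving roughly one third of new connections.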

Let’s now take a closer look at the KUBE-SEP-TYCQ62MBFETG3WXG chain which does the actual DNAT (destination address re-writing) to a particular pod:

Chain KUBE-SEP-TYCQ62MBFETG3WXG (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* my-service:my-service */ tcp to:10.22.1.101:6556

What this rule does is re-write the destination address of any packet sent to this chain with the IP address of the pod (in this case my-service-a36d22d87-ge3pk, or 10.22.1.101, from the picture above) and the port of the service.

Once the packet has a different destination (the IP address of the pod) it can be routed normally to the pod.

So although a clusterIP address doesn’t really exist anywhere, any packet destined for the clusterIP address that arrives on a worker node will still be delivered to an appropriate pod.

Packet for clusterIP gets destination changed to pod IP

ExternalIPs

So why have externalIPs?

A clusterIP, in theory, is only known to the Kubernetes cluster. It should be an address local to the cluster and, possibly or even probably, automatically assigned. Which is fine for one pod (say, a web server) in a Kubernetes cluster talking to a different service (say, a database) on the same Kubernetes cluster.

You may wish, however, to expose a manually assigned address so that an external (non-Kubernetes) router will forward packets for that address to a node in the cluster.

The mechanics for externalIPs are almost exactly the same as for a clusterIP – iptables nat table rules are added and, in fact, the KUBE-SERVICES chain rules that match the externalIPs simply send packets to exactly the same SVC chain as the clusterIP rules for that same service.

A service definition looks very similar with externalIPs:

me@myhost:~$ kubectl get service my-service -o json
{
    "apiVersion": "v1",
    "kind": "Service",
    ...
    "spec": {
        "clusterIP": "10.41.0.123",
        "externalIPs": [
            "192.0.2.16"
        ],
        ...
        "type": "ClusterIP"
    },
    ...
}
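
On a worker node you can see the extra rules generated for the externalIP by searching the KUBE-SERVICES chain for that address (192.0.2.16 being the example externalIP above):

me@myhost:~$ sudo iptables -t nat -L KUBE-SERVICES -v -n | grep 192.0.2.16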

Conntrack

So you’ve made a TCP connection to a clusterIP which was DNAT’d (destination network address translated). It was also masqueraded (meaning that when the packet was forwarded the source address was re-written to the IP address of the node). The masquerade rule is there in the iptables nat table (finding it is an exercise for the reader – hint: check the POSTROUTING and KUBE-POSTROUTING chains).

The pod receives a packet from the forwarding node destined to itself; so how does a reply from the destination pod make its way back to the original sender?

The answer is conntrack.

Let’s say another pod (from a different service) on Node A with IP address 10.22.1.87 makes a TCP connection to the clusterIP 10.41.0.123 and port 6556 and that gets DNAT’d to pod my-service-a36d22d87-e2kja with IP address 10.22.1.122 on node B. Note that the forwarded packet may be masqueraded with a source IP of Node A’s IP address (172.33.22.14) instead of the originating pod’s IP address (10.22.1.87). We might get a conntrack entry like the following:

me@myhost:~$ sudo cat /proc/net/nf_conntrack # or conntrack -L -n
tcp      6 113 ESTABLISHED src=10.22.1.87 dst=10.41.0.123 sport=58366 dport=6556 src=10.22.1.122 dst=10.22.1.87 sport=6556 dport=58366 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

This conntrack entry stays alive for a short period of time (usually less than a minute while inactive) and knows how to re-map the source address/port (and destination address/port if necessary) of any replies from the host to which the original packet was DNAT’d (and masqueraded).
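
If you want to watch these entries in practice you can filter the conntrack table by the service port, for example:

me@myhost:~$ sudo grep 'dport=6556' /proc/net/nf_conntrack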

How to Get a List of All Pods for a Given Service in Kubernetes

Let’s say you have a particular service name and you want to know the names of all the pods for that service.

Start by dumping the YAML configuration of the service to find the “selector”:

me@myhost:~ $ kubectl get service my-service-my -o yaml
...
spec:
  ...
  selector:
    ...
    app: my-service
...

Now you can use that selector to construct a pod query for just that service:

me@myhost:~ $ kubectl get pods --selector app=my-service -o custom-columns=:metadata.name
my-service-my-a24eb5222-4qgx2
my-service-my-a24eb5222-8bgqf
my-service-my-a24eb5222-d4bh2
my-service-my-a24eb5222-hk2vj
my-service-my-a24eb5222-trmcc
my-service-my-a24eb5222-p34m8
my-service-my-a24eb5222-pn3qs
my-service-my-a24eb5222-rjtd6
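
If the selector is a single app label, as above, the two steps can be combined into one command with a jsonpath query (this assumes the selector key really is app):

me@myhost:~ $ kubectl get pods \
    --selector "app=$(kubectl get service my-service-my -o jsonpath='{.spec.selector.app}')" \
    -o custom-columns=:metadata.name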

UK Government Gateway Supports TOTP/Google Authenticator

Government Gateway wants you to install an HMRC app, but you can use Google Authenticator variants instead – I use FreeOTP and it works fine.

If you choose the HMRC app as your “backup” authentication option from Government Gateway it will show a QR code. This can be scanned directly into the FreeOTP app. If you want to know what the QR code contains, apt-get install zbar-tools, take a screenshot, and run:

me@myhost:~$ zbarimg /tmp/screenshot.png
QR-Code:otpauth://totp/HMRC:HMRC%20Online?secret=ABCDEFGHIJ123456&issuer=HMRC

You will want to keep a safe copy of this QR image (or the URL it represents) in case you ever need to move the secret to another device to generate the 30 second TOTP codes.
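
If you ever want to generate a code outside a phone app – say, to verify the secret before wiping the old device – the oathtool utility (Ubuntu/Debian package oathtool) can produce the current 30 second TOTP from the base32 secret in that URL. The secret below is just the placeholder from the example output above:

me@myhost:~$ sudo apt-get install oathtool
me@myhost:~$ oathtool --totp -b ABCDEFGHIJ123456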

Copy Metadata From Input to Output in FFmpeg

The documentation for the -map_metadata option for FFmpeg reads:

-map_metadata[:metadata_spec_out] infile[:metadata_spec_in] (output,per-metadata)

Set metadata information of the next output file from infile. Note that those are file indices (zero-based), not filenames. Optional metadata_spec_in/out parameters specify, which metadata to copy. A metadata specifier can have the following forms:

  • g – global metadata, i.e. metadata that applies to the whole file
  • s[:stream_spec] – per-stream metadata. stream_spec is a stream specifier as described in the Stream specifiers chapter. In an input metadata specifier, the first matching stream is copied from. In an output metadata specifier, all matching streams are copied to.
  • c:chapter_index – per-chapter metadata. chapter_index is the zero-based chapter index.
  • p:program_index – per-program metadata. program_index is the zero-based program index.
  • If metadata specifier is omitted, it defaults to global.

    By default, global metadata is copied from the first input file, per-stream and per-chapter metadata is copied along with streams/chapters. These default mappings are disabled by creating any mapping of the relevant type. A negative file index can be used to create a dummy mapping that just disables automatic copying.

    For example to copy metadata from the first stream of the input file to global metadata of the output file:

    ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
    

    To do the reverse, i.e. copy global metadata to all audio streams:

    ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
    

    Note that simple 0 would work as well in this example, since global metadata is assumed by default.

So let’s say we had three input streams, 0:0 (video), 0:1 (audio), and 0:2 (subtitle) and we wanted to copy the metadata for all three streams and the global metadata. We could use:

# -map_metadata:g 0:g      take file 0's global metadata and copy to the output's global metadata
# -map_metadata:s:0 0:s:0  take file 0's stream 0 metadata and copy to the output's stream 0
# -map_metadata:s:1 0:s:1  take file 0's stream 1 metadata and copy to the output's stream 1
# -map_metadata:s:2 0:s:2  take file 0's stream 2 metadata and copy to the output's stream 2
ffmpeg -i input.mov -map 0:0 -map 0:1 -map 0:2 \
  -map_metadata:g 0:g \
  -map_metadata:s:0 0:s:0 \
  -map_metadata:s:1 0:s:1 \
  -map_metadata:s:2 0:s:2 \
  output.mov

Subtitle Codec 94213 is Not Supported in FFmpeg

I was attempting to convert a video file from an Ambarella A7L dashcam to the Matroska (mkv) file format but encountered the following error:

...
    Stream #0:2(eng): Subtitle: mov_text (text / 0x74786574), 0 kb/s (default)
    Metadata:
      creation_time   : 2018-06-01 13:05:26
      handler_name    : Ambarella EXT
...
[matroska @ 0x4754d40] Subtitle codec 94213 is not supported.

In order to retain this subtitle codec as-is (using the -c:s copy option) it is better to use the mp4 file format, which supports this subtitle format.

Or, to retain the subtitle information in the Matroska container, it can be converted to the subrip format using the -c:s srt option.
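
The two approaches look roughly like this (the file names are placeholders):

# keep the mov_text subtitle stream as-is by staying with the mp4 container
ffmpeg -i input.MOV -c copy output.mp4

# or convert the subtitle stream to SubRip so it fits in the Matroska container
ffmpeg -i input.MOV -c:v copy -c:a copy -c:s srt output.mkv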

Setting Up OpenVPN Server for Rooted LineageOS Phone

Installation Steps

Install OpenVPN Server

me@server:~$ sudo apt-get install openvpn
The following NEW packages will be installed:
  liblzo2-2 libpkcs11-helper1 openvpn
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 513 kB of archives.
After this operation, 1,353 kB of additional disk space will be used.
me@server:~$ sudo apt-get install openssl
The following NEW packages will be installed:
  openssl
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 492 kB of archives.
After this operation, 956 kB of additional disk space will be used.

Choose a routed VPN (rather than a bridged one). Use username/password authentication (this still requires a server certificate authority (CA) and a server certificate).

Edit /etc/default/openvpn and add:

# Ensure tunnel device is available
if [ ! -d /dev/net ]; then
  mkdir /dev/net
fi

if [ -d /dev/net ]; then
  if [ ! -e /dev/net/tun ]; then
    mknod /dev/net/tun c 10 200
    chmod 666 /dev/net/tun
  fi
fi

Create a tunnel device:

me@server:~# openvpn --mktun --dev tun0

Create Certificate Authority

me@server:~# mkdir /etc/openvpn/ssl
me@server:~# cd /etc/openvpn/ssl
me@server:/etc/openvpn/ssl# openssl genrsa -out ca.key 2048
Generating RSA private key, 2048 bit long modulus
............+++
................................................................................................+++
e is 65537 (0x10001)
me@server:/etc/openvpn/ssl# openssl req -new -x509 -key ca.key -out ca.crt

Now we have ca.crt (the certificate authority certificate) and ca.key (the key for the certificate authority).

You can get the text form of the certificate (used later for the client configuration) by issuing the command:

me@server:/etc/openvpn/ssl# openssl x509 -in ca.crt -text

… and extracting the portion between (and including) the BEGIN CERTIFICATE and END CERTIFICATE lines.
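
Alternatively, running the same command without -text prints only the PEM block – exactly the part needed later for the client configuration:

me@server:/etc/openvpn/ssl# openssl x509 -in ca.crt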

Create Server Certificate and Sign Using Certificate Authority

me@server:/etc/openvpn/ssl# openssl genrsa -out signing.key 2048
Generating RSA private key, 2048 bit long modulus
.....+++
............................................................+++
e is 65537 (0x10001)
me@server:/etc/openvpn/ssl# openssl rsa -in signing.key -pubout -out signing.pub
writing RSA key
me@server:/etc/openvpn/ssl# openssl req -new -key signing.key -out request.csr

me@server:/etc/openvpn/ssl# openssl x509 -req -days 1500 -in request.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
Signature ok
Getting CA Private Key
me@server:/etc/openvpn/ssl# openssl x509 -in server.crt -noout -text

Now we have server.crt (the server certificate).

Create Diffie-Hellman Key

me@server:/etc/openvpn/ssl# openssl dhparam -out dh1024.pem 1024

Create Static Shared Key

me@server:/etc/openvpn/ssl# openvpn --genkey --secret static.key

Create auth.pl

(taken from the example at /usr/share/doc/openvpn/examples/sample-scripts/auth-pam.pl)

#!/usr/bin/perl -w

use strict;

# Allowed username/password combinations
my %users = (
    'jim' => 'letmein',
    'mary' => 'password123',
);

# Get username/password from file

my $ARG;
if ($ARG = shift @ARGV) {
    if (!open (UPFILE, "<$ARG")) {
        print "Could not open username/password file: $ARG\n";
        exit 1;
    }
} else {
    print "No username/password file specified on command line\n";
    exit 1;
}

my $username = <UPFILE>;
my $password = <UPFILE>;

if (!$username || !$password) {
    print "Username/password not found in file: $ARG\n";
    exit 1;
}

chomp $username;
chomp $password;

close (UPFILE);

if ( $users{$username} && ( $password eq $users{$username} ) ) {
    exit 0;
} else {
    print "Auth '$username' failed.\n";
    exit 1;
}
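
The script needs to be executable by the OpenVPN process, and it can be tested by hand with a temporary credentials file (username on the first line, password on the second – the same via-file layout OpenVPN passes to it):

me@server:~# chmod 755 /etc/openvpn/auth.pl
me@server:~# printf 'jim\nletmein\n' > /tmp/creds
me@server:~# /etc/openvpn/auth.pl /tmp/creds && echo AUTH-OK
AUTH-OK
me@server:~# rm /tmp/creds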

Create server.conf

tls-server
port 1194
proto udp
dev tun
mssfix 576
cipher AES-256-CBC

ca /etc/openvpn/ssl/ca.crt
cert /etc/openvpn/ssl/server.crt
key /etc/openvpn/ssl/signing.key
dh /etc/openvpn/ssl/dh1024.pem

tls-auth /etc/openvpn/ssl/static.key 0

script-security 2 # necessary for auth-user-pass-verify
auth-user-pass-verify /etc/openvpn/auth.pl via-file
client-cert-not-required
username-as-common-name

server 10.44.12.0 255.255.255.0 # subnet for clients
keepalive 10 120
comp-lzo
persist-key
persist-tun
status /var/log/openvpn-status.log
log /var/log/openvpn.log
verb 3

push "redirect-gateway def1"
push "dhcp-option DNS 1.1.1.1" # cloudflare DNS

Enabling/Starting/Stopping OpenVPN

me@server:~# systemctl enable openvpn@server.service
me@server:~# systemctl start openvpn@server.service
me@server:~# systemctl stop openvpn@server.service

Create .ovpn Configuration File for OpenVPN Android Client

tls-client
remote 192.0.2.44 # the IP address of my OpenVPN server
port 1194
proto udp
comp-lzo
auth-user-pass
key-direction 1
mssfix 576
cipher AES-256-CBC

<ca>
-----BEGIN CERTIFICATE-----
...
...
...
-----END CERTIFICATE-----
</ca>

<tls-auth>
-----BEGIN OpenVPN Static key V1-----
...
...
...
-----END OpenVPN Static key V1-----
</tls-auth>

On The Phone

Install “OpenVPN Connect – Fast & Safe SSL VPN Client” by “OpenVPN”.

Then add .ovpn file created above.

Install iptables Scripts

me@server:~# apt-get install iptables-persistent
The following NEW packages will be installed:
  iptables iptables-persistent libnfnetlink0 netfilter-persistent
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 292 kB of archives.
After this operation, 1,804 kB of additional disk space will be used.

Edit /etc/iptables/rules.v4:

###############################################################################
# iptables IPv4 rules for reload on start-up
#
# To Reload:
#   service netfilter-persistent restart
#
# To Test:
#   iptables-restore -t </etc/iptables/rules.v4
###############################################################################

*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

-A POSTROUTING -s 10.44.12.0/24 -o eth0 -j MASQUERADE

COMMIT

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
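
Note that the MASQUERADE rule above only helps if the kernel is actually forwarding packets between tun0 and eth0. If VPN clients can connect but cannot reach the Internet, check that IP forwarding is enabled:

me@server:~# sysctl -w net.ipv4.ip_forward=1
me@server:~# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf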

Issues

external program fork failed

If you see something like the following:

Wed Jun 20 03:33:31 2018 192.0.2.41:39523 WARNING: External program may not be called unless '--script-security 2' or higher is enabled. See --help text or man page for detailed info.
Wed Jun 20 03:33:31 2018 192.0.2.41:39523 WARNING: Failed running command (--auth-user-pass-verify): external program fork failed

… then you probably have a “script-security 1” parameter somewhere in your configuration (after any other script-security 2 directives). Grep for it and comment it out so that your “script-security 2” directive can take effect.
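
For example:

me@server:~# grep -rn 'script-security' /etc/openvpn /etc/default/openvpn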

See Also